Finding Closed Quasigeodesics on Convex Polyhedra†
†A preliminary version of this paper appeared at the 36th International Symposium on Computational Geometry (SoCG 2020) [DHK20].
Abstract
A closed quasigeodesic is a closed curve on the surface of a polyhedron with at most $180°$ of surface on both sides at all points; such curves can be locally unfolded straight. In 1949, Pogorelov proved that every convex polyhedron has at least three (non-self-intersecting) closed quasigeodesics, but the proof relies on a nonconstructive topological argument. We present the first finite algorithm to find a closed quasigeodesic on a given convex polyhedron, which is the first positive progress on a 1990 open problem by O’Rourke and Wyman. The algorithm establishes for the first time an upper bound on the total number of visits to faces (number of line segments) that is polynomial in $n$, $\frac{1}{\varepsilon}$, and $\frac{L}{\ell}$, where $n$ is the number of vertices of the polyhedron, $\varepsilon$ is the minimum curvature of a vertex, $L$ is the length of the longest edge, and $\ell$ is the smallest distance within a face between a vertex and a nonincident edge (minimum feature size of any face). On the real RAM, the algorithm’s running time is also pseudopolynomial. On a word RAM, the running time grows by a factor exponential in $k$, the number of distinct edge lengths in the polyhedron, assuming its intrinsic or extrinsic geometry is given by rational coordinates each with at most $b$ bits. This time bound remains pseudopolynomial for polyhedra with a bounded number of distinct edge lengths, but is exponential in the worst case. Along the way, we introduce the expression RAM model of computation, formalizing a connection between the real RAM and word RAM hinted at by past work on exact geometric computation.
1 Introduction
A geodesic on a surface is a path that is locally shortest at every point, i.e., cannot be made shorter by modifying the path in a small neighborhood. A closed geodesic on a surface is a loop (closed curve) with the same property; notably, the locally shortest property must hold at all points, including the “wrap around” point where the curve meets itself. In 1905, Poincaré [Poi05] conjectured that every convex surface has a non-self-intersecting closed geodesic. (Non-self-intersecting (quasi)geodesics are often called simple (quasi)geodesics in the literature; we avoid this term to avoid ambiguity with other notions of “simple”.) In 1927, Birkhoff [Bir27] proved this result, even in higher dimensions (for any smooth metric on the $n$-sphere). In 1929, Lyusternik and Schnirelmann [LS29] claimed that every smooth surface of genus $0$ in fact has at least three non-self-intersecting closed geodesics. Their argument “contains some gaps” [BTZ83], filled in later by Ballmann [Bal78].
For non-smooth surfaces (such as polyhedra), an analog of a geodesic is a quasigeodesic: a path with at most $180°$ of surface on both sides locally at every point along the path. Equivalently, a quasigeodesic can be locally unfolded to a straight line: on a face, a quasigeodesic is a straight line; at an edge, a quasigeodesic is a straight line after the faces meeting at that edge are unfolded (developed) flat at that edge; and at a vertex of curvature $k > 0$ (that is, a vertex whose sum of incident face angles is $360° - k < 360°$), a quasigeodesic entering the vertex at a given angle can exit it anywhere in an angular interval of length $k$, as in Figure 1. Analogously, a closed quasigeodesic is a loop which is quasigeodesic. In 1949, Pogorelov [Pog49] proved that every convex surface has at least three non-self-intersecting closed quasigeodesics, by applying the theory of geodesics on smooth surfaces to smooth approximations of arbitrary convex surfaces and taking limits.
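To make the vertex rule concrete, the following sketch (in Python; the function name and the degree-based convention are ours, not the paper's) computes the interval of allowed exit directions at a vertex, measured intrinsically from the incoming ray: each side of the entering/exiting pair must receive at most 180° of surface, so the interval has length equal to the curvature.

```python
def exit_interval(total_angle_deg):
    """Allowed exit directions (degrees, measured intrinsically from the
    incoming ray around the vertex) so that at most 180 degrees of surface
    lies on each side.  total_angle_deg is the sum of incident face angles,
    which is 360 - k at a convex vertex of curvature k > 0."""
    lo = max(0.0, total_angle_deg - 180.0)
    hi = min(total_angle_deg, 180.0)
    return (lo, hi)

# A vertex of curvature 90 (total incident angle 270): exits anywhere in
# [90, 180], an interval of length 90 = the curvature.
assert exit_interval(270.0) == (90.0, 180.0)
```

At a smooth point (total angle 360°) the interval would degenerate to a single direction, recovering the ordinary geodesic condition.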

The existence proof of three closed quasigeodesics is nonconstructive, because the smooth argument uses a nonconstructive topological argument (a homotopy version of the intermediate value theorem). (A proof sketch for the existence of one closed geodesic on a smooth convex surface is as follows. By homotopy, there is a transformation of a small clockwise loop into its (counterclockwise) reversal that avoids self-intersection throughout. Consider the transformation that minimizes the maximum arclength of any loop during the transformation. By local cut-and-paste arguments, the maximum-arclength intermediate loop is in fact a closed geodesic. The same argument can be made for the nonsmooth case.) In 1990, Joseph O’Rourke and Stacia Wyman posed the problem of finding a polynomial-time algorithm to find any closed quasigeodesic on a given convex polyhedron (aiming in particular for a non-self-intersecting closed quasigeodesic) [O’R20]. This open problem was stated during the open problem session at SoCG 2002 (by O’Rourke) and finally appeared in print in 2007 [DO07b, Open Problem 24.24]. Two negative results described in [DO07b, Section 24.4] are that an $n$-vertex polyhedron can have many non-self-intersecting closed quasigeodesics [DO07a, Theorem 24.4.1] (an unpublished result by Aronov and O’Rourke) and that, in general, a convex polyhedron's shortest closed geodesic need not be composed of shortest paths (an unpublished result from the discussion at SoCG 2002).
Even a finite algorithm is not obvious. Since the conference version of this paper [DHK20], two other approaches have been developed. Sharp and Crane [SC20] gave a practical heuristic based on edge flipping, but it has no proof of convergence. Chartier and de Mesmay [CdM22] gave a finite guaranteed-correct algorithm, but the running time is pseudo-exponential (exponential in both $\frac{L}{\ell}$ and $\frac{1}{\varepsilon}$, defined below).
One tempting approach is to find two (or more) shortest paths whose union is a closed quasigeodesic. For example, the source unfolding [AAOS97, DO07b] (Voronoi diagram) from a polyhedron vertex $v$ consists of all points on the polyhedron having multiple shortest paths to $v$, as in Figure 2. Can we find a point $x$ on the source unfolding and two shortest paths between $x$ and $v$ whose union forms a closed quasigeodesic? We know that there is a closed quasigeodesic through some vertex $v$, because if we have a closed quasigeodesic through no vertices, we can translate it until it hits at least one vertex. But there might not be any choice of $x$ that makes the two shortest paths meet at sufficiently straight angles at both $x$ and $v$, as in Figure 2.

A more general approach is to argue that there is a closed quasigeodesic consisting of at most some bounded number $f(n)$ of segments on faces. If true, there are finitely many combinatorial types of quasigeodesics to consider, and each can be checked via the existential theory of the reals (in exponential time), resulting in an exponential-time algorithm. But we do not know any such bound $f(n)$. It seems plausible that the “short” closed quasigeodesics from the nonconstructive proofs satisfy such a bound, say a polynomial one, but as far as we know the only proved property about them is that they are non-self-intersecting, which does not obviously suffice: a quasigeodesic can wind many times around a curvature-bisecting loop, turn around, and symmetrically unwind, all without collisions, as in Figure 3. Polyhedra such as isosceles tetrahedra have arbitrarily long non-self-intersecting closed geodesics (and even infinitely long non-self-intersecting geodesics) [IRV19], so the only hope is to find an upper bound on some (fewest-edge) closed quasigeodesic.

1.1 Our Results
We develop in Section 3 the first finite algorithm that finds a closed quasigeodesic on a given convex polyhedron, using a number of real-RAM operations (arithmetic and square roots) polynomial in $n$, $\frac{1}{\varepsilon}$, and $\frac{L}{\ell}$, where $n$ is the number of vertices of the polyhedron, $\varepsilon$ is the smallest curvature at a vertex, $L$ is the length of the longest edge, and $\ell$ is the smallest distance within a face between a vertex and a nonincident edge (minimum feature size of any face). Furthermore, the found closed quasigeodesic consists of a similarly bounded number of segments on faces, which is the first finite upper bound on the number of segments needed. Indeed, these time and segment bounds are both pseudopolynomial.
The real-RAM model of computation is common in computational geometry, but it is an unrealistic model for digital computers, which are restricted to finite-precision computation. We introduce in Section 2 a model of computation for realistic manipulation of radical real numbers, called the expression RAM. This model provides a simple interface for manipulation of radical expressions (arithmetic, roots, and comparisons), similar to the real RAM. The key feature is that the expression RAM model can be implemented on top of either the real RAM or the word RAM (the standard model for digital computers), with different operation costs. In particular, we give the first general transformation of a real-RAM algorithm into a word-RAM algorithm with “only” singly exponential slowdown. Furthermore, when every involved expression has bounded height and contains at most a bounded number of distinct roots (e.g., square roots) and integers of at most $b$ bits, the transformation has much smaller slowdown, polynomial in $b$. For example, real-RAM expressions of constant size can be implemented with polynomial slowdown, preserving polynomial time bounds. (An earlier version of this paper [DHK20] mistakenly claimed that our closed-quasigeodesic algorithm fit within this restricted expression RAM, and thus mistakenly claimed that the running time remained pseudopolynomial on the word RAM.) These results formalize and analyze the existing practical work on exact geometric computation pioneered by LEDA/CGAL reals [BFM+01] and CORE reals [KLPY99].
Applied to our algorithm for finding a closed quasigeodesic, we obtain a word-RAM running time that is pseudopolynomial except for a factor exponential in $k$, where $k$ is the number of distinct edge lengths and $b$ is the maximum number of bits in a coordinate of a rational-coordinate representation of the intrinsic metric or extrinsic vertex coordinates of the input polyhedron. (In fact, our algorithm supports more general expression inputs for either representation, but the time bound gets more complicated.) When the number of distinct edge lengths is bounded, this running time is still pseudopolynomial; for example, zonohedra (Minkowski sums of 3D vectors) whose generating vectors have a bounded number of distinct lengths satisfy this condition, resulting in pseudopolynomial running time. In the worst case, however, the running time is exponential in the input size. Nonetheless, this is the first finite algorithm for finding a closed quasigeodesic, so getting it to run at all on the word RAM is interesting.
In fact, it is unlikely that a closed quasigeodesic can be found in worst-case subexponential time on the word RAM until we resolve the famous sum-of-square-roots open problem [DMO09]. For example, comparing the lengths of two paths each given by a sequence of segments on faces (as required by any shortest-path algorithm) requires comparing two sums of square roots, which is not known to be solvable in subexponential time. (For the same reason, the natural decision versions of Euclidean shortest paths and minimum spanning tree are not known to be in NP.) For closed quasigeodesics, it seems necessary to answer decision problems such as “is there a geodesic path connecting two given vertices that visits the given sequence of edges/faces in order?” While we do not know a reduction from sum-of-square-roots to this problem, the most natural solution, unfolding the faces one after the other, involves arithmetic (sums and multiplications) over the lengths of the edges, which are square roots.
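To illustrate the difficulty, the following sketch (Python; the helper names and the arbitrary fixed precision are our own) compares two sums of square roots exactly using only integer arithmetic. The catch is the precision parameter: no subexponential bound is known on the number of bits needed to separate two unequal sums, which is precisely the sum-of-square-roots problem.

```python
from fractions import Fraction
from math import isqrt

def sqrt_interval(n, p):
    """Enclose sqrt(n) (n a nonnegative integer) in an interval of width
    2**-p, using only integers: floor(2**p * sqrt(n)) = isqrt(n * 4**p)."""
    s = isqrt(n << (2 * p))
    return Fraction(s, 1 << p), Fraction(s + 1, 1 << p)

def compare_sqrt_sums(xs, ys, p=64):
    """Sign of sum(sqrt(x) for x in xs) - sum(sqrt(y) for y in ys), assuming
    p bits suffice to separate unequal sums (returns 0 if undetermined)."""
    lo = sum(sqrt_interval(x, p)[0] for x in xs) \
         - sum(sqrt_interval(y, p)[1] for y in ys)
    hi = sum(sqrt_interval(x, p)[1] for x in xs) \
         - sum(sqrt_interval(y, p)[0] for y in ys)
    if lo > 0:
        return 1
    if hi < 0:
        return -1
    return 0  # intervals overlap: equal, or precision p was insufficient

# sqrt(2) + sqrt(3) = 3.146... versus sqrt(1) + sqrt(4) = 3
assert compare_sqrt_sums([2, 3], [1, 4]) == 1
```

Raising `p` refines the answer, but for adversarial inputs the precision needed to certify a strict inequality is not known to be subexponential in the input size.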
2 Models of Computation
In this section, we review standard models of computation and introduce a new model — the expression RAM — that makes it easy to state an algorithm that manipulates real numbers while grounding it in computational feasibility. The expression RAM is essentially a form of the real RAM, and indeed the expression RAM can be implemented on top of the real RAM (except for possible lack of floor/ceiling operations). More interesting is that the expression RAM can be implemented on top of the word RAM, with different operation costs; see Figure 4. Thus we can express a single algorithm in the expression RAM, and then analyze its running time on both the real RAM and word RAM.

Our approach is essentially the “exact geometric computation” framework of LEDA/CGAL reals [BFM+01] and CORE reals [KLPY99]. So while the model and associated algorithms have essentially been described before (and even implemented in code), they have not previously been analyzed to the level of precise costs in the word-RAM model. We expect that the expression RAM can be applied to analyze many algorithms in computational geometry, both old and new, making it a contribution of independent interest.
2.1 Standard Models: Real RAM and Word RAM
First we recall the two most common models of computation in computational geometry and data structures, respectively: the real RAM and the word RAM.
Both models define a Random Access Machine (RAM) to have a memory consisting of an array $M$ of cells (where the first $O(1)$ cells act as “registers”), but they differ in what the cells can store. The RAM supports the $O(1)$-time operations “$M[i] \leftarrow M[M[j]]$” (read) and “$M[M[j]] \leftarrow M[i]$” (write), where $i$ and $j$ are constant integers representing register indices, and the address $M[j]$ is promised to be an integer (or else the operation fails). When we later allow operations on cell values, such as addition, these operations are actually restricted on a RAM to be of the form “$M[i] \leftarrow M[j] + M[k]$” where $i$, $j$, and $k$ are all constant integers (again representing register indices).
The real RAM model (dating back to Shamos’s original thesis [Sha78]) allows storage and manipulation of black-box real numbers supporting arithmetic operations ($+$, $-$, $\times$, $\div$), radical operations ($\sqrt[k]{\cdot}$), and comparisons ($<$, $=$, $>$) in constant time, in addition to storage and manipulation of integers used for indexing. (Shamos’s original description of the real RAM [Sha78] supports more general operations such as $\exp$ and $\log$, from which one can derive roots, but these features do not seem to be in common use in computational geometry. The original description also supports trigonometric functions, but this has fallen out of favor because of related undecidability results [Lac03] described below.) While popular in computational geometry for its simplicity, this model is not directly realistic for a digital computer, because it does not bound the required precision, which can grow without bound. For example, the real RAM model crucially does not support converting black-box real numbers into integers (e.g., via the floor function), or else one could solve PSPACE [Sch79] and #SAT [BMS85] problems in polynomial time. The real RAM (without floor) is one of the models we will use to analyze our algorithm. (Refer ahead to Table 1, last column, for the exact operations we allow on a real RAM.)
The word RAM [FW93] allows storage and manipulation of $w$-bit integers (called words) supporting arithmetic ($+$, $-$, $\times$, $\div$, $\bmod$), bitwise operations (AND, OR, XOR, shifts), and comparisons ($<$, $=$, $>$) in constant time. Furthermore, the model makes the transdichotomous assumption that $w \ge \log n$, where $n$ is the problem size (named for how it bridges the problem and machine [FW93]). This assumption ensures that a single word can address the memory required to store the input; it is essentially the minimal assumption necessary to enable efficient random access on a RAM. Technically, we assume that the real RAM also includes all word-RAM operations (i.e., the real RAM can store and manipulate both black-box real numbers and $w$-bit words where $w \ge \log n$), so that it can support random access and standard data structures. (Shamos’s original description of the real RAM [Sha78] says that it supports “indirect accessing of memory (integer addresses only)”, presumably with the intent that such addresses should be formed via arithmetic on integers (as opposed to real arithmetic that happens to result in integers, which would be difficult to detect). Adding the entire uncontroversial word-RAM model (which did not exist in 1978) enables more flexible address manipulation (e.g., word division and mod), so that the real RAM is just about adding black-box real numbers.)
The word RAM is the standard model of computation in data structures and much of theoretical computer science, capturing the word-level parallelism in real computers. But it is highly restrictive for computational geometry because it restricts the inputs to be integers or rationals, whereas we normally want to allow continuous real inputs (e.g., a point set or a polygon). Recent work starting with [CP09, CP10] achieves strong results in computational geometry on the word RAM, but not all problems adapt well to integer or rational inputs. For example, in our quasigeodesics problem, should the input consist of vertex coordinates restricted to be integers, or an intrinsic polyhedron metric restricted to have integer edge lengths? Each answer defines a different subset of problems, and neither obviously captures all instances that we might want to solve.
Our solution is to build a new model called the “expression RAM”. On the one hand, this model has a built-in notion of restricted real numbers, thereby encompassing a form of the real RAM. This enables problem inputs to be given as certain real numbers, including integers and rationals but also roots (though these incur a cost in any computation involving them). As we show in Section 2.4, this model provides reductions between the different possible representations of polyhedral inputs, though at some cost. On the other hand, our model can be implemented directly on the word RAM (or the real RAM) with appropriate operation costs, so its algorithms can be realistically run on digital computers. We believe our model is the first to unify the simplicity of the real RAM with guaranteed algorithmic performance as measured on the word RAM, though our approach is closely based on the theory and practice of exact geometric computation [BFM+01, KLPY99].
2.2 Expression RAM
The expression RAM (Random Access Machine) is a RAM model of computation allowing storage and manipulation of real radical expressions over integers, called “expressions” for short. More precisely, an expression is a real number specified by an ordered DAG with a specified source node (representing the overall value of the expression), where each leaf corresponds to an integer, each two-child node corresponds to a binary operator among $+$, $-$, $\times$, $\div$, and each one-child node corresponds to a unary operator $\sqrt[k]{\cdot}$ for an integer $k \ge 2$ stored within the node. (We use the term “source” instead of the more standard “root” to avoid confusion with nodes that represent roots in radical expressions. In our quasigeodesics algorithm, we only need square roots, but we define general fixed roots for potential future uses in other algorithms. Another potential addition to the model is the expression “$i$th smallest real root of the degree-$d$ polynomial with coefficients given by expressions” [BFM+01, KLPY99], but this would require additional work which we do not carry out here.) A special case of an expression is an integer expression, which consists of a single integer (leaf) and no operators. Like the real RAM, we define the expression RAM to extend the word RAM, so it also allows storage and manipulation of $w$-bit words, where $w$ is a model parameter satisfying $w \ge \log n$. To unify the two data types, we define a $b$-bit integer expression to be explicitly encoded as $\lceil b/w \rceil$ words of $w$ bits, which can be manipulated as usual by word-RAM operations. In particular, we can convert words into integer expressions and vice versa in constant time per word.
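As a concrete illustration, here is a minimal sketch of an expression DAG in Python (the class and constructor names are ours, not the paper's); note that subexpressions may be shared, so the structure is a DAG rather than a tree:

```python
class Expr:
    """Node of an expression DAG.  A node may have several parents,
    allowing shared subexpressions."""
    def __init__(self, op, children=(), value=None, degree=None):
        self.op = op            # 'int', '+', '-', '*', '/', or 'root'
        self.children = children
        self.value = value      # the integer, when op == 'int'
        self.degree = degree    # the k of a k-th root, when op == 'root'

def num(n):        return Expr('int', value=n)
def add(a, b):     return Expr('+', (a, b))
def mul(a, b):     return Expr('*', (a, b))
def root(a, k=2):  return Expr('root', (a,), degree=k)

# sqrt(2) * sqrt(2), sharing a single sqrt(2) node in the DAG:
s = root(num(2))
e = mul(s, s)
assert e.children[0] is e.children[1]  # shared subexpression: DAG, not tree
```

Constructing such a node is $O(1)$ work, matching the intent of Operation O1: no numerical value is computed until the DAG is fed to a sign, approximation, or floor operation.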
Table 1 defines the expression-RAM operations for forming and evaluating expressions. Constructing expressions (Operation O1) is essentially free, but also does not compute anything other than an expression DAG, which can then be input into a computation (Operations O2, O3, and O4). Roughly speaking, Operation O3 computes the integer part and the first $p$ bits of the fractional part of an expression (useful e.g. for reporting or plotting approximate values), but the last bit may be incorrect. An incorrect last bit is especially problematic when all the other bits are zero, but it turns out that in this case the correct last bit must in fact be zero, because a nonzero expression cannot lie that close to zero by the separation bound of Theorem 2.1 (in terms of the function $\mathrm{bb}$ defined in (1) in Section 2.2.1 below). Operations O2 and O4 (which are built on Operation O3) show that this enables exact computation of sign, floor, and ceiling, despite these depending on the value of the last bit. (In our quasigeodesic algorithm, we do not actually need the floors and ceilings of Operation O4, or the approximations of Operation O3, but we describe how to easily implement them for possible future use.)
The expression-RAM operations of Table 1 can be implemented on the real RAM by representing an “expression” as a built-in real number, except for Operation O4 (assuming no floor or ceiling operation is available, as motivated in Section 2.1). Thus we can state a single algorithm in terms of Operations O1, O2, and O3, and then analyze the running time both on the real RAM and on the word RAM. The rightmost column of Table 1 gives the real-RAM cost of each operation. Operations O1 and O2 are directly supported in one operation on the real RAM (where Operation O1 now actually computes the real number resulting from the expression). Operation O3 requires a bit more effort, and will be analyzed in Lemma 2.2 below.
Table 1: Expression-RAM operations, with their time costs on the word RAM (in the recursive cost model of Section 2.2.2 and the simple cost model of Section 2.2.3) and on the real RAM.

O1. Given two expressions $E_1$ and $E_2$ (possibly integer expressions) and a positive integer $k$, construct the expression $E_1 + E_2$, $E_1 - E_2$, $E_1 \times E_2$, $E_1 \div E_2$, or $\sqrt[k]{E_1}$. [Word RAM: $O(1)$. Real RAM: $O(1)$.]

O2. Given an expression $E$, compute the sign of $E$, i.e., whether it is zero, positive, or negative. [Word RAM: see Sections 2.2.2–2.2.3. Real RAM: $O(1)$.]

O3. Given an expression $E$ and a positive integer $p$ (cf. (1)), $p$-compute $E$: compute an interval $[q_1, q_2]$ of rational numbers (represented by quotients of integer expressions) such that $[q_1, q_2]$ contains the value of $E$ and has length at most $2^{-p}$. [Word RAM: see Sections 2.2.2–2.2.3. Real RAM: see Lemma 2.2.]

O4. Given an expression $E$, compute the integer expression $\lfloor E \rfloor$ or $\lceil E \rceil$. [Word RAM: see Sections 2.2.2–2.2.3. Real RAM: n/a.]

In Operations O2, O3, and O4, if $E$ contains any invalid computation — division by zero, or even roots of negative numbers — then these computations simply produce a special result of “undefined” (following [BFM+01]).
In the remainder of this section, we define and prove the various time bounds in Table 1, including two different cost models on the word RAM. In particular, we develop algorithms for the expression-RAM operations on the real RAM and word RAM, and analyze these algorithms to prove the claimed time bounds in the underlying models. Our “recursive” cost model (Section 2.2.2) specifies the running time of expression-RAM operations on the word RAM in terms of a recurrence over the structure of the expression. This cost model is more difficult to work with, but may be of use in certain situations where the expressions have specific form. It will also serve as the foundation for our “simple” cost model (Section 2.2.3), which bounds the running time of expression-RAM operations on the word RAM in terms of finitely many parameters. This model also includes algorithms to remove duplicate roots in an expression, so that the running time can depend on the number of distinct roots.
It is unlikely that trigonometric functions could be added to the expression RAM, as it is unclear how to bound their separation from zero in general and thus obtain word-RAM algorithms for Operation O2. For example, it is undecidable to determine whether a single-variable function built from the operators $+$, $-$, $\times$, $\sin$, and composition is always nonnegative [Lac03].
2.2.1 Expression Bounds: $\mathrm{bb}$, $\mathrm{cc}$, and $\mathrm{dd}$
At the center of our approach is a “separation bound” from [BFM+01] that limits how close to zero an expression can be without actually being zero. We express this bound in terms of three functions from expressions to positive integers, which are simplifications of functions from [BFM+01, Theorem 1]. Specifically, define the functions $\mathrm{bb}$ (“bit bound”), $\mathrm{cc}$ (“calculation complexity”), and $\mathrm{dd}$ (“degree doom”) recursively over the structure of the expression:

(1) $\mathrm{bb}(E)$ bounds the bits of the integers arising in evaluating $E$: for an integer leaf $N$, $\mathrm{bb}(N)$ is the bit length of $N$ (at least $1$); $\mathrm{bb}(E_1 \pm E_2) = \mathrm{bb}(E_1) + \mathrm{bb}(E_2) + 1$; $\mathrm{bb}(E_1 \times E_2) = \mathrm{bb}(E_1 \div E_2) = \mathrm{bb}(E_1) + \mathrm{bb}(E_2)$; and $\mathrm{bb}(\sqrt[k]{E_1}) = \mathrm{bb}(E_1)$.

(2) $\mathrm{cc}(E)$ counts the calculations in $E$: $\mathrm{cc}(N) = 1$; $\mathrm{cc}(E_1 \circ E_2) = \mathrm{cc}(E_1) + \mathrm{cc}(E_2) + 1$ for each binary operator $\circ$; and $\mathrm{cc}(\sqrt[k]{E_1}) = \mathrm{cc}(E_1) + 1$.

(3)–(4) $\mathrm{dd}(E)$ multiplies the squared degrees of the roots in $E$: $\mathrm{dd}(N) = 1$; $\mathrm{dd}(E_1 \circ E_2) = \mathrm{dd}(E_1) \cdot \mathrm{dd}(E_2)$; and $\mathrm{dd}(\sqrt[k]{E_1}) = k^2 \cdot \mathrm{dd}(E_1)$.

(The recursive upper bound for $\mathrm{dd}$ is exact for trees, but over-counts for DAGs.)
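For illustration, here is a sketch of tree-recursive $\mathrm{bb}$ and $\mathrm{dd}$ functions in Python, following the BFMS-style recurrences that these definitions simplify. This is our own reconstruction (the exact base cases and constants may differ from the paper's), and a careful implementation would memoize over the DAG to avoid the over-counting noted above.

```python
# Expressions encoded as nested tuples (our own encoding, not the paper's):
# ('int', n), (op, e1, e2) for op in '+', '-', '*', '/', or ('root', k, e1).

def bb(e):
    """Bit bound, in the style of definition (1) (our reconstruction)."""
    if e[0] == 'int':
        return max(1, abs(e[1]).bit_length())
    if e[0] in ('+', '-'):
        return bb(e[1]) + bb(e[2]) + 1
    if e[0] in ('*', '/'):
        return bb(e[1]) + bb(e[2])
    return bb(e[2])                 # ('root', k, e1): roots do not grow bb

def dd(e):
    """Degree doom: multiplies squared root degrees.  Tree-recursive, so a
    root node shared within a DAG is over-counted, as noted above."""
    if e[0] == 'int':
        return 1
    if e[0] == 'root':
        return e[1] ** 2 * dd(e[2])
    return dd(e[1]) * dd(e[2])

# For sqrt(2) - 1 (a nonzero value), bb and dd feed the separation bound
# limiting how close the value 0.4142... could be to zero.
e = ('-', ('root', 2, ('int', 2)), ('int', 1))
assert bb(e) == 4 and dd(e) == 4
```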
Crucially, these functions provide bounds on how large an expression can get, and on how close to zero a nonzero expression can get:

Theorem 2.1.
If an expression $E$ has a defined value, then $|E| \le 2^{\mathrm{bb}(E)\,\mathrm{dd}(E)}$; and if that value is also nonzero, then $|E| \ge 2^{-\mathrm{bb}(E)\,\mathrm{dd}(E)}$.
Proof.
In fact, [BFM+01, Theorem 1] states that

$u(E)\, l(E)^{\mathrm{dd}(E)-1} \;\ge\; |E| \;\ge\; \bigl(l(E)\, u(E)^{\mathrm{dd}(E)-1}\bigr)^{-1} \quad \text{(the latter when } E \ne 0\text{)},$ (7)
where $\mathrm{dd}$ is defined the same, and $u(E)$ and $l(E)$ are defined recursively as in the cases below. (The functions $u$ and $l$ from [BFM+01] are used only within our proof of Theorem 2.1, and are unrelated to the rational numbers $q_1$ and $q_2$ output by Operation O3.) We have simplified the statement of the theorem to give a rough upper bound that applies in all cases. The base case (for integer expressions) has been modified to guarantee $u(E), l(E) \ge 1$, which allows us to simplify some of the recursive formulas. We have also loosened the bound by exponentiating both $u(E)$ and $l(E)$ by $\mathrm{dd}(E)$ (which is also larger than needed).
We prove by structural induction on the expression $E$ that $u(E) \le 2^{\mathrm{bb}(E)}$ and $l(E) \le 2^{\mathrm{bb}(E)}$. First note that $u(E) \ge 1$ and $l(E) \ge 1$ always. Then we proceed in cases:
1. If $E$ is an integer expression $N$, then $u(E) = \max(|N|, 1)$ and $l(E) = 1$, so $u(E), l(E) \le 2^{\mathrm{bb}(E)}$.

2. If $E = E_1 \pm E_2$, then $u(E) = u(E_1)\,l(E_2) + l(E_1)\,u(E_2)$ and $l(E) = l(E_1)\,l(E_2)$, so
$u(E) \le 2^{\mathrm{bb}(E_1) + \mathrm{bb}(E_2)} + 2^{\mathrm{bb}(E_1) + \mathrm{bb}(E_2)} = 2^{\mathrm{bb}(E_1) + \mathrm{bb}(E_2) + 1}.$
Because $\mathrm{bb}(E) = \mathrm{bb}(E_1) + \mathrm{bb}(E_2) + 1$, we have that $u(E), l(E) \le 2^{\mathrm{bb}(E)}$, as claimed.

3. If $E = E_1 \times E_2$, then $u(E) = u(E_1)\,u(E_2)$ and $l(E) = l(E_1)\,l(E_2)$, so
$u(E), l(E) \le 2^{\mathrm{bb}(E_1) + \mathrm{bb}(E_2)} = 2^{\mathrm{bb}(E)},$
as claimed.

4. If $E = E_1 \div E_2$, then $u(E) = u(E_1)\,l(E_2)$ and $l(E) = l(E_1)\,u(E_2)$, so
$u(E), l(E) \le 2^{\mathrm{bb}(E_1) + \mathrm{bb}(E_2)} = 2^{\mathrm{bb}(E)},$
as claimed.

5. If $E = \sqrt[k]{E_1}$ and $u(E_1) \ge l(E_1)$, then $u(E) = \bigl(u(E_1)\,l(E_1)^{k-1}\bigr)^{1/k}$ and $l(E) = l(E_1)$, so $u(E), l(E) \le 2^{\mathrm{bb}(E_1)} = 2^{\mathrm{bb}(E)}$, as claimed.

6. If $E = \sqrt[k]{E_1}$ and $u(E_1) < l(E_1)$, then $u(E) = u(E_1)$ and $l(E) = \bigl(l(E_1)\,u(E_1)^{k-1}\bigr)^{1/k}$, so $u(E), l(E) \le 2^{\mathrm{bb}(E_1)} = 2^{\mathrm{bb}(E)}$, as claimed.
Finally, (7) shows that $|E| \le u(E)\,l(E)^{\mathrm{dd}(E)-1}$, which is at most $2^{\mathrm{bb}(E)\,\mathrm{dd}(E)}$; and that a nonzero $|E|$ is at least $\bigl(l(E)\,u(E)^{\mathrm{dd}(E)-1}\bigr)^{-1}$, which is at least $2^{-\mathrm{bb}(E)\,\mathrm{dd}(E)}$, as desired. ∎
We can use this separation bound to analyze the only nontrivial expression-RAM operation on the real RAM:
Lemma 2.2.
Operation O3 can be implemented in $O(\mathrm{bb}(E)\,\mathrm{dd}(E) + p)$ time on a real RAM (without floor or ceiling).
Proof.
We start with the interval $[q_1, q_2] = [-2^{\mathrm{bb}(E)\,\mathrm{dd}(E)}, 2^{\mathrm{bb}(E)\,\mathrm{dd}(E)}]$, which by Theorem 2.1 contains the given real number $x$. Then we perform a binary search on the interval. At each step, we compute the midpoint $m = \frac{q_1 + q_2}{2}$ via Operation O1, compare $m$ against $x$ via Operation O2, and set $q_1$ or $q_2$ to $m$ according to whether $x \ge m$ or $x < m$. Each step takes $O(1)$ time, preserves $x \in [q_1, q_2]$, and reduces the interval length by a factor of $2$. After $\mathrm{bb}(E)\,\mathrm{dd}(E) + 1$ steps, the interval length is $1$. After another $p$ steps, the interval length is $2^{-p}$. The total number of steps and thus running time is $O(\mathrm{bb}(E)\,\mathrm{dd}(E) + p)$. ∎
Corollary 2.3.
2.2.2 Recursive Cost Model
In the recursive cost model, the expression-RAM operations have the following time costs on the word RAM. Operation O1 runs in $O(1)$ time, Operation O2 runs in $O(T(E, \mathrm{bb}(E)\,\mathrm{dd}(E)))$ time, and Operation O3 runs in $O(T(E, p))$ time, where $T(E, p)$ is given by the following recurrence:
(8): [recurrence for $T(E, p)$, with one case for each type of node at the source of $E$, recursing on the children of that node at increased precision]
Here $B(E, p)$ represents the maximum number of bits in the numerator or denominator of $q_1$ or $q_2$ in the interval $[q_1, q_2]$ output by Operation O3 on input $E$ and $p$; $B(E, p)$ is given by another recurrence, with the same recursive structure as $T$ but differing in the additive terms:

(9): [recurrence for $B(E, p)$, matching the case structure of (8)]
To give some intuition about how $T$ and $B$ behave, we prove that their dependence on $p$ is at most linear:
Lemma 2.4.
For a fixed expression $E$, $T(E, p)$ is an affine function $a\,p + c$ of $p$, where both $a$ and $c$ are nonnegative integers. The same is true for $B(E, p)$.
Proof.
Let $p$ represent the parameter at the top level of the recurrence. The recurrences for $T$ and $B$ are all sums of terms. Expanding out all occurrences of $T$ and $B$, we obtain a sum of terms that are constants or derived precision values, possibly multiplied by positive integer factors (dependent only on $E$). Each derived precision value is formed from $p$ by repeatedly adding nonnegative quantities that depend only on $E$, or multiplying by positive integers. By induction, every derived value is an affine function of $p$ with nonnegative integer coefficients, and thus so is each term. Therefore the sum of terms resulting from expanding $T(E, p)$ or $B(E, p)$ is also such an affine function. ∎
Corollary 2.5.
For any constant $c \ge 1$, $T(E, c\,p) = O(T(E, p))$. The same is true for $B(E, p)$.
We also analyze how quickly $T$ grows when $E$ gains an operation:
Lemma 2.6.
For two expression DAGs $E_1$ and $E_2$ combined by a single operation into $E = E_1 \circ E_2$, we have $T(E, p) = O(T(E_1, p) + T(E_2, p))$.
Proof.
First we show that $\mathrm{bb}(E)\,\mathrm{dd}(E)$ exceeds $\mathrm{bb}(E_1)\,\mathrm{dd}(E_1)$ and $\mathrm{bb}(E_2)\,\mathrm{dd}(E_2)$ by at most a constant factor, by (3), by (2), and because $\mathrm{bb}, \mathrm{cc}, \mathrm{dd} \ge 1$.

Now we expand $T(E, p)$ at the source node of $E$, where the last inequality follows by inspection of (8). Therefore $T(E, p) = O(T(E_1, p') + T(E_2, p'))$ for a derived precision $p'$ that exceeds $p$ by an amount depending only on $E$. By Lemma 2.4, $T(E_i, p) = a_i\,p + c_i$ for nonnegative integers $a_i$ and $c_i$ (which depend on $E_i$). Thus we can bound the sum by a constant times $T(E_1, p) + T(E_2, p)$.

In conclusion, $T(E, p) = O(T(E_1, p) + T(E_2, p))$, as desired. ∎
Now that we have stated the target time bounds, we develop algorithms for the expression-RAM operations and prove that they achieve these bounds. First we need a lemma for working with arbitrary-precision integers and rationals.
Lemma 2.7.
In $O\bigl(\frac{b}{w} \log b\bigr)$ time on the word RAM, we can add, subtract, multiply, integer divide, and compare $b$-bit integers; and we can add, subtract, multiply, divide, and compare rationals represented as quotients of $b$-bit integers.
Proof.
Integer addition and subtraction can be implemented using the grade-school algorithms, working in base $2^w$, with a running time of $O\bigl(\lceil b/w \rceil\bigr)$. Integer comparison follows from subtraction and checking the resulting sign. A quasilinear-time algorithm for integer multiplication on a word RAM goes back to Schönhage in 1980 [Sch80]; see also [Für14] and [Knu69, §4.3.3.C, p. 311]. Furthermore, integer division can be implemented by a series of integer multiplications of geometrically decreasing size [Knu69, §4.3.3.D, pp. 311–313], so it runs within the same time bound on a word RAM.
Each rational arithmetic operation reduces to $O(1)$ integer arithmetic operations via the grade-school identities:
$\frac{a}{b} \pm \frac{c}{d} = \frac{a d \pm c b}{b d}, \qquad \frac{a}{b} \cdot \frac{c}{d} = \frac{a c}{b d}, \qquad \frac{a}{b} \div \frac{c}{d} = \frac{a d}{b c}.$
The sign of a rational number is the product of the signs of its numerator and denominator, so we can support comparisons via subtraction. (We do not bother to reduce fractions, as that does not help in the worst case.) ∎
As the proof shows, the cost of integer addition, subtraction, and comparison can be tightened to $O\bigl(\lceil b/w \rceil\bigr)$ time. Recently, $b$-bit integer multiplication was shown to be possible in $O(b \log b)$ bit operations [HvdH19], but to our knowledge it remains open whether matching bounds are achievable on the word RAM (ideally, $O\bigl(\lceil b/w \rceil\bigr)$ time). Thus we opt for the simpler universal upper bound of Lemma 2.7 for all operations, though perhaps this could be lowered in the future.
Now we implement the core expression-RAM operation, Operation O3. Operations O2 and O4 will be implemented on top of this operation.
Theorem 2.8.
Operation O3 can be implemented on a word RAM within the time bound (8) of the recursive cost model.
Proof.
We show how to $p$-compute an expression $E$ recursively in cases on the operator at its source node. Roughly speaking, each expression asks its children expressions for increasing precision so that, when we combine the intervals from the children, we meet the desired interval length bound at the parent. Along the way, we keep track of how many bits are needed to represent the intervals themselves. We apply standard identities for interval arithmetic, specifically addition, subtraction, and multiplication; see [Gib61] and [Moo79, §2.2].

More formally, we prove by induction on the number of nodes in the input expression $E$ that Operation O3 (1) runs in time at most the bound (8), and (2) produces an interval of rational numbers whose numerators and denominators each have at most the bound (9) bits (ignoring the sign bit). Assume by induction that smaller expressions satisfy these two induction hypotheses.
Addition and subtraction.
To $p$-compute $E = E_1 \pm E_2$, we first recursively $(p+1)$-compute $E_1$ and $E_2$. In other words, the recursive call is with precision $p + 1$. Call the resulting intervals $[a_1, b_1]$ and $[a_2, b_2]$. Then $[a_1 + a_2, b_1 + b_2]$ (respectively $[a_1 - b_2, b_1 - a_2]$ for subtraction) is a $p$-computation of $E$: it contains the value of $E$, and its length is at most $2^{-(p+1)} + 2^{-(p+1)} = 2^{-p}$.
We can compute the interval of rationals using Lemma 2.7. By induction, the numerators and denominators of the child endpoints have at most the bound (9) bits, so the sums/differences have at most a constant more than the sum of those bit counts, and computing them takes time within the bound (8).
Multiplication.
To $p$-compute $E = E_1 \times E_2$, we first recursively $p'$-compute $E_1$ and $E_2$, where $p'$ exceeds $p$ by the magnitude bounds of Theorem 2.1 on $E_1$ and $E_2$ (so that multiplying a child's interval length by the magnitude of the other factor still leaves length at most $2^{-(p+1)}$). Call the resulting intervals $[a_1, b_1]$ and $[a_2, b_2]$. To properly handle signs, we can take all pairwise products and take their min and max [Gib61, Moo79]: the interval from the minimum to the maximum of $a_1 a_2$, $a_1 b_2$, $b_1 a_2$, $b_1 b_2$ contains the value of $E_1 \times E_2$. In fact, we claim it is a $p$-computation of $E$.
There are $4 \times 4 = 16$ cases for the choices of max and min. To enumerate the cases, we introduce some notation: for $x \in \{a_i, b_i\}$, define $\bar{x}$ to be the other of $a_i$ and $b_i$. Note that all terms in the min and max are of the form $x y$ with $x \in \{a_1, b_1\}$ and $y \in \{a_2, b_2\}$. We split the cases into groups according to how many of $a_1, b_1, a_2, b_2$ appear.

1. Exactly two of $a_1, b_1, a_2, b_2$ appear. These four of the sixteen cases have the max equal to the min, so we get a $0$-length computation.

2. Exactly three of $a_1, b_1, a_2, b_2$ appear. These eight of the sixteen cases have the form $[x y, x \bar{y}]$ (or symmetrically $[x y, \bar{x} y]$). (Only four of these cases have positive length.) Then the length of the interval is $|x| \cdot |y - \bar{y}|$, which is at most the magnitude bound on the factor times the recursive interval length, hence at most $2^{-(p+1)}$.

3. All four of $a_1, b_1, a_2, b_2$ appear. These four of the sixteen cases have the form $[x y, \bar{x} \bar{y}]$. (Only two of these cases have positive length.) Then the length of the interval is at most $|x y - x \bar{y}| + |x \bar{y} - \bar{x} \bar{y}|$, twice the bound of the previous case, hence at most $2^{-p}$.
We can compute the interval in time within the bound (8) using $O(1)$ rational multiplications and comparisons via Lemma 2.7. By induction, the numerators and denominators of the child endpoints have at most the bound (9) bits, so the number of bits in the products is at most the sum of the children's bit bounds plus a constant.
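The sign handling by pairwise products is standard interval arithmetic [Gib61, Moo79]; a minimal sketch:

```python
def interval_mul(a1, b1, a2, b2):
    """Product of intervals [a1, b1] * [a2, b2]: the min and max of the
    four pairwise endpoint products cover all sign cases."""
    products = (a1 * a2, a1 * b2, b1 * a2, b1 * b2)
    return min(products), max(products)

# [-1, 2] * [3, 4]: the extremes come from different endpoint pairs.
assert interval_mul(-1, 2, 3, 4) == (-4, 8)
```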
Division.
To -compute , we reduce to multiplying and , as just analyzed, and computing . To -compute , we first recursively -compute where . In other words, the recursive call is with . Call the resulting interval . If , then
so by Theorem 2.1; thus we return “undefined” to indicate division by zero. Otherwise, is an -computation of :
We can compute each reciprocal in time by swapping the numerator and denominator, which preserves the number of bits . When we multiply with , we add to the recursive (another for , and a first for ), and the resulting number of bits is the sum . The total running time is beyond the recursive calls, as claimed.
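The reciprocal step can be sketched as follows (a simplified illustration with our own names; the endpoint swap mirrors the observation that swapping numerator and denominator preserves bit lengths, and the zero-straddling case corresponds to the "undefined" return above):

```python
from fractions import Fraction

def interval_recip(I):
    """Reciprocal of an interval not containing zero: 1/x lies in [1/b, 1/a],
    since x -> 1/x is decreasing on each side of zero. Swapping numerator and
    denominator of a rational preserves its bit length."""
    a, b = I
    if a <= 0 <= b:
        return None   # interval straddles zero: division is undefined
    return (1 / b, 1 / a)

R = interval_recip((Fraction(2, 3), Fraction(3, 4)))   # [4/3, 3/2]
```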
Roots.
To -compute , we first recursively -compute where . In other words, the recursive call is with which we round up to for cleaner formulas. Call the resulting interval . If and , then , so we return “undefined” to indicate a negative square root. If , then
so by Theorem 2.1; thus, if is even, we return “undefined” to indicate a negative square root.
Otherwise, we can compute the floor and ceiling of the th root of a -bit integer in time [Zim99, BMZ02]; see [BZ10, §1.5.1]. To compensate for the error to be introduced, we first scale by where , resulting in the interval , which is an -computation of . (Potentially, the error could be improved to avoid the power of ; we leave this improvement, which would remove ’s dependence on , as an open problem.) Next we round to the containing integer interval , which adds an additive error of at most , so is an -computation of . Now we apply the th-root algorithm to both ends of the integer interval, rounding down and up in each case respectively, to obtain , which is an -computation of . Finally, we undo the scaling by multiplying both ends of this integer interval by , and return which is an
-computation of . We can compute this interval using rational multiplications via Lemma 2.7 and the integer th-root algorithm, in time. The resulting number of bits grows by from the initial division by ; only decreases when we take the integer th root; and grows by when we multiply by . Thus because and . ∎
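The scale/round/root/unscale pipeline can be sketched for square roots (an illustration under our own naming; `math.isqrt` plays the role of the integer root algorithm cited above):

```python
import math
from fractions import Fraction

def sqrt_interval(q, p):
    """Interval of rationals containing sqrt(q) for a nonnegative rational q:
    scale by 4**p, round to a containing integer interval, take floor/ceiling
    integer square roots, then unscale by 2**p. The width is O(2**-p)."""
    scaled = q * 4**p
    lo_int, hi_int = math.floor(scaled), math.ceil(scaled)
    lo = math.isqrt(lo_int)          # floor of the integer square root
    hi = math.isqrt(hi_int)
    if hi * hi < hi_int:             # round up to the ceiling root
        hi += 1
    return (Fraction(lo, 2**p), Fraction(hi, 2**p))

lo, hi = sqrt_interval(Fraction(2), 10)   # a narrow interval around sqrt(2)
```

When the root is exactly rational, the interval collapses to a point, as with `sqrt_interval(Fraction(9, 4), 5)`.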
Proof.
Given an expression , we apply Operation 1 to with , which by Theorem 2.8 can be performed in time. By Corollary 2.5, because . Thus we obtain a rational interval of length that contains the value of .
First we show how to compute the sign of in additional time given this interval. If (i.e., ), then , so by Theorem 2.1, must in fact be zero. Otherwise, we have either , in which case must be positive; or , in which case must be negative. We can compute the signs of and in constant time by Lemma 2.7.
Second we show how to compute the floor and ceiling, by reducing to a sign computation. Because , the interval length , so contains at most one integer. We can compute and using integer division of Lemma 2.7. By measuring the length of the expanded interval , we determine whether contains an integer (the expanded interval has length ) or not (the expanded interval has length ). If does not contain an integer, i.e., for an integer , then and . If contains an integer , then we compute the sign of using the algorithm above, which costs an additional time. If is zero, then ; if is positive, then and ; and if is negative, then and . Thus we obtain the floor and ceiling of in all cases.
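The sign and floor procedures can be sketched as follows (an illustration with our own names; the callback stands in for the extra sign computation described above, and the zero-straddling case relies on the separation bound of Theorem 2.1):

```python
import math
from fractions import Fraction

def sign_from_interval(lo, hi):
    """Sign of a value known to lie in [lo, hi]; a separation bound guarantees
    the value is exactly zero whenever the interval straddles zero."""
    if lo > 0:
        return 1
    if hi < 0:
        return -1
    return 0   # the separation bound forces the value to be 0

def floor_from_interval(lo, hi, sign_of_value_minus):
    """Floor of a value in [lo, hi] with hi - lo < 1, so the interval contains
    at most one integer n. If no integer lies strictly inside, the floor is
    forced; otherwise one extra sign computation of (value - n) resolves it,
    supplied here as a callback."""
    if math.floor(lo) == math.floor(hi):
        return math.floor(lo)
    n = math.floor(hi)               # the unique integer in [lo, hi]
    s = sign_of_value_minus(n)
    return n if s >= 0 else n - 1
```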
Corollary 2.10.
Corollary 2.11.
We can compare whether for two expression DAGs and in time.
2.2.3 Simple Cost Model
Our second cost model is easier to use, but in general may be weaker. It focuses on five key properties of an expression:
1. The height of the expression DAG , i.e., the length of the longest directed path, where integer expressions have height .
2. The number of nodes of the expression DAG . In particular, .
3. The root set of the expression DAG , i.e., the set of distinct roots taken (such as and ).
4. The maximum root degree . In our algorithm (and many computational geometry algorithms), , meaning we just take square roots.
5. The maximum number of bits in the integer leaves of the expression DAG .
The main challenge in obtaining a running time dependent only on the number of distinct roots is in dealing with multiple radical nodes of equal value. First, in Theorem 2.12, we ignore this issue, and solve the recurrence in terms of . Next, in Lemma 2.13, we show how to modify expressions to remove equal-value radical nodes. Finally, in Theorem 2.14, we combine these tools to derive the simple cost model.
Proof.
First observe that the number of nodes in the expression DAG is at most .
Next we prove and . Taking the of (2), we obtain the recurrence:
The two recursive cases (second and third) have additive terms that simply count the number of nonleaf nodes in . The base case (first) is at most , and in at least one case is exactly . Thus is at least and is at most the number of nonleaf nodes plus times the number of leaf nodes, which is at most .
By (1), .
Next we analyze the growth of the parameter in the and recurrences of (9) and (8), which follow the same pattern in terms of how grows down the recursion. Each radical node in recurses with , thus multiplying the input by a factor of . Each nonradical node in recurses with because , thus multiplying the input by a factor of at most . Thus each recursive level increases by a factor of at most . Let represent the parameter at the top level of the recurrence, i.e., in the call to Operation 1. Then after levels, we multiply by a factor of at most . Thus we can assume throughout the and recursions that .
Now we compute via the recurrence (9). The number of recursive calls to is at most . Each recursive call has an additive term of in the base case, and at most in the recursive case (dominated by the radical case). Multiplying, the total cost is at most .
Finally we compute via the recurrence (8). The number of recursive calls to is at most . Each recursive call has an additive term of at most . Multiplying, the total cost is at most . We obtain the desired upper bound on by substituting our previous upper bound . ∎
Define a uniquification of an expression to be an expression satisfying , , , , , , and the number of radical nodes in is the number of distinct roots in (i.e., all roots in have distinct values). Define a near-uniquification in the same way, but allowing the number of radical nodes in to be at most larger (i.e., ).
Lemma 2.13.
Given an expression DAG , we can compute a uniquification of in
time, where each is a near-uniquification of a subexpression of .
Proof.
First we define a partial order on pairs of radical nodes of . Let be the set of radical nodes in . Define a partial order on by when or is a descendant of in . This partial order induces a partial order on the cross product by when and .
Our algorithm compares all pairs of distinct radical nodes in in a linearization of (found via topological sort), removing any duplicate (equal-value) nodes by combining those nodes. Thus, when we visit a pair , we know that the expression DAG with source and the expression DAG with source have already had all of their radical nodes pairwise compared and deduplicated, except for the single pair . In particular, and are near-uniquifications of the original expression DAGs with source nodes and respectively. We apply Corollary 2.11 to compare whether . If so, we have detected a duplicate root, and we remove it as follows. If is a descendant of , then exchange and , so that and are either incomparable or is a descendant of . Now replace in all references to by references to (which by the previous exchange will preserve acyclicity), and then remove from the DAG . As a special case, if is the source node of the expression DAG , then we replace the entire DAG with . The new is a uniquification of a subexpression of (the original expression DAG with source ). Repeating this process, we remove all duplicate roots from , while preserving the values of , , , and , and only decreasing and , resulting in a uniquification of .
Finally we analyze the running time of this algorithm. For each of pairs of distinct radical nodes , we pay time by Corollary 2.11, where and are near-uniquifications of subexpressions of . Therefore the claimed time bound holds. ∎
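The merging step can be sketched as follows (a simplified model with our own names, where every node is treated as a candidate radical and `equal` stands in for the exact comparison of Corollary 2.11):

```python
def uniquify(nodes, equal):
    """Merge equal-value nodes in a DAG. `nodes` maps node id -> list of child
    ids, listed in topological order with descendants before ancestors, so an
    earlier node is never a strict ancestor of a later one and redirecting
    references to it preserves acyclicity."""
    order = list(nodes)
    alias = {u: u for u in order}
    for i, u in enumerate(order):
        for v in order[:i]:
            if alias[u] == u and alias[v] == v and equal(u, v):
                alias[u] = v            # u duplicates the earlier node v
                break
    return {u: [alias[c] for c in kids]
            for u, kids in nodes.items() if alias[u] == u}

# 'a' and 'b' are radicals with equal value; 'c' references both
dag = {'a': [], 'b': [], 'c': ['a', 'b']}
merged = uniquify(dag, lambda u, v: {u, v} == {'a', 'b'})
```

After merging, `'b'` is gone and both of `'c'`'s references point to the surviving node `'a'`, as in the reference-replacement step of the proof.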
Proof.
We implement each operation by first removing all duplicate roots from via Lemma 2.13, resulting in a uniquification without duplicate roots, and second by running the appropriate algorithm from Theorem 2.9 on input .
By Lemma 2.13, the running time for the first step is given by
By Theorem 2.12, each satisfies
Because each is a near-uniquification, it has at most radical nodes, so . And because each is a near-uniquification of a subexpression of , we have , , , and . Thus
so the running time of the first step is
Because , we have , whose exponent is smaller than , so we can absorb by increasing the base by a factor of .
2.2.4 Special Cases
Now we look at some restricted expressions where the simple cost model is particularly simple. Our first restriction is to -size expressions, where real operations run as fast as integer operations. (An earlier version of this paper [DHK20] called this model the -expression RAM, and we suspect it is useful in many computational geometry algorithms; sadly, it will not suffice for our quasigeodesics algorithm.)
Corollary 2.15.
If and , then .
Proof.
By Theorem 2.12, , and . ∎
Next we consider expressions of logarithmic height, where the running time for real operations is polynomial except for an exponential dependence on the number of distinct roots. This result is what we will use in our quasigeodesics algorithm.
Proof.
By Theorem 2.14 and . ∎
Finally, we prove a general exponential bound on the running time for real operations:
Proof.
By Theorem 2.14, , and . ∎
Corollary 2.18.
Any algorithm running in time on the real RAM, where all radical operations are of degree, can be run in time on the word RAM.
2.2.5 Application to Multi-Expression Objects
We provide some notation to more easily express the simple cost model for algorithms whose input is not one expression but an “object” represented by multiple expressions. Let be the set of expressions representing . First we define parameters , , and as follows:
We also define to be the set of edge lengths of (when that makes sense), which is a set of square roots in the Euclidean metric.
Now we define to be the simple-cost-model running time of Theorem 2.14 according to these parameters, assuming root set and an added expression height of (defaulting to ):
(10)
We treat a vector of objects as itself an object, whose expression set is .
2.3 Associative Operations
Lemma 2.19.
Consider an associative operator over -dimensional real vectors, defined by expressions (forming ) whose leaves can also include one of the input variables. Given -dimensional vectors , we can compute in time a representation of as a -dimensional vector satisfying
In particular, .
Proof.
At a high level, we compute according to a complete rooted ordered binary tree , with one leaf for each in order and where each internal node computes . This tree has height . To transform this vector expression into real expressions, we make copies of . In , we replace the th leaf with the expression representing the th element of , and we replace each internal node with the expression representing the th element of , where each reference to the th element of the left [right] expression becomes a pointer to the left [right] child of in . Then where is the source node of expression DAG . The bound on follows from Equation (10). ∎
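The balanced combination underlying this proof can be sketched as follows (an illustration with our own names; combining along a complete binary tree keeps the nesting depth logarithmic, whereas a left-to-right fold would make it linear):

```python
def balanced_combine(items, op):
    """Combine items with an associative op along a complete binary tree,
    so the nesting depth is O(log n) instead of O(n)."""
    while len(items) > 1:
        items = [op(items[i], items[i + 1]) if i + 1 < len(items) else items[i]
                 for i in range(0, len(items), 2)]
    return items[0]

# A toy op that reports nesting depth: 8 leaves combine at depth log2(8) = 3.
depth = balanced_combine([0] * 8, lambda a, b: 1 + max(a, b))
```

With an associative `op` such as matrix multiplication, the result equals the sequential product but the expression height is logarithmic, which is what the bound on relies on.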
2.4 Polyhedral Inputs
The combinatorial structure of an input polyhedron can be encoded as a primal or dual graph, as usual, but which real numbers should represent the geometry? Because the quasigeodesic problem is about the intrinsic geometry of the surface of a polyhedron, the input geometry can be naturally represented intrinsically as well as extrinsically, leading to three natural representations:
1. Extrinsic coordinates: 3D coordinates for each vertex.
2. Intrinsic coordinates: For each face, for some isometric embedding of the face into 2D, the 2D coordinates of each vertex of the embedded face.
3. Intrinsic lengths: For each face, the lengths of the edges. This representation assumes the faces have been combinatorially triangulated (so some edges may be flat).
In the expression RAM, we can convert coordinates (1 or 2) to edge lengths (3) as follows: given vertex coordinates , the distance between these two vertices is
We can also convert from intrinsic lengths (3) to intrinsic coordinates (2) by, for each triangle, placing two vertices on the axis and finding the third vertex by the intersection of two circles whose radii are the two incident edge lengths. (This transformation also requires a square root.) Therefore we can convert representations (1) (2) (3). The reverse direction, from intrinsic (2/3) to extrinsic (1), is more difficult, as it involves solving the Alexandrov problem [KPD09]. Accordingly, the intrinsic representations (2/3) represent a more general class of possible polyhedra.
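The lengths-to-coordinates conversion can be sketched numerically as follows (an illustration with our own names; the closed form for the circle intersection uses exactly one square root, matching the remark above):

```python
import math

def place_triangle(c, b, a):
    """Place a triangle with side lengths c = |PQ|, b = |PR|, a = |QR|:
    P at the origin, Q on the positive x axis, and R found by intersecting
    the circle of radius b around P with the circle of radius a around Q."""
    P, Q = (0.0, 0.0), (c, 0.0)
    x = (b * b - a * a + c * c) / (2 * c)   # from |PR|^2 - |QR|^2
    y = math.sqrt(b * b - x * x)            # the one square root needed
    return P, Q, (x, y)

P, Q, R = place_triangle(3.0, 4.0, 5.0)     # a 3-4-5 right triangle
```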
Our quasigeodesic algorithm assumes the intrinsic input representation (2). By the reductions above, our algorithm also applies to polyhedra given in the extrinsic representation (1) or intrinsic length representation (3). On a real RAM, these conversions incur only linear additive time cost. On a word RAM, these conversions add to the root set of the input expressions, at most growing to include the edge lengths of the input polyhedron. Our algorithm will in fact introduce the edge lengths of the polyhedron to the root set of expressions anyway, so this does not introduce additional overhead for our algorithms. Thus we can assume any one of these input models.
By contrast, in a model restricting inputs to be integers or rationals, these three input models would define incomparable classes of polyhedra, so no representation would be clearly superior and no conversions would be possible.
3 Algorithm
In this section, we detail an algorithm to find a closed quasigeodesic on the surface of a convex polyhedron . First, a bit of terminology: we define a (quasi)geodesic ray/segment to be a one/two-ended path that is (quasi)geodesic.

3.1 Outline
The idea of the algorithm is roughly as follows. First, we define a directed graph where each node (we use the word “node” and lower-case letters for vertices of the graph, to distinguish them from vertices of a polyhedron, for which we use capital letters and the word “vertex”) is a pair of a vertex of and a small interval of directions at it, with an edge from one such node, , to another, , if a geodesic ray starting at the polyhedron vertex and somewhere in the interval of directions can reach and continue quasigeodesically everywhere in ; see Figure 5. (Since we consider only geodesic rays that can continue quasigeodesically everywhere in , there are some closed quasigeodesics that we cannot find: those that leave a polyhedron vertex in a direction in an interval for which some directions are not quasigeodesic continuations. In particular, this algorithm is unlikely to find closed quasigeodesics that turn maximally at a polyhedron vertex.) We show how to calculate at least one out-edge from every node of that graph, so we can start anywhere and follow edges until hitting a node twice, giving a closed quasigeodesic.
The key part of this algorithm is to calculate, given a polyhedron vertex and a range of directions as above, another vertex that can be reached starting from that vertex and in that range of directions, even though reaching may require crossing superpolynomially many faces.
3.2 Divergence of Geodesic Rays
In this section, we prove an upper bound on how long we must follow two geodesic rays before they diverge in the sequence of polyhedron edges they visit. First we define some terms.
Definition 3.1.
If is a point on the surface of a polyhedron, is a direction at , and , then is the geodesic segment starting at in the direction and continuing for a distance or until it hits a polyhedron vertex, whichever comes first. (This definition is purely geometric; we reserve calculating these paths for Lemma 3.6.) We allow ; in that case, may be a geodesic ray (or it may stop prematurely at a vertex).
Definition 3.2.
If is a geodesic segment or ray, the edge sequence is the (possibly infinite) sequence of polyhedron edges that visits.
Lemma 3.3.
If and are two geodesic rays from a common starting point with an angle of between them, then the edge sequences and are distinct, and the first difference between them occurs at most one edge after a geodesic distance of .
Proof.
For a finite distance , the prefix segment is a straight segment on the unfolded sequence of corresponding faces intersected by . Given a (prefix of) , the segment of is a straight line on the unfolded sequence of those faces. Thus, while , the two geodesics and form a planar wedge in a common unfolding, as in Figure 6. The distance between the points on the unfolded rays at distance from is (since ), so for points at a distance of , that distance is at least . So either and differ before then, or the next edge that and cross is a different edge, in which case and differ in the next edge, as claimed. ∎

If we had defined analogously to as not just the length of the longest edge but the greatest distance within a face between a polyhedron vertex and an edge not containing it, we could remove the “at most one edge after” condition from Lemma 3.3.
Lemma 3.3 gives a bound on the geodesic distance to the first difference in the edge sequences (or one edge before that). We now relate geodesic distance to the number of edges visited by the geodesic.
Lemma 3.4.
Let be a geodesic segment. Then consists of edges.
Proof.
We prove that, if the geodesic segment has length , then consists of edges. The lemma then follows by considering several consecutive subsegments along the full geodesic segment.
Consider the sequence of intersection points between the segment and the respective edges of the polyhedron. Call near a vertex if the intrinsic distance . Call the segment near a vertex if some is near .
We claim that can be near at most one vertex. Assume for contradiction that has a point near vertex and a point near vertex . By the triangle inequality,
contradicting that .
We can thus divide into two cases, depending on whether is near any (single) vertex:

Case 1: is far from all vertices.
Figure 7 shows an example where might cross many faces far from all vertices. Consider a segment crossing a face . In this case, and are not near a vertex. Consider shifting this segment, keeping it parallel to , until the shifted segment hits a vertex of the face ; refer to Figure 8.

Case 1.1:
If one shift direction causes the segment to get shorter and the shifted segment hits a vertex before it becomes zero length (as in Figure 8, left), then has length at least (by definition of ), and thus has length at least .
Case 1.2:
Otherwise, we know that there is a shift direction where the segment first hits a vertex of when it collapses to zero length at , which is a common endpoint of edges and of that and lie on (as in Figure 8, right). Now shift the segment in the opposite direction, which lengthens it, until we obtain a segment where either or is a vertex of . The shifted segment has length at least (by definition of ). Segments and form similar triangles with . We have (by definition of ) and either or (by farness), so the coefficient of similarity is at most . Therefore has length at least .
Hence, in both Case 1.1 and 1.2, has length at least . But has length . Therefore, in Case 1, the number of polyhedron edges hit by segment is at most .
Case 2: is near a vertex .
(This case includes when starts or ends at a vertex .) First consider a segment where one endpoint (say, ) is near while the other endpoint (say, ) is far from . By the triangle inequality,
But (by definition of ), so . This contradicts that has length .
Thus any segment in Case 2 must have both and near vertex . Hence the edges hit by are all incident to . Consider the sequence of faces intersected by , where is the face between intersections and for ; is the face before the intersection (if any); and is the face after the intersection (if any). As argued above, these faces all have as a vertex. Thus, if we unfold these faces consecutively into the plane, as shown in Figure 11, then the unfolded faces all rotate around a common point . In this unfolded view, is a straight segment in the plane passing through the collinear points . Each angle is an angle of a face (at ), which is at least ; see Figure 11. Because the unfolding is planar, . Thus , so the number of faces hit by is . ∎


3.3 Computing Quasigeodesic Rays
Next we show how to algorithmically follow quasigeodesic rays. First we need a lemma about locally unfolding the polyhedron’s surface.
Lemma 3.5.
Given the local coordinate systems for two adjacent faces sharing an edge , where each system is a vector specifying the coordinates of all vertices of , we can compute in time the orientation-preserving isometry bringing on to on as a transformation matrix , satisfying
where denotes the length of edge .
Proof.
First we define the orientation-preserving isometry that brings edge of face from its location in ’s local coordinate system onto the positive axis, with mapping to the origin. Then the desired transformation is the composition that performs and then .
Transformation is the composition of translating by and then rotating to bring to the positive axis. Suppose has coordinates . Viewing 2D points as complex numbers, the rotation is equivalent to multiplying by the complex conjugate of , and rescaling by . Thus the rotation matrix (for multiplying a column vector on the right) is
Thus the overall affine transformation can be written as a matrix using additions, subtractions, multiplications, and divisions of the input coordinates and one square root, namely, . The only new leaf integer expression is , which does not affect . ∎
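The edge-alignment isometry can be sketched with complex numbers (an illustration with our own names; multiplying by the unit conjugate of the edge vector realizes the rotation, and the only root taken is the edge length, matching the lemma):

```python
def align_edge(v1, v2):
    """Orientation-preserving isometry taking v1 to the origin and v2 onto the
    positive x axis: translate by -v1, then rotate by multiplying with the unit
    complex conjugate of (v2 - v1). Points are complex numbers; the only square
    root taken is the edge length |v2 - v1|."""
    d = v2 - v1
    rot = d.conjugate() / abs(d)      # unit-modulus rotation factor
    return lambda p: rot * (p - v1)

T = align_edge(1 + 1j, 4 + 5j)        # an edge of length 5
```

Since the rotation factor has modulus 1, the map preserves all distances, as an isometry must.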
Lemma 3.6.
Let be a geodesic ray on an -vertex polyhedron. We can compute the first faces and edges visited by ; the corresponding intersection points; and a planar embedding of an unfolding of these faces, edges, and intersection points. In particular, this determines the direction at which exits the last face into the last intersection point. All points and directions are represented as (exact) expressions of height and root set .131313For the purposes of and , we ignore the component, and just view as the vector object ; see Section 2.2.5. If hits a vertex within the first steps, we stop there and also output the vertex; otherwise, we proceed for exactly steps. On a real RAM, the running time is . On the word RAM, the running time is .
Proof.
Suppose that starts at or enters face at point with direction . We binary search to find an edge of where exits . For each visited edge of with endpoints and , the intersection of the extension of , which has equation
and the extension of , which has equation
is the point given by the solution to the above linear system. Then we can check whether is between and (or is between and ) using Type-1 comparisons, to determine whether in fact crosses . If so, we have found a desired edge . Otherwise, we can determine whether the intersection is before or after on , using another Type-1 comparison on the signed triangle area, and direct the binary search accordingly.
If in fact hits a vertex of face , we will determine so by detecting that it hits two edges of . Otherwise, for the correct edge , we can thus determine as an affine transformation on , where the matrix elements are root-free -expressions (involving -expressions ). We then take Lemma 3.5’s affine transformation that brings edge of in the local coordinate system of to edge of the adjacent face in the local coordinate system of and apply it to both and to obtain the point and the direction that ray will next enter in its local coordinate system. Composing and , we obtain as an affine transformation of , where again the matrix elements are -expressions. By Lemma 3.5, the only added root is the length of edge .
Applying this method repeatedly, we can obtain the point and direction after traverses faces in sequence. In each step, is determined by an affine transformation on where the matrix elements are -expressions. On a real RAM, we compute each vector sequentially from the previous one, in time. Thus we spend time per binary-search iteration, for a total of . On an expression RAM, we instead apply Lemma 2.19 to represent each by expressions of height from the associative composition (matrix multiplication) of . At each step , we perform a Type-1 operation involving , , and the geometry of the polyhedron . Thus, according to the simple cost model and Equation (10), this operation takes time. Summing over , we obtain a geometric series dominated by the last term, . We pay this cost per binary-search iteration, so the total cost is a factor of larger. ∎
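The per-step exit test can be sketched as follows (an illustration with our own names; with exact rational inputs this is a Type-1 computation in the sense used above):

```python
from fractions import Fraction

def ray_segment_hit(p, d, v1, v2):
    """Solve p + t*d = v1 + s*(v2 - v1) by Cramer's rule; return (t, s) if the
    ray (t >= 0) actually crosses the segment (0 <= s <= 1), else None.
    With Fraction inputs the computation is exact."""
    ex, ey = v2[0] - v1[0], v2[1] - v1[1]
    wx, wy = v1[0] - p[0], v1[1] - p[1]
    det = ex * d[1] - ey * d[0]
    if det == 0:
        return None                     # ray parallel to the edge's line
    t = (ex * wy - ey * wx) / det       # parameter along the ray
    s = (d[0] * wy - d[1] * wx) / det   # parameter along the edge
    return (t, s) if t >= 0 and 0 <= s <= 1 else None

# a horizontal ray from the origin exits through the vertical edge at x = 1
hit = ray_segment_hit((Fraction(0), Fraction(0)), (Fraction(1), Fraction(0)),
                      (Fraction(1), Fraction(-1)), (Fraction(1), Fraction(1)))
```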
Lemma 3.7.
Consider an angle- cone between geodesic rays and . We can compute a geodesic segment that is in the given cone and ends at a vertex of and has intersections between and edges of , along with the identity of the faces and edges intersected by . The output, consisting of , , and the intersections, is represented by (exact) expressions of height and root set . On a real RAM, the running time is . On the word RAM, the running time is .
Proof.
We apply Lemma 3.6 to follow the given geodesic rays and until they cross different edges. As a special case, if and immediately enter different faces, then is on an edge of and the cone contains one of the endpoints of that edge, and can simply be a portion of that edge. Otherwise, and initially enter the same face, and each step, either they exit the face along the same edge (and thus enter the same face) or they exit along different edges. By Lemma 3.3, after a distance of , and must exit a common face along different edges, say and . By Lemma 3.4, this event happens after crossing edges of . Thus we can find the desired vertex by choosing a vertex between and on (where “between” is defined by the sides of the rays corresponding to the cone). Lemma 3.6 also provides a planar embedding of an unfolding of the sequence of faces visited by and up to and including . Thus we can draw the unfolded segment from to in this embedding, and intersect with each of the unfolded edges to find the intersection points along the way (which can be mapped back to the polyhedron, if desired, via the transformations provided by Lemma 3.6). ∎
3.4 Full Algorithm
We are now ready to state the algorithm for finding a closed quasigeodesic in quasipolynomial time:
Theorem 3.8.
Let be a convex polyhedron with vertices all of curvature at least , let be the length of the longest edge, and let be the smallest distance within a face between a vertex and a nonincident edge. Then we can find a closed quasigeodesic on consisting of segments on faces of . On a real RAM, the running time is . On an expression RAM, each segment endpoint of the closed quasigeodesic can be represented by an expression with root set and height , and the running time is
In particular, if , this running time is
Proof.
First, we represent the minimum curvature of the polyhedron’s vertices by computing a vector whose angle with the positive axis is . For each vertex , we compute a planar embedding of the faces incident to in clockwise order, as follows. Let be the edge shared by and . We apply Lemma 3.5 to find a transformation from ’s local coordinate system that maps to the origin and maps to the positive axis, by constructing an artificial local coordinate system that places and in this way. Then we repeatedly apply Lemma 3.5 to place the subsequent faces , aligning corresponding edges (similar to Lemma 3.6). By Lemma 3.5, these transformations only involve the square roots in the lengths of edges incident to . To compute each on the word RAM, we apply Lemma 2.19 to compute the (associative) product of the transformation matrices from Lemma 3.5, and then apply that transformation to compute the vertex coordinates of . The computed placement of gives an embedding of which, viewed as a vector from , forms an angle with the axis equal to the curvature of vertex . We can compare the curvatures of two vertices by comparing the slopes of these vectors (each a ratio of the two coordinates of the vector) via Operation 1. Taking the minimum among all vertices, we find the vector whose angle with the positive axis is the minimum angle . On the real RAM, this computation costs time (by the Handshaking Lemma). On the word RAM, this computation costs
(11)
time (as is an upper bound on vertex degree and thus is an upper bound on added expression height). We assume that : if ’s slope is not in , then we replace it with .
Second, we round the vector to an integer vector whose angle with the positive axis is between and . Specifically, we use repeated doubling and then binary search (“exponential search” or “one-sided binary search”) to find the smallest for which the slope is less than the slope of . Again, we can compare slopes of vectors via Operation 1. On the real RAM, the total running time is . On the word RAM, the total running time is
(12)
time. The slope of is between and , whose ratio is . By our assumption that , , so the slope ratio is at most . For any slope between and , the ratio is between and . Thus the angles of vectors and have a ratio of at most . Furthermore, we can upper bound as follows: (as was the smallest integer where ), so . For , is between and where . Thus .
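The repeated-doubling-plus-binary-search pattern used here can be sketched as follows (an illustration with our own names, specialized to finding the smallest power of two whose reciprocal drops below a target slope; we assume the target lies in (0, 1]):

```python
from fractions import Fraction

def exponential_search_slope(target):
    """Smallest s >= 1 such that Fraction(1, 2**s) < target, found by repeated
    doubling to bracket s and then binary search, using O(log s) comparisons."""
    hi = 1
    while Fraction(1, 2**hi) >= target:   # doubling phase: bracket the answer
        hi *= 2
    lo = hi // 2
    while lo + 1 < hi:                    # binary search on (lo, hi]
        mid = (lo + hi) // 2
        if Fraction(1, 2**mid) < target:
            hi = mid
        else:
            lo = mid
    return hi

k = exponential_search_slope(Fraction(1, 100))   # 1/2^7 = 1/128 < 1/100
```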
Third, for each vertex of each face of , we choose direction vectors in ’s local coordinate frame that include the two incident edges of , and so that the angle of the wedge between consecutive direction vectors is and, except for the extreme wedges that are bounded by a polyhedron edge, the angle is at least . We start with the two direction vectors for the two incident edges of , normalized (which adds edge-length roots), which forms an angle by convexity. Divide the angle at into the standard octants by the lines , , and . Divide each octant as follows; assume by possible reflection that we are working with the first octant. Consider the vectors for (including a scaling of our rounded vector ); refer to Figure 12. We include each vector that is in the desired wedge between the two edge direction vectors (which we can check by comparing slopes). To bound the angles between consecutive vectors and for , note that the triangle has base length and height , so the area is the same for all . By the side-angle-side area formula, , so . Each length is between and , so
Because , is between and . Thus
Because , we have , so
By our choice of in the second paragraph, . Therefore . Every coordinate we introduce here has value .

Fourth, we construct a directed graph with one node for each pair of a vertex from and one of its wedges (for “angular range”), giving the graph nodes, with an edge from a node to a node if there exists a direction in such that a quasigeodesic ray starting in that direction from the polyhedron vertex hits the polyhedron vertex and can continue from every direction in .
Now we describe how to find an outgoing edge from any given node in , with corresponding vertex and a wedge of directions from to . If wedge contains a polyhedron edge (by our construction, this edge will be in the direction or from ), then we will proceed to polyhedron vertex using that direction vector , which we view in the local coordinate system of one of the faces incident to the polyhedron edge . Otherwise, apply Lemma 3.7 to follow the cone between geodesic rays and to find a vertex within the cone, i.e., reachable by a quasigeodesic segment starting at within the wedge , along with the final face visited by and the final direction vector of the segment in the local coordinate system of . In either case, we obtain a quasigeodesic that can then exit the vertex anywhere in a cone with angle equal to that vertex’s curvature, which is at least , so for at least one of the wedges of size at , the quasigeodesic can exit anywhere in that wedge. To find such a wedge, we can construct a planar embedding of the polyhedron faces incident to (as in the first paragraph of the proof) in counterclockwise order from , and then extend the vector from the embedded and compute via binary search which face we enter. If there is no such face , the curvature is , so the geodesic ray can exit anywhere; choose any graph node corresponding to this vertex . Otherwise, transform the vector to the local coordinate system of , and find the clockwise next whole wedge in ; if there is no such whole wedge within , choose the clockwise first wedge in the clockwise next face after . Thus we find an outgoing edge from node to node . The running time on the real RAM is . The running time on the word RAM is the sum of the ray-following cost and the final binary-search cost, both of which are
(13)
We start from any node of the graph, and repeatedly traverse outgoing edges until we repeat a node. In the worst case, we compute an outgoing edge for each node of the graph before finding a cycle. The resulting cycle in the graph exactly corresponds to a closed quasigeodesic on the polyhedron, by the definition of the graph. Each traversal of an edge in the graph corresponds to a vertex-to-vertex geodesic path, which either follows a polyhedron edge and so consists of just one segment, or applies Lemma 3.7 to a cone whose angle is bounded as in the third paragraph (because the cone did not contain a polyhedron edge) and so consists of boundedly many segments. The closed quasigeodesic can be described by such vertex-to-vertex paths, one per traversed edge, so the total number of segments is the number of traversed edges times the per-path segment bound.
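The walk-until-repeat step is the standard traversal of a functional graph (every node has a chosen outgoing edge): record the position of each visited node, and once a node repeats, the suffix of the walk from its first occurrence is the cycle. A minimal sketch, with `out_edge` a hypothetical successor map standing in for the edge-finding procedure above:

```python
def find_cycle(out_edge, start):
    """Follow the chosen outgoing edge from each node until some node
    repeats; return the list of nodes on the resulting cycle."""
    seen = {}   # node -> position of its first visit in the walk
    walk = []
    node = start
    while node not in seen:
        seen[node] = len(walk)
        walk.append(node)
        node = out_edge[node]
    return walk[seen[node]:]  # suffix from the first repeat is the cycle
```

Since each node is visited at most once before a repeat occurs, the number of edge computations is at most the number of graph nodes, as claimed.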
On the real RAM, the overall running time is the sum of the preprocessing cost from the first paragraph, the preprocessing cost from the second paragraph, and the cost of walking/constructing the graph, of which the last term dominates. On the word RAM, the overall running time is the sum of Equations (11), (12), and the number of graph nodes times (13):
(14)
Substituting Equation (10), the third term of (14) expands to the following bound. (The word-RAM running time appears to have one fewer top-level factor than the real-RAM running time, though it actually gains many more factors from the increase in expression complexity. This difference stems from the geometric series in Lemma 3.6, which is dominated by just the last step of ray following, so we lose a top-level factor from the number of segments followed; instead it appears in the increase in expression complexity.)
(15)
The first term of (14) is clearly dominated by the third term. The second term of (14) is also dominated by the third term because, as argued above, its extra cost contributes only to the expression-complexity parameter, and the resulting factor is dominated by a corresponding factor appearing in the expansion (15) of the third term. Therefore the expression in (15) bounds the overall running time. ∎
4 Conclusion
It has been known for seven decades [Pog49] that every convex polyhedron has a closed quasigeodesic, but our algorithm is the first finite algorithm to find one. We end with some open problems about extending our approach, though they all seem difficult.
Open Problem 1.
Is there a reduction from sum-of-square-roots [DMO09] to a decision problem involving quasigeodesics?
One goal would be to represent the sum of given square roots by unfolding/following a geodesic ray across edges and faces, where ideally the faces could come from a convex polyhedron. As described in Section 1.1, such a reduction would justify our algorithm taking exponential time in the worst case, as no better bound is known for sum-of-square-roots.
Open Problem 2.
Is there a quasipolynomial-time algorithm on the real RAM for finding a non-self-intersecting closed quasigeodesic? In particular, can we find the shortest closed quasigeodesic?
At least three non-self-intersecting closed quasigeodesics exist [Pog49], but Theorem 3.8 does not necessarily find one. Any approach similar to Theorem 3.8 is unlikely to resolve this, for several reasons:
1. Parts of a quasigeodesic could enter a vertex at infinitely many angles. Theorem 3.8 makes this manageable by grouping similar angles of entry to a vertex, but if similar angles of entry to a vertex are combined, extensions that would be valid for some of them but invalid for others are treated as invalid for all of them. For instance, a quasigeodesic found by Theorem 3.8 will almost never turn by the maximum allowed at any vertex, since exiting a vertex at the maximum possible turn from one entry angle to the vertex may mean exiting it with more of a turn than allowed for another very close entry angle. So there are some closed quasigeodesics not findable by Theorem 3.8, and those may include non-self-intersecting ones.
2. Given a vertex and a wedge determined by a range of directions from it, we can find one vertex in the wedge, but if we wish to find more than one, the problem becomes more complicated. When we seek only one vertex, we need consider only one unfolding of the faces, which the entire wedge stays in until it hits a vertex; when we pass a vertex, the unfoldings on each side of it might be different, so we multiply the size of the problem by 2 every time we pass a vertex. There may, in fact, be exponentially many non-self-intersecting geodesic paths between two vertices: for instance, Aronov and O’Rourke [DO07a] give the example of a doubly covered regular polygon, in which a geodesic path may visit every vertex in order around the cycle but may skip vertices.
On the other hand, recent work [CdM22] (after the conference version of this paper [DHK20]) gives an algorithm to find a non-self-intersecting closed quasigeodesic on a polyhedron, but its running time is pseudo-exponential.
Open Problem 3.
On the real RAM, the running time of Theorem 3.8 is polynomial in not just the number of vertices but also the smallest curvature at a vertex, the length of the longest edge, and the shortest distance within a face between a vertex and an edge not containing it. Are all of those necessary? Can the last be simplified to the length of the shortest edge?
Open Problem 4.
On the expression RAM (or word RAM), the running time of Theorem 3.8 has an exponential dependence on the number of vertices and the geometric features. Can this be improved to a polynomial dependence, that is, a pseudopolynomial-time algorithm?
Open Problem 5.
Can the algorithm of Theorem 3.8 be extended to nonconvex polyhedra?
A quasigeodesic cannot pass through a nonconvex vertex. If the extended wedge in our algorithm contains a nonconvex vertex, the wedge splits in two, as shown in Figure 13, and the complexity grows exponentially.

Recent work [CdM22] (after the conference version of this paper) defines an alternate notion of quasigeodesic path for nonconvex polyhedra, which requires at negative-curvature vertices that the path have an angle of at least π on both sides (while at positive-curvature vertices the path still has angles of at most π). They prove that every nonconvex polyhedron has such a closed “quasigeodesic”, and give an algorithm to find one. Indeed, our algorithm can also find such a path, redefining the minimum curvature to be the smallest absolute curvature of a polyhedron vertex.
Acknowledgments
The authors thank Zachary Abel, Nadia Benbernou, Fae Charlton, Jayson Lynch, Joseph O’Rourke, Diane Souvaine, and David Stalfa for discussions related to this paper.
References
- [AAOS97] Pankaj K. Agarwal, Boris Aronov, Joseph O’Rourke, and Catherine A. Schevon. Star unfolding of a polytope with applications. SIAM Journal on Computing, 26(6):1689–1713, December 1997.
- [Bal78] Hans Werner Ballmann. Der Satz von Lusternik und Schnirelmann. Bonner Mathematische Schriften, 102:1–25, 1978.
- [BFM+01] Christoph Burnikel, Stefan Funke, Kurt Mehlhorn, Stefan Schirra, and Susanne Schmitt. A separation bound for real algebraic expressions. In Proceedings of the 9th Annual European Symposium on Algorithms, volume 2161 of Lecture Notes in Computer Science, pages 254–265, Aarhus, Denmark, August 2001.
- [Bir27] George D. Birkhoff. Dynamical Systems, volume 9 of Colloquium Publications. American Mathematical Society, 1927.
- [BMS85] A. Bertoni, G. Mauri, and N. Sabadini. Simulations among classes of random access machines and equivalence among numbers succinctly represented. Annals of Discrete Mathematics, 25:65–90, 1985.
- [BMZ02] Yves Bertot, Nicolas Magaud, and Paul Zimmermann. A proof of GMP square root. Journal of Automated Reasoning, 29:225–252, 2002.
- [BTZ83] Werner Ballmann, Gudlaugur Thorbergsson, and Wolfgang Ziller. On the existence of short closed geodesics and their stability properties. In Seminar on Minimal Submanifolds, pages 53–63. Princeton University Press, 1983.
- [BZ10] Richard P. Brent and Paul Zimmermann. Modern Computer Arithmetic. Cambridge University Press, November 2010.
- [CdM22] Jean Chartier and Arnaud de Mesmay. Finding weakly simple closed quasigeodesics on polyhedral spheres. In Proceedings of the 38th International Symposium on Computational Geometry, Berlin, Germany, June 2022. arXiv:2203.05853.
- [CP09] Timothy M. Chan and Mihai Pǎtraşcu. Transdichotomous results in computational geometry, I: Point location in sublogarithmic time. SIAM Journal on Computing, 39(2):703–729, 2009.
- [CP10] Timothy M. Chan and Mihai Pǎtraşcu. Transdichotomous results in computational geometry, II: Offline search. arXiv:1010.1948, 2010. Originally published at STOC 2007.
- [DHK20] Erik D. Demaine, Adam C. Hesterberg, and Jason S. Ku. Finding closed quasigeodesics on convex polyhedra. In Proceedings of the 36th International Symposium on Computational Geometry, pages 33:1–33:13, June 2020.
- [DMO09] Erik D. Demaine, Joseph S. B. Mitchell, and Joseph O’Rourke. Problem 33: Sum of square roots. In The Open Problems Project. 2009. https://topp.openproblem.net/p33.
- [DO07a] Erik D. Demaine and Joseph O’Rourke. Geodesics: Lyusternik-Schnirelmann. [DO07b], section 24.4, pages 372–375.
- [DO07b] Erik D. Demaine and Joseph O’Rourke. Geometric Folding Algorithms: Linkages, Origami, Polyhedra. Cambridge University Press, July 2007.
- [Für14] Martin Fürer. How fast can we multiply large integers on an actual computer? In Alberto Pardo and Alfredo Viola, editors, Proceedings of the 11th Latin American Symposium on Theoretical Informatics, pages 660–670, 2014.
- [FW93] Michael L. Fredman and Dan E. Willard. Surpassing the information theoretic bound with fusion trees. Journal of Computer and System Sciences, 47(3):424–436, 1993.
- [Gib61] Allan Gibb. Algorithm 61: Procedures for range arithmetic. Communications of the ACM, 4(7):319–320, July 1961.
- [HvdH19] David Harvey and Joris van der Hoeven. Integer multiplication in time O(n log n). HAL Preprint hal-02070778, 2019. https://hal.archives-ouvertes.fr/hal-02070778.
- [IRV19] Jin-Ichi Itoh, Joël Rouyer, and Costin Vîlcu. Polyhedra with simple dense geodesics. Differential Geometry and its Applications, 66:242–252, 2019.
- [KLPY99] V. Karamcheti, C. Li, I. Pechtchanski, and C. Yap. A core library for robust numeric and geometric computation. In Proceedings of the 15th Annual Symposium on Computational Geometry, pages 351–359, Miami Beach, Florida, 1999.
- [Knu69] Donald E. Knuth. The Art of Computer Programming, volume 2. Addison-Wesley, 1969.
- [KPD09] Daniel Kane, Gregory N. Price, and Erik D. Demaine. A pseudopolynomial algorithm for Alexandrov’s Theorem. In Proceedings of the 11th Algorithms and Data Structures Symposium, volume 5664 of Lecture Notes in Computer Science, pages 435–446, Banff, Canada, August 2009.
- [Lac03] M. Laczkovich. The removal of π from some undecidable problems involving elementary functions. Proceedings of the American Mathematical Society, 131(7):2235–2240, 2003.
- [LS29] Lazar Lyusternik and Lev Schnirelmann. Sur le problème de trois géodésiques fermées sur les surfaces de genre 0. Comptes Rendus de l’Académie des Sciences de Paris, 189:269–271, 1929.
- [Moo79] Ramon E. Moore. Methods and Applications of Interval Analysis. Society for Industrial and Applied Mathematics, Philadelphia, 1979.
- [O’R20] Joseph O’Rourke. Personal communication, 2020.
- [Pog49] Aleksei Vasilevich Pogorelov. Quasi-geodesic lines on a convex surface. Matematicheskii Sbornik, 25(62):275–306, 1949. English translation in American Mathematical Society Translations 74, 1952.
- [Poi05] Henri Poincaré. Sur les lignes géodésiques des surfaces convexes. Transactions of the American Mathematical Society, 6(3):237–274, 1905.
- [SC20] Nicholas Sharp and Keenan Crane. You can find geodesic paths in triangle meshes by just flipping edges. ACM Transactions on Graphics, 39(6):1–15, December 2020.
- [Sch79] Arnold Schönhage. On the power of random access machines. In Proceedings of the 6th International Colloquium on Automata, Languages, and Programming, volume 71 of Lecture Notes in Computer Science, pages 520–529, 1979.
- [Sch80] A. Schönhage. Storage modification machines. SIAM Journal on Computing, 9(3):490–508, 1980.
- [Sha78] Michael Ian Shamos. Computational Geometry. PhD thesis, Yale University, 1978.
- [Zim99] Paul Zimmermann. Karatsuba square root. Research Report 3805, INRIA, 1999. http://hal.inria.fr/docs/00/07/28/54/PDF/RR-3805.pdf.