Improved Hardness of BDD and SVP Under Gap-(S)ETH
Abstract
We show improved fine-grained hardness of two key lattice problems in the norm: Bounded Distance Decoding to within an factor of the minimum distance () and the (decisional) -approximate Shortest Vector Problem (), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show:
-
1.
For all , there is no -time algorithm for for any constant , where and is the kissing-number constant, unless non-uniform Gap-ETH is false.
-
2.
For all , there is no -time algorithm for for any constant , where is explicit and satisfies for , for all , and as , unless randomized Gap-ETH is false.
-
3.
For all and all , there is no -time algorithm for for any constant , where is explicit and satisfies as for any fixed , unless non-uniform Gap-SETH is false.
-
4.
For all , , and all , there is no -time algorithm for for some constant , where is explicit and satisfies as , unless randomized Gap-SETH is false.
Our results for improve and extend work by Aggarwal and Stephens-Davidowitz (STOC, 2018) and Bennett and Peikert (CCC, 2020). Specifically, the quantities and (respectively, ) significantly improve upon the corresponding quantity (respectively, ) of Bennett and Peikert for small (but arise from somewhat stronger assumptions). In particular, Item 1 improves the smallest value of for which is known to be exponentially hard in the Euclidean norm () to an explicit constant for the first time under a general-purpose complexity assumption. Items 1 and 3 crucially use the recent breakthrough result of Vlăduţ (Moscow Journal of Combinatorics and Number Theory, 2019), which showed an explicit exponential lower bound on the lattice kissing number. Finally, Item 4 answers a natural question left open by Aggarwal, Bennett, Golovnev, and Stephens-Davidowitz (SODA, 2021), which showed an analogous result for the Closest Vector Problem.
1 Introduction
Lattices are geometric objects that look like regular orderings of points in real space. More formally, a lattice is the set of all integer linear combinations of some linearly independent vectors . The matrix whose columns are these vectors is called a basis of , and we denote the lattice it generates by , i.e., . The number of vectors in a basis is an invariant of , and is called its rank.
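As a concrete illustration of this definition, the following Python sketch enumerates a finite window of a lattice by ranging over small integer coefficient vectors (the function name and parameters are ours, for illustration only):

```python
import itertools

import numpy as np

def lattice_points(B, coeff_bound):
    """Enumerate lattice vectors B @ z over all integer coefficient
    vectors z with entries in [-coeff_bound, coeff_bound]; this is a
    finite window into the (infinite) lattice generated by B."""
    n = B.shape[1]
    return [B @ np.array(z)
            for z in itertools.product(range(-coeff_bound, coeff_bound + 1),
                                       repeat=n)]

# The standard basis generates the integer lattice Z^2.
B = np.array([[1, 0], [0, 1]])
pts = lattice_points(B, 1)
print(len(pts))  # 9: all points of {-1, 0, 1}^2
```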
In recent years, lattices have played a central role in both cryptanalysis and the design of secure cryptosystems. One very attractive quality of many lattice-based cryptosystems (e.g., [Ajt98, AD97, MR07, Reg09, GPV08]) is that they are secure assuming that certain key lattice problems are sufficiently hard to approximate in the worst case. Motivated by this and myriad other applications of lattices in computer science, many works have studied the -hardness of both exact and approximate lattice problems (e.g., [vEB81, ABSS97, Ajt98, Mic00, Mic01, Kho06, Kho05, LLM06, HR12, Mic12]). More recently, motivated especially by the need to understand the concrete security of lattice-based cryptosystems, a number of works [BGS17, AS18, BP20, AC21, ABGS21] have studied the fine-grained complexity of lattice problems. That is, for a meaningful real-world security bound, it is not enough to say merely that there is no polynomial-time algorithm for a suitable lattice problem. Rather, a key goal is to show -hardness, or even -hardness for some explicit , of a problem under general-purpose complexity-theoretic assumptions, like variants of the (Strong) Exponential Time Hypothesis.
In this work, we extend the latter line of research by showing improved fine-grained complexity results for two key lattice problems, the Bounded Distance Decoding Problem () and the Shortest Vector Problem (). To define these problems, we first recall some notation. Let denote the minimum distance of , i.e., the length of a shortest non-zero vector in , and let denote the distance between a target vector and . When using the norm, we denote these quantities by and , respectively.
BDD and SVP.
The Bounded Distance Decoding Problem in the norm for relative distance , denoted , is the search promise problem defined as follows: given a basis of a lattice and a target vector satisfying as input, the goal is to find a closest lattice vector to the target vector such that . (We note that is guaranteed to be unique when , but that is well-defined for any .) The -approximate Shortest Vector Problem in the norm, denoted , is the decision promise problem defined as follows: given a basis of a lattice and a distance threshold as input, the goal is to decide whether (i.e., the input is a YES instance) or (i.e., the input is a NO instance), with the promise that one of the two cases holds. (In other literature, this decision problem is often called , whereas usually denotes the corresponding search problem of finding a nonzero lattice vector for which , given an arbitrary lattice . There is a trivial reduction from the decision problem to the search problem, so any hardness of the former implies identical hardness of the latter.)
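For small ranks, the minimum distance underlying the SVP promise can be found by brute force over a window of coefficient vectors. The following sketch (our own naming, exponential time, illustration only) makes the quantity concrete:

```python
import itertools

import numpy as np

def shortest_nonzero_norm(B, p=2, coeff_bound=3):
    """Brute-force estimate of lambda_1 in the l_p norm: minimize
    ||B z||_p over nonzero integer z in a finite window. Exponential
    time in the rank; for illustration only."""
    n = B.shape[1]
    return min(np.linalg.norm(B @ np.array(z), ord=p)
               for z in itertools.product(range(-coeff_bound, coeff_bound + 1),
                                          repeat=n)
               if any(z))

B = np.array([[2, 1], [0, 2]])  # a hypothetical example basis
lam1 = shortest_nonzero_norm(B)
print(lam1)  # 2.0: the basis vector (2, 0) is shortest
```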
Although it seems far out of reach using known techniques, proving that is hard for a sufficiently large polynomial approximation factor , or that is hard for sufficiently small inverse-polynomial relative distance , would imply the provable security of lattice-based cryptography. (We note that the relative distance in is not an approximation factor per se, but it is analogous to one in a precise sense. Namely, for there is a rank-preserving reduction from to with [LM09, BSW16], so sufficiently strong (fine-grained) hardness of the former problem translates to corresponding hardness for the latter problem. A similar reduction holds in reverse, but with a larger loss: .) On the other hand, most concrete security estimates for lattice-based cryptosystems are based on the runtimes of the fastest known (possibly heuristic) algorithms for exact or near-exact . So, apart from its inherent theoretical interest, understanding the fine-grained complexity of (near-)exact and sheds light on questions of great practical importance.
1.1 Our Results
[Figure 1: left, bounds on the relative distance for which exponential hardness of BDD is known as the norm parameter varies; right, the tradeoff between the relative distance and the runtime exponent in the Gap-SETH-hardness results.]
In this work, we show improved fine-grained hardness of and , with an emphasis on results for smaller relative distance and larger approximation factor , and on analyzing the complexity of the problems as the underlying norm varies. (We note that trivially reduces to when , so showing hardness of for smaller is a stronger result.)
At a conceptual level, our work gives very general reductions to (presented in Theorem 3.4), which reduce the task of showing hardness for to analyzing properties of certain gadget lattices (described in Section 1.2.1). The few known hardness results for (essentially just [LLM06, BP20] and this work) are all shown using this gadget lattice framework, but the previous works required separate reductions. The reductions in this work give a unified way to show hardness results using this framework.
At a technical level, our improved results for follow from improved analysis of the techniques used in [AS18] and [BP20] together with our new framework. Aggarwal and Stephens-Davidowitz [AS18] presented three main results on the fine-grained hardness of , summarized in Items 1 to 3 in its abstract. Bennett and Peikert [BP20] showed new hardness results for by refining and adapting the analysis used to show [AS18], Item 1. Analogously, in this work we obtain two of our hardness results for by refining and adapting the analysis used to show [AS18], Items 2 and 3. Specifically, Theorem 1.1 corresponds to [AS18], Item 3 and Theorem 1.2 to [AS18], Item 2. Our third hardness result for , presented in Theorem 1.3, uses ideas from the other reductions together with our new framework. Finally, our improved result for , presented in Theorem 1.4, answers a natural question left open by [AS18, ABGS21].
Our results assume (randomized or non-uniform) “gap” variants of the Exponential Time Hypothesis (ETH) and Strong Exponential Time Hypothesis (SETH). Recall that “plain” ETH asserts that solving the -SAT problem on variables requires time, and “plain” SETH asserts that for every there exists such that solving the -SAT problem on variables requires -time. The gap variants of these assumptions hypothesize that similar runtime lower bounds hold even for approximating the number of satisfiable clauses in a -SAT formula to within some small constant approximation factor; see Section 2.5 for the precise definitions. We sometimes informally use the terminology “(Gap-)ETH-hardness” to denote -hardness of a problem assuming a variant of (Gap-)ETH, and “(Gap-)SETH-hardness” to denote -hardness of a problem assuming a variant of (Gap-)SETH for some explicit constant .
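To make these hypotheses concrete, here is a brute-force routine (our own illustration) for the quantity that Gap-(S)ETH concerns: the maximum fraction of simultaneously satisfiable clauses of a k-SAT formula, computed in exponential time by exhaustive search. The gap hypotheses assert that even approximating this fraction to within a small constant factor requires exponential time.

```python
import itertools

def max_sat_fraction(clauses, n):
    """Maximum fraction of simultaneously satisfiable clauses, by
    exhaustive search over all 2^n assignments. Literal i > 0 means
    variable x_i, and literal i < 0 means its negation."""
    best = 0
    for assignment in itertools.product([False, True], repeat=n):
        satisfied = sum(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses)
        best = max(best, satisfied)
    return best / len(clauses)

# (x1 or x2 or x3) and (not x1 or x2 or not x3) and (x1 or not x2 or x3)
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3)]
print(max_sat_fraction(clauses, 3))  # 1.0: the formula is satisfiable
```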
1.1.1 Hardness for BDD
Our first result shows improved exponential hardness of for all sufficiently small values of , including the important Euclidean case of , assuming a variant of Gap-ETH (see the left plot in Figure 1). Indeed, it improves the smallest value of for which exponential hardness of is known under a general-purpose complexity-theoretic assumption to , showing such hardness for an explicit constant less than the threshold for the first time. (Each of the quantities , , , , , and described in this section is “explicit” in the sense that it is expressible via some, not necessarily closed-form, expression. These expressions are easily computed to high accuracy in practice as shown, e.g., in Figure 1. Additionally, we emphasize that these quantities are constants in that they do not depend on the rank of the lattice in the corresponding problem.) (Using the ideas in this paper and [AS18], showing exponential hardness of essentially corresponds to showing exponential hardness of with for some constant . Additionally, using the -hardness framework in this paper, it would be relatively straightforward to show exponential hardness of with for any constant . So, the threshold is qualitatively quite natural, and trying to show hardness for an explicit constant is a natural goal.)
Theorem 1.1 (Gap-ETH-hardness of BDD, first bound).
For all , there is no -time algorithm for for any constant , unless non-uniform Gap-ETH is false. Here , where is the kissing-number constant defined in Section 3.2.
Our second result shows improved exponential hardness of in a different regime and under a somewhat weaker assumption.
Theorem 1.2 (Gap-ETH-hardness of BDD, second bound).
For all , there is no -time algorithm for for any constant , unless randomized Gap-ETH is false. Here is an explicit constant, defined in Equation 20, which satisfies for , for all , and as .
In [AS18], Aggarwal and Stephens-Davidowitz showed SETH-hardness of for all . To partially overcome the “ barrier,” they generalized their proof techniques to show Gap-ETH-hardness of for all . The results in [BP20] adapted the former techniques of [AS18] to show SETH-hardness of , and similarly got stuck at the barrier in the sense that they could not prove hardness of with and simultaneously. The proof of Theorem 1.2 can be thought of as adapting the latter, generalized techniques of [AS18] to , and analogously allows us to prove Gap-ETH-hardness of with for all .
Compared to related quantities, is: at most “ with norm embeddings” for all ; strictly less than for all sufficiently small ; strictly less than for all sufficiently large ; and strictly less than the minimum of and for intermediate values of . That is, improves on both (even with norm embeddings) and for intermediate values of ; again, see the left plot in Figure 1. (Recall that [BP20] shows exponential hardness of for assuming randomized ETH, and Theorem 1.1 above shows such hardness for assuming non-uniform Gap-ETH.) However, Theorem 1.2 relies on a somewhat stronger hardness assumption than the one used in [BP20], and a somewhat weaker hardness assumption than Theorem 1.1, so the prior and new results are formally incomparable.
Our third result shows -hardness of for any and , where is an explicit constant. For all sufficiently small and sufficiently large , not only improves on the corresponding quantity in [BP20], but also on . For example, we show that while [BP20] was only able to show and (see the right plot in Figure 1).
Theorem 1.3 (Gap-SETH-hardness of BDD).
For all and all , there is no -time algorithm for for any constant , unless non-uniform Gap-SETH is false. Here is an explicit constant, defined in Equation 21, which satisfies as for any fixed .
Theorems 1.1 and 1.3 both crucially rely on the recent breakthrough work of Vlăduţ [Vlă19] showing an explicit exponential lower bound on the lattice kissing number, i.e., on the maximum number of vectors achieving . More specifically, [Vlă19] shows that there exists a lattice of every rank with kissing number at least , where (see Definition 3.5 and Theorem 3.6).
Vlăduţ’s specific lower bound on translates to the bounds on and that we obtain. Additionally, our use of “non-uniform” rather than “randomized” Gap-(S)ETH in the preceding results arises from the fact that it is not clear whether Vlăduţ’s exponential kissing number lattices can be efficiently constructed uniformly (even using randomness); an affirmative answer would relax the assumptions accordingly. Indeed, our results are agnostic to Vlăduţ’s specific construction, and an improved lower bound on would immediately imply improved upper bounds on the values and . Additionally, in Section 3.4.1 we sketch an approach for removing the “gap” part of the assumption in Theorem 1.3, which would be a further relaxation.
1.1.2 Hardness for SVP
Our final result shows the same strong runtime lower bounds for with some constant under (randomized) Gap-SETH as [AS18] showed for under (randomized) SETH. This answers a natural question left open by [ABGS21], which analogously showed the same runtime lower bounds for with some constant under Gap-SETH as [BGS17] showed for the Closest Vector Problem () under SETH.
Theorem 1.4 (Gap-SETH-hardness of SVP).
For all , , and all , there is no -time algorithm for for some constant , unless randomized Gap-SETH is false. Here is an explicit constant, defined in Equation 23, which satisfies as .
The reduction used to prove Theorem 1.4 is itself a natural modification of the reduction used in [AS18] to prove SETH-hardness of exact , but its analysis is more nuanced. We emphasize that simply plugging an instance of with rather than into the reduction of [AS18] does not yield corresponding hardness of approximation for with ; a modified reduction is necessary. Finally, we remark that the somewhat odd-looking requirement in Theorems 1.3 and 1.4 is an artifact of the “upstream” hardness results we employ for ; see Theorem 2.18.
1.2 Our Techniques
1.2.1 Locally Dense Lattices
As in nearly all prior work on the complexity of and (e.g., [Mic01, Kho05, LLM06, AS18, BP20]), a key component of our results is the construction of a family of “locally dense” lattices, which are specified by a lattice and corresponding target vector . For our purposes, a locally dense lattice is one with few “short” vectors, many vectors “close” to , but few vectors “too close” to . (Other works such as [Mic14] define locally dense lattices in a closely related but different way, e.g., without the requirement of few “too close” vectors.)
For a discrete set , which we will take to be a lattice or a subset of a lattice, define
Somewhat more formally, we define a locally dense lattice , with relative distance in the norm to be one for which
(1)
for some . That is, , is such that the number of “close vectors” (within distance of ) is an exponential factor larger than the number of short vectors (of norm at most one). Similarly, we require there to be an exponential factor more close vectors than “too close” vectors, along with some other technical conditions. We defer discussing these issues until the main body of the paper, and for the remainder of the introduction focus on the constraint in Equation 1.
A crux in showing hardness of and is constructing good locally dense lattices, and their parameters govern the precise hardness results that we can obtain. A family of locally dense lattices with smaller relative distance and larger leads to stronger hardness results. To obtain ETH-type hardness results, we simply need to be a constant greater than , and then we can show -hardness of for any constant . For SETH-type hardness results, we get -hardness of whenever our reduction to has a multiplicative rank-increase factor of . The value of depends on the gap factor in Equation 1, so to show such hardness for explicit we need an explicit lower bound on . Our reductions also give a tradeoff between and , as shown in the right plot in Figure 1. The full situation is actually a bit more complicated when taking “too close” vectors into account, but we again defer discussing this for now. The situation for is similar to the situation for .
1.2.2 Sparsification
An important technique in our work is randomized lattice sparsification, an efficient algorithm that essentially does the following. Given (a basis of) a lattice and an index as input, the algorithm randomly samples a sublattice such that for any fixed, finite set of lattice points satisfying some mild conditions, with probability near . A variant of this algorithm, additionally given as input, randomly samples and such that for any fixed, finite set of points satisfying some mild conditions, with probability near .
Intuitively, some mild caveats aside, sparsification says that a lattice with few short vectors (and few “too close” vectors) is just as good as a lattice with no short non-zero vectors (and no “too close” vectors), since the latter can be efficiently obtained from the former. Indeed, sparsifying with index allows us to turn a lattice and target satisfying, say, into a lattice and target with and , so and . That is, the output , satisfies the promise . See Section 2.2 for a formal description of sparsification.
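The basic mechanism behind sparsification can be illustrated directly (hypothetical helper names; see Section 2.2 for the formal statement): imposing a random mod-q linear constraint on coefficient vectors keeps each fixed nonzero vector with probability about 1/q, thinning the lattice by roughly a factor of q while the origin always survives.

```python
import itertools

import numpy as np

def sparsify_coeffs(coeffs, z, q):
    """Keep the coefficient vectors c with <z, c> = 0 (mod q); the
    corresponding lattice vectors form the sparsified sublattice."""
    return [c for c in coeffs if int(np.dot(c, z)) % q == 0]

q = 5
coeffs = list(itertools.product(range(-4, 5), repeat=2))  # a finite window
z = np.array([1, 2])  # a fixed nonzero constraint (sampled randomly in the reduction)
kept = sparsify_coeffs(coeffs, z, q)
print((0, 0) in kept, len(kept), len(coeffs))  # True 17 81: about a 1/q fraction
```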
1.2.3 A Transformation Using Locally Dense Lattices
Define to be the following variant of the decision version of the -approximate Closest Vector Problem: given a basis of a rank- lattice and a target vector as input, decide whether there exists such that (i.e., the input is a YES instance), or whether (i.e., the input is a NO instance), under the promise that one of the two cases holds. In other words, is the variant of that asks whether there is a binary combination of basis vectors close to the target. Much is known about the (fine-grained) complexity of , which will be useful for us (see Theorems 2.17 and 2.18).
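A brute-force sketch of this binary variant (our own function name), which searches over all binary coefficient vectors rather than all integer ones, takes time exponential in the rank:

```python
import itertools

import numpy as np

def binary_cvp_distance(B, t, p=2):
    """Minimum l_p distance from the target t to any {0,1}-combination
    of the columns of B; 2^n time in the rank n."""
    n = B.shape[1]
    return min(np.linalg.norm(B @ np.array(x) - t, ord=p)
               for x in itertools.product([0, 1], repeat=n))

B = np.eye(2)
t = np.array([0.9, 0.1])
print(binary_cvp_distance(B, t))  # ~0.1414: the combination x = (1, 0) is closest
```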
Our reductions from to and to have the same basic form. Given a rank- instance of for some , we apply the following transformation with some scaling factors and some locally dense lattice of rank with target satisfying Equation 1:
(2)
Essentially the same transformation appears in both [AS18] and [BP20], and similar ideas appear in a number of works before that. Our work differs in its constructions of locally dense lattice gadgets , , its more general reductions, and its improved analysis.
Here we give a rough analysis of the transformation using two observations. First, we observe that appending to allows us to upper bound the number of short lattice vectors in by
(3)
for any . Second, suppose that , is a YES instance of . Then there exists such that , and hence for each with we get that , where . So,
(4)
To transform a YES instance of to a valid instance of for some , it essentially suffices to set the parameters and use suitable , so that, say,
(5)
Indeed, if Equation 5 holds, then by Equations 3 and 4, . We can then sparsify to obtain a lattice with no non-zero vectors of norm less than , and at least one vector within distance of , as needed.
We recall that by assumption, , which is important for satisfying Equation 5 since will typically be exponentially large in . We also need that if the input instance is a NO instance, then there will be few vectors in that are close to , which depends on having few vectors “too close” to , but again we defer discussing this. See Lemma 3.2 for a precise description of the useful properties of the transformation given in Equation 2.
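In matrix terms, the transformation in Equation 2 can be read as a scaled block-diagonal stacking of the input basis with the gadget basis, with the targets concatenated accordingly (the scalings a and b and the function name below are our own notation for the unspecified scaling factors):

```python
import numpy as np

def combine_with_gadget(B_cvp, t_cvp, B_gad, t_gad, a, b):
    """Stack the scaled input instance and the scaled locally dense
    gadget block-diagonally, and concatenate their scaled targets."""
    d1, n1 = B_cvp.shape
    d2, n2 = B_gad.shape
    B = np.block([
        [a * B_cvp, np.zeros((d1, n2))],
        [np.zeros((d2, n1)), b * B_gad],
    ])
    t = np.concatenate([a * t_cvp, b * t_gad])
    return B, t

B, t = combine_with_gadget(np.eye(2), np.array([0.5, 0.5]),
                           np.eye(3), np.full(3, 0.5), 1.0, 2.0)
print(B.shape, t.shape)  # (5, 5) (5,)
```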
When reducing to instead of we apply a further transformation to before sparsifying. Namely, we apply Kannan’s embedding, which appends the vector , for some value , to to obtain a new basis:
The analysis in this case is a bit more subtle as well—we need to upper bound quantities of the form not just for (corresponding to short and “too close” vectors in the case, respectively) but for all integers too—but the idea is similar. In fact, we use a result from [AS18] (presented in Theorem 4.2) that analyzes the combination of Kannan’s embedding and sparsification already, essentially reducing our task to bounding the quantities .
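Kannan's embedding itself is a one-column append, sketched below with our own function name and with u standing in for the unspecified embedding value:

```python
import numpy as np

def kannan_embed(B, t, u):
    """Append the column (t; u) to B, padding B with a zero row; short
    vectors in the new lattice correspond to lattice vectors close to t."""
    d, n = B.shape
    top = np.hstack([B, t.reshape(-1, 1)])
    bottom = np.hstack([np.zeros((1, n)), np.array([[u]])])
    return np.vstack([top, bottom])

B_embedded = kannan_embed(np.eye(2), np.array([0.5, 0.5]), 1.0)
print(B_embedded.shape)  # (3, 3)
```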
1.2.4 Specific Locally Dense Lattices
We conclude this summary of techniques by describing the specific locally dense lattices that we use to instantiate Equation 2. We use two main families of locally dense lattices for our results.
Exponential kissing number lattices.
The first family of locally dense lattices is derived from a family of “exponential kissing number” lattices . Here, is of rank and has exponential Euclidean kissing number, i.e., . Previously, [AS18] showed how to use the existence of such a family to prove -hardness of for all and some (and in particular, for , for which the result was not already known from other techniques), assuming non-uniform Gap-ETH. However, no such family of lattices was known at the time, and in fact proving the existence of such a family was a longstanding open question.
In seminal work appearing shortly after the publication of [AS18], Vlăduţ [Vlă19] succeeded in constructing such a family of lattices. Moreover, he proved the existence of such a family with an explicit exponential lower bound. More specifically, he showed the existence of with where (see Theorem 3.6).
The proofs of Theorems 1.1 and 1.3 both use these “Vlăduţ lattices” to construct locally dense lattices, but in different ways. The proof of Theorem 1.1 constructs a locally dense lattice , with relative distance , but with a non-explicit lower bound on the gap factor in Equation 1. The proof of Theorem 1.3 constructs a locally dense lattice , with relative distance but with an explicit lower bound on —essentially . The values in Theorem 1.3 are defined to be a certain quantity relating the maximum possible kissing number in a lattice of rank , roughly , and the number of vectors in of norm at most for some .
The proof of Theorem 1.3 is actually somewhat simpler than that of Theorem 1.1, so we first give a bit more detail on it. We note that taking and for a Vlăduţ lattice almost yields a locally dense lattice family with many close vectors for and relative distance , but there are two issues: (1) Vlăduţ lattices have exponential kissing number with respect to the norm rather than general norms, and (2) the origin is itself a “too close” vector.
We handle issue (1) by applying norm embeddings [RR06] with distortion to to obtain a family of lattices with exponentially high kissing number in the norm:
where applying the norm embedding is efficient for any . We handle issue (2) by “sparsifying away the origin.” Specifically, for all sufficiently large we show how to sample a sublattice and satisfying
with positive probability. In particular, this shows that such lattices exist.
For the proof of Theorem 1.1, we take to be scaled so that , and take to be a uniformly random vector of norm for some appropriately chosen constant . We then analyze for some appropriately chosen constant . Intuitively, there is a tradeoff between choosing smaller , which makes larger but requires to be smaller, and larger , which makes smaller but allows for to be larger. The relative distance that our reduction achieves is essentially the smallest for which we can ensure that . To translate these results to general norms, we again use norm embeddings. We also use additional techniques for dealing with “too close” vectors.
We note that the construction of locally dense lattices from lattices with exponential kissing number in [AS18] does not give explicit bounds on the relative distance or gap factor achieved; the reduction there essentially just needs and for non-explicit . On the other hand, the construction in Theorem 1.1 gives an explicit bound on but not on , and the construction in Theorem 1.3 gives an explicit bound on but with . Therefore, Theorems 1.1 and 1.3 can be seen as different refinements of the corresponding analysis in [AS18]. A very interesting question is whether it’s possible to get a construction that simultaneously achieves explicit and relative distance ; our current techniques do not seem to be able to achieve this. Such a construction would lead to new (Gap-)SETH-hardness results for .
The integer lattice .
The second family of locally dense lattices that we consider, used to prove Theorems 1.2 and 1.4, simply takes to be the integer lattice , and to be the all-s vector for some constant (without loss of generality, ):
We will be especially interested in the case where . In this case , where . So, for our analysis it essentially suffices to upper bound for some . We have good techniques for doing this; see Section 2.3. (We note that the “ barrier” mentioned earlier comes from being the smallest value of satisfying .)
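A quick numerical check of the key property of this gadget (a sketch with our own enumeration): with t the all-halves vector, the 2^n binary vectors are exactly the integer points at Euclidean distance sqrt(n)/2 from t, giving exponentially many close vectors.

```python
import itertools
import math

import numpy as np

n = 3
t = np.full(n, 0.5)
r = math.sqrt(n) / 2  # every 0/1 vector lies at exactly this distance from t
close = [z for z in itertools.product(range(-2, 3), repeat=n)
         if np.linalg.norm(np.array(z) - t) <= r + 1e-9]
print(len(close))  # 8 = 2**n, and each has all coordinates in {0, 1}
```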
The SETH-hardness result for for in [AS18], the hardness results for in [BP20], and the Gap-SETH-hardness result for in Theorem 1.4 all take , and analyze for some . That is, they essentially just need to upper bound for some (with for the hardness results). As alluded to in the discussion after the statement of Theorem 1.2, the Gap-ETH-hardness result in [AS18] for essentially works by proving that for every there exist and (not necessarily or ) such that
(6)
for some non-explicit .
We do something similar, but study the more refined question of what the minimum value of is such that Equation 6 holds for some and . This minimum value of is essentially how we define the quantities used in Theorem 1.2; see Equation 20 for a precise definition. We note that, interestingly, in experiments this minimum value of is always achieved by simply taking . That is, empirically it seems that we do not lose anything by fixing and only varying . (This is especially interesting since [EOR91] notes that is not maximized by for some , including , and some fixed . For , [MO90] and [EOR91] note that for any fixed , is in fact minimized (up to a subexponential error term) by taking .) We leave proving this as an interesting open question, but note that the strength of our results does not depend on its resolution either way.
1.3 Open Questions
One of the most interesting aspects of this and other work on the complexity of lattice problems is the interplay between geometric objects—here, lattices with exponential kissing number and locally dense lattices generally—and hardness results. Proving a better lower bound on would immediately translate into an improved bound on the values of and , and more generally proving the existence of some family of gadgets with smaller relative distance and at least close vectors would translate into a hardness result improving on both Theorems 1.1 and 1.2.
There is also the question of constructing locally dense lattices. The difference between existence and efficient randomized construction of locally dense lattices roughly corresponds to needing “non-uniform” versus “randomized” hardness assumptions. It is also an interesting question whether randomness is needed at all for showing hardness of or . In this work we crucially use randomness for sparsification in addition to using it to construct locally dense lattices. Indeed, derandomizing hardness reductions for (and similarly, ) is a notorious, decades-old open problem.
1.4 Acknowledgments
We thank Noah Stephens-Davidowitz for helpful comments.
2 Preliminaries
We denote column vectors by boldface, lowercase letters (e.g., ). We occasionally abuse notation and write things like instead of . We use and to denote the all-s and all-s vectors of dimension , respectively. We sometimes omit the subscript when it is clear from context. Finally, we occasionally abuse notation by mixing elements of a finite field (for prime ) and integers when performing arithmetic. In this case, we are really associating elements of with some distinguished set of integer representatives, e.g., .
2.1 Lattices and Point Counting
A lattice is a discrete additive subgroup of ; concretely, a lattice is the set of all integer linear combinations of a set of linearly independent vectors in . This set of vectors is called a basis of the lattice. Its cardinality is defined to be the rank of the lattice (which turns out to be independent of the choice of basis), and the dimension of the basis vectors is defined to be the dimension of the lattice. A basis is often represented by a matrix whose columns are the vectors in the basis, and the lattice generated by is denoted . Using this representation, a rank- lattice with basis can be written as . We denote the Moore-Penrose pseudo-inverse of a matrix by . When is a basis and is a lattice vector, is its coefficient vector.
We recall that for the norm of a vector is defined as , and for it is defined as . The minimum distance of a lattice in the norm is defined to be the minimum length of nonzero vectors in the lattice, i.e.,
Equivalently, as the name suggests, it is the minimum distance between any two distinct vectors in the lattice. For , we simply write for . We denote the distance between a vector and a lattice in the norm by
For , a discrete set , a target , and a distance , we define the following two point-counting functions:
We will typically use a lattice or a subset of a lattice as the discrete set .
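These counting functions can be computed naively for small examples (our own sketch; the true counts range over the whole infinite lattice, here truncated to a finite coefficient window, which suffices when the radius is small):

```python
import itertools

import numpy as np

def count_within(B, t, r, p=2, coeff_bound=4):
    """Count lattice vectors of the lattice generated by B at l_p
    distance at most r from the target t, over a finite window of
    integer coefficient vectors."""
    n = B.shape[1]
    return sum(
        np.linalg.norm(B @ np.array(z) - t, ord=p) <= r + 1e-9
        for z in itertools.product(range(-coeff_bound, coeff_bound + 1),
                                   repeat=n))

# In Z^2, four lattice points lie within distance 1 of (0.5, 0.5).
print(count_within(np.eye(2), np.array([0.5, 0.5]), 1.0))  # 4
```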
In the following claim, we observe two simple bounds on point counts in lattices.
Claim 2.1.
For any lattice , target , and :
-
1.
;
-
2.
.
Proof.
For Item 1, the inequality is trivial when . For , we have
where is an arbitrary shortest vector in with .
For Item 2, if then the inequality is trivial. Otherwise fix to be such that . Then for every satisfying , we have and by the triangle inequality. Moreover, for , . Hence we know , as desired. ∎
Given lattice-target pairs and , the following claim gives an upper bound on the number of close vectors in the “direct sum lattice” to in terms of the product of the numbers of close vectors in to and in to .
Claim 2.2.
For lattices , targets , and ,
Additionally, if or , . Moreover, these results hold with in place of .
Proof.
For every lattice vector , (respectively ) holds only if (resp. ) and (resp. ). Hence the desired inequalities follow. ∎
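A small numerical sanity check of this product bound, taking both component lattices to be (a window of) the integers so that the direct sum is a window of Z^2 (the enumeration helper N is our own):

```python
import itertools

import numpy as np

def N(points, t, r):
    """Number of points at Euclidean distance at most r from t."""
    return sum(np.linalg.norm(np.array(z) - t) <= r + 1e-9 for z in points)

Z1 = [(z,) for z in range(-5, 6)]
Z2 = list(itertools.product(range(-5, 6), repeat=2))
t1, t2, r = np.array([0.3]), np.array([0.4]), 1.0
lhs = N(Z2, np.array([0.3, 0.4]), r)   # count in the direct-sum lattice
rhs = N(Z1, t1, r) * N(Z1, t2, r)      # product of the component counts
print(lhs, rhs)  # 4 4: the bound lhs <= rhs holds (here with equality)
```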
2.2 Sparsification
The following lemma shows that the number of vectors in a collection of pairwise linearly independent (respectively, distinct) vectors in that satisfy a random linear constraint (respectively, random affine constraint) is relatively concentrated around its expectation . Similar results appear in [Kho05] and [Ste16].
Lemma 2.3.
Let , let , let be a prime, and let .
-
1.
If for all and , then
-
2.
If for all , then
In particular, for , the probabilities are upper bounded by .
Proof.
We start by proving Item 1. Let be an indicator random variable for the event , where . It holds that , that , and that are pairwise independent. For the last point, because and for are linearly independent, the linear map has a kernel of dimension . It follows that for a fraction of vectors , and hence that are pairwise independent. The result then follows by applying Chebyshev’s inequality to .
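The indicator calculation in this proof can be checked directly for a tiny case (our own enumeration): for any fixed nonzero x over F_q, exactly a 1/q fraction of constraint vectors z satisfy the linear constraint.

```python
import itertools

q = 5
x = (1, 2)  # a fixed nonzero vector with two coordinates, mod q
hits = sum(1 for z in itertools.product(range(q), repeat=2)
           if (z[0] * x[0] + z[1] * x[1]) % q == 0)
print(hits, q ** 2)  # 5 25: exactly a 1/q fraction of the z's
```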
The following lemma shows that when , it is probable that no vector in the collection satisfies a random affine constraint. This is similar to [BP20, Lemma 2.8].
Lemma 2.4.
Let , let be a prime, and let . Then:
-
1.
It holds that
-
2.
If , then for fixed ,
Proof.
We have that for each . Moreover, if then for any fixed . The claims then follow by union bound. ∎
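The key fact in this proof, that a uniformly random affine constraint hits any fixed vector with probability exactly 1/p, can be checked by exact enumeration. The function name and the parameters p = 5, d = 2 below are illustrative.

```python
from itertools import product

p, d = 5, 2

def prob_affine_hit(x):
    """Exact probability, over uniform (z, c) in F_p^d x F_p, of the
    affine event <z, x> + c = 0 (mod p)."""
    hits = sum(1 for z in product(range(p), repeat=d) for c in range(p)
               if (sum(zi * xi for zi, xi in zip(z, x)) + c) % p == 0)
    return hits / (p ** (d + 1))

# The probability is exactly 1/p for *every* x, including x = 0: for
# each fixed z, exactly one value of c satisfies the constraint.  A
# union bound over a small collection then gives Lemma 2.4.
for x in [(0, 0), (1, 0), (2, 3)]:
    assert prob_affine_hit(x) == 1 / p
print(1 / p)
```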
We next show how to sparsify a lattice and argue about vector counts in the resulting sparsified lattice. In particular, we use Lemmas 2.3 and 2.4 to lower bound the minimum distance, lower bound the number of close vectors (and hence also upper bound the close distance), and lower bound the too-close distance in the resulting sparsified lattice.
Proposition 2.5.
Let , let be a lattice of rank with basis , let , let be a prime, and let . Let be sampled uniformly at random, and define
Then the following hold:
-
1.
(Minimum distance.) If , then .
-
2.
(Close vector count and distance.) If , then for any ,
In particular, for , .
-
3.
(Too-close distance.) .
Proof.
For Item 1, let , let be the distinct non-zero lattice vectors satisfying , and let for . Because , we know that and thus . Therefore by Lemma 2.4, Item 2 with ,
For Items 2 and 3 we will use the fact that the statistical distance between and with sampled uniformly at random is . This follows by a direct computation and noting that and are identically distributed conditioned on . Additionally, we note that
(7)
For Item 2, let , let be the distinct lattice vectors satisfying , and let for . By triangle inequality, for all . Then, because , for we know that and thus . Let be sampled uniformly at random. Then by Equation 7 and Lemma 2.3, Item 2,
For Item 3, let , let be the distinct lattice vectors satisfying , and let for . Let be sampled uniformly at random. Then by Equation 7 and Lemma 2.4, Item 1,
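The effect of sparsification can be seen concretely on a toy example. The sketch below assumes the standard form L' = { y in L : <z, B^{-1} y> = 0 (mod p) } used in [Kho05, Ste16] (for L = Z^2 with B the identity, the coefficient vector of y is y itself); the seed, box, and parameters are illustrative.

```python
import random
from itertools import product

random.seed(1)
n, p = 2, 3
z = [random.randrange(1, p) for _ in range(n)]  # nonzero mod-p constraint

def in_sparsified(y):
    return sum(zi * yi for zi, yi in zip(z, y)) % p == 0

box = range(-6, 7)
L = list(product(box, repeat=n))
Lp = [y for y in L if in_sparsified(y)]
# Roughly a 1/p fraction of the lattice points in the box survives,
# i.e. L' has index p in L ...
print(len(Lp), len(L) // p)
# ... and the minimum distance can only grow, since every nonzero
# vector of L' is also a nonzero vector of L.
min_sq = min(sum(c * c for c in y) for y in Lp if any(y))
assert min_sq >= 1  # lambda_1(Z^2)^2 = 1; here the norm-1 vectors are killed
```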
2.3 Point Counting via the Theta Function
Here we present results related to counting points in using theta functions (which are the norm analogs of the Gaussian function). The main result presented in this section, Theorem 2.8, was originally proved in [MO90] and [EOR91]. Here we follow the nomenclature used in [AS18]. For , , and , define
We note that without loss of generality we may take . For , extend this to
For , , and , define
where is the probability distribution over that assigns probability to . For , extend this to
The following fact about is claimed but unproven in [AS18].
Claim 2.6.
For any , and , there exists a unique such that .
Proof.
The following defines functions that, as we will see, are equal to or closely approximate .
Definition 2.7.
For , , and , we define as follows.
-
1.
For , define .
-
2.
For , define and for define .
-
3.
For , define
where is the unique solution to .
We note that is well-defined in the case by Claim 2.6. We will also work with the inverse function , which we show is well-defined in Claim 2.9 for .
The following theorem says that is equal to the number of integer points in a -scaled, -centered ball in dimensions up to a subexponential error term. This also implies that corresponds to the radius at which , where the approximation again holds up to a subexponential error term. This theorem will be very important for our analysis. We state the result using the notation of [AS18, Theorem 6.1], but again note that it was originally proven for in [MO90] and for general in [EOR91].
Theorem 2.8.
For any constants and , there is another constant such that for any and ,
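The easy direction of this theorem can be illustrated numerically: for every tau > 0, each integer point in the ball contributes exp(tau * (R - ||z||_p^p)) >= 1, so the point count is at most exp(tau * R) * theta(tau)^n; Theorem 2.8 says that optimizing over tau is tight up to subexponential factors. The sketch below, with small illustrative parameters, checks this Chernoff-type bound by brute force.

```python
import math
from itertools import product

def theta(tau, p, K=60):
    # Theta function of Z: sum over k in Z of exp(-tau * |k|^p)
    return 1 + 2 * sum(math.exp(-tau * k ** p) for k in range(1, K + 1))

def count_ball(n, p, R, box=6):
    # |{ z in Z^n : ||z||_p^p <= R }| by brute force
    return sum(1 for z in product(range(-box, box + 1), repeat=n)
               if sum(abs(c) ** p for c in z) <= R)

n, p, R = 3, 2, 4.0  # small illustrative parameters
exact = count_ball(n, p, R)
# For every tau > 0:  N <= exp(tau * R) * theta(tau)^n.
best = min(math.exp(t * R) * theta(t, p) ** n
           for t in [0.1 * i for i in range(1, 40)])
print(exact, round(best, 1))
assert exact <= best
```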
We conclude this subsection with two technical claims. Claim 2.9 gives properties of the function that we will frequently use in later sections. Claim 2.10 argues about several limits involving the functions , , and , and notes that they do not depend on the parameter . For readability, the proofs of these claims are deferred to Appendix A.
Claim 2.9.
For any and :
-
1.
For , for some constant ;
-
2.
For , , i.e., the unique solution of minimizes ;
-
3.
is strictly increasing for and differentiable (and hence continuous) for ;
-
4.
is well-defined and strictly increasing for , and differentiable (and hence continuous) for .
Claim 2.10.
For any and :
-
1.
;
-
2.
;
-
3.
.
In particular, none of the preceding limits depends on .
2.4 Lattice Problems
We next introduce the main lattice problems that we will work with, starting with a variant of the Closest Vector Problem ().
Definition 2.11.
For and , the decision promise problem is defined as follows. An instance consists of a basis and a target vector .
-
•
It is a YES instance if there exists such that .
-
•
It is a NO instance if .
For , we will simply write as . We note that for , the distance threshold in the YES case is without loss of generality, because we can scale the lattice and target vector. (Our definition of below uses an instance-dependent distance threshold , which will be convenient.) We also note that is trivially a subproblem of “standard” (which we do not define formally here), i.e., an instance of the former is also an instance of the latter.
We next introduce , a search variant of where the target vector is promised to be close to the lattice.
Definition 2.12.
For and , an instance of the search problem is a lattice basis and a target satisfying , and the goal is to find a closest lattice vector to such that .
We note that there is another frequently used version of where the goal is instead to find a lattice vector satisfying , which is less demanding than the goal in the definition above. By [LM09, Lemma 2.6], the version of defined above (in rank ) efficiently reduces to this alternative version (in rank ), so the exponential-time hardness results we prove for the version defined above immediately apply to the alternative version as well. (Formally, as stated, [LM09, Lemma 2.6] applies only to the case where and , but a very similar reduction works for all and .)
Finally, we introduce the decision version of the Shortest Vector Problem ().
Definition 2.13.
For and , the decision promise problem is defined as follows. An instance consists of a basis and a distance threshold .
-
•
It is a YES instance if .
-
•
It is a NO instance if .
We note that “” and “” are sometimes instead used to refer to the search versions of the Shortest and Closest Vector Problems, with their respective decision problems instead called “” and “.” There is a trivial reduction from the decision versions to the corresponding search versions of these problems, so any hardness of the former implies identical hardness of the latter. In particular, our main hardness result for decisional in Theorem 1.4 immediately implies corresponding hardness of search as well.
2.5 Hardness Assumptions and Results
We will use the following hypotheses, which are “gap” variants of the celebrated (Strong) Exponential Time Hypothesis ((S)ETH) of Impagliazzo and Paturi [IP01]. Gap-ETH was introduced in works by Dinur [Din16] and by Manurangsi and Raghavendra [MR17], and Gap-SETH was introduced by Manurangsi in his Ph.D. thesis [Man19]. Recall that in the -Gap--SAT problem with , the goal is to distinguish between -SAT formulas in which at least a fraction of clauses are satisfiable, and ones in which strictly less than an fraction of clauses are satisfiable.
Definition 2.14 (Gap Exponential Time Hypothesis (Gap-ETH)).
There exists such that no -time algorithm solves -Gap--SAT on variables.
Definition 2.15 (Gap Strong Exponential Time Hypothesis (Gap-SETH)).
For every there exist and such that no -time algorithm solves -Gap--SAT on variables.
We also define “plain” SETH.
Definition 2.16 (Strong Exponential Time Hypothesis (SETH)).
For every there exists such that no -time algorithm solves -SAT on variables.
We define “randomized” (respectively, “non-uniform”) versions of the above hypotheses analogously, but with “randomized algorithm” (respectively, “non-uniform algorithm”) in place of “algorithm.” Here by an -time non-uniform algorithm for solving Gap--SAT, we mean a family of circuits such that solves Gap--SAT on variables and is of size at most .
We note that, trivially, the randomized versions of these hypotheses are at least as strong as the normal (deterministic) versions, and that, by standard randomness preprocessing arguments such as those used in the proof of Adleman’s Theorem (which asserts that ), the non-uniform versions are at least as strong as the randomized versions. See, e.g., [BP20].
2.5.1 Hardness of CVP’ Assuming (Gap-)(S)ETH
We next recall known hardness results about the complexity of assuming (Gap-)(S)ETH.
Theorem 2.17 ([BGS17, Corollary 5.2]).
For all there exists a constant such that has no -time algorithm on lattices of rank assuming Gap-ETH.
Theorem 2.18 ([ABGS21, Theorem 4.2]).
For all and all there exists a constant such that there is no -time algorithm for on lattices of rank assuming Gap-SETH.
We also present a variant of the preceding theorem which shows a similar runtime lower bound for assuming “plain” SETH, but for of the form as opposed to constant .
Theorem 2.19 ([ABGS21, Corollary 3.3]).
For all and all there exists for some constant such that there is no -time algorithm for on lattices of rank assuming SETH.
We remark that all of these theorems are stated for rather than in the original work, but we note that inspection of their proofs shows that they in fact hold for . We also remark that the approximation factors stated in the original versions of Theorems 2.17 and 2.18 have an explicit dependence on the soundness and completeness parameters and in the source Gap--SAT problems from which the reductions are given. Gap-ETH and Gap-SETH as stated in Definitions 2.14 and 2.15 assume soundness for some non-explicit and perfect completeness (), resulting in the non-explicit constant here. Finally, we remark that Theorem 2.19 is stated in its original version as only showing hardness for exact , but inspection of its proof shows that the result holds with , as claimed.
3 Gap-(S)ETH-hardness of BDD
The goal of this section is to prove Theorems 1.1, 1.3 and 1.2, which show the Gap-(S)ETH-hardness of . The common proof idea is to construct reductions from (whose Gap-(S)ETH-hardness is known; see Theorems 2.17 and 2.18) to .
We start by introducing two intermediate problems used in our reductions. The first problem, -, is essentially a relaxation of the decision version of in which there are at least (“good”) close vectors to the target (at distance at most ) in the YES case, at most (“annoying”) short non-zero vectors (of norm strictly less than ) in the YES case, and at most “too close” vectors to the target (at distance at most ) in the NO case.
Definition 3.1.
Let , , , and . An instance of the decision promise problem - is a lattice basis , a target , and a distance .
-
•
It is a YES instance if and .
-
•
It is a NO instance if .
We note that - is essentially a decision version of , which is a search problem, and the decision-to-search reduction from - to is trivial: call the search oracle and verify if it outputs a lattice vector satisfying .
3.1 General Reductions from CVP’ to BDD
Here we show the reductions from to - and - to -. Since we also have the trivial reduction from - to “ordinary” search , composing these reductions gives the overall desired reduction from to .
We start with the following lemma analyzing a transformation involving instances and a lattice-target gadget . This transformation can be regarded as a generalization of the techniques used in [AS18, BP20]; in particular the lemma generalizes [BP20, Lemma 3.5]. The transformation is versatile, and the lemma is used in the main reduction in this section as well as the main reduction in Section 4.
Lemma 3.2.
For any , define the following transformation from a basis of a rank- lattice to a basis of a rank- lattice , and from a target vector to a target vector , parameterized by a scaling factor and a gadget consisting of a rank- lattice and a target vector :
Then for any , , and ,
-
1.
for any ;
-
2.
if there exists such that , then ;
-
3.
if , then .
Proof.
Item 1 follows from the observations that every vector for corresponds bijectively to the vector and that .
For Item 3, similarly, for every such that the lattice vector satisfies , we have
Hence the claim follows. ∎
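A toy instantiation of the transformation helps see why close-vector counts multiply. The sketch assumes the block-diagonal form B' = [[a*B, 0], [0, G]] with stacked target t' = (a*t, u), as in [BP20, Lemma 3.5]; all numbers below are illustrative, and B and G both generate Z^1 here.

```python
a = 2.0                  # scaling factor
t, u = 0.4, 0.5          # CVP' target and gadget target (1-dim each)
r1, r2 = 0.7, 0.7        # "close" radii for the two blocks

xs = range(-5, 6)
close_cvp = [x for x in xs if abs(x - t) <= r1]
close_gad = [y for y in xs if abs(y - u) <= r2]

# Distances add in squares across the two blocks, so every pair of
# close vectors yields a vector of the new lattice within distance
# sqrt((a*r1)^2 + r2^2) of t' -- close-vector counts multiply.
r_new = ((a * r1) ** 2 + r2 ** 2) ** 0.5
joint = [(x, y) for x in xs for y in xs
         if ((a * x - a * t) ** 2 + (y - u) ** 2) ** 0.5 <= r_new]
print(len(close_cvp), len(close_gad), len(joint))
assert len(joint) >= len(close_cvp) * len(close_gad)
```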
We then show the reduction from - to -. A similar reduction exists in [BP20]. In particular, Lemma 3.3 uses sparsification in a similar way as [BP20, Theorem 3.3] while targeting the - problem. We note in passing that the constraint in Lemma 3.3 is not essential but simplifies some expressions and suffices for our purposes.
Lemma 3.3.
For any , , and , , where is computable in time, there is a rank-preserving randomized Karp reduction with bounded error from - to -.
Proof.
The reduction works as follows. On input an - instance , it first samples a prime . It next samples uniformly at random and sets
Then, it computes a basis of (such a is efficiently computable given ; see, e.g., [Ste16, Claim 2.15]). Finally, it outputs . Clearly, the reduction is efficient, so it remains to show correctness.
First note that by 2.1, Item 1, we know and thus . If is a YES instance, i.e., and , then by Proposition 2.5, Item 1 (using for ), with probability at most . Moreover, by Proposition 2.5, Item 2, with probability at most . Hence by union bound, is a YES instance for - with probability at least
If is a NO instance of - (i.e., ) then by Proposition 2.5, Item 3, is a NO instance of - (i.e., ) with probability at least
In each case, the reduction succeeds with probability at least , as needed. ∎
We conclude this subsection with a theorem that analyzes the general class of reductions from to obtained by instantiating Lemma 3.2 with some lattice-target gadget and then applying Lemma 3.3. In particular, the theorem requires there to be roughly times as many close lattice vectors in the gadget as short lattice vectors , and roughly times as many close lattice vectors as “too close” lattice vectors .
More specifically, the theorem gives a reduction to with explicit multiplicative rank increase for any larger than some expression involving the input instance approximation factor , the gadget “close distance” , the gadget “too close distance” , the close/short vector count gap factor , and the close/“too close” vector count gap factor ; see Equation 9. (The gadget is normalized so that its “short distance” is equal to .) Additionally, the theorem gives a reduction to for any larger than a simpler expression not depending on or with linear but non-explicit rank increase; see Equation 10. Equation 9 leads to SETH-type hardness results, while Equation 10 leads to ETH-type hardness results. All three of our main results for , Theorems 1.1, 1.3 and 1.2, work by instantiating this theorem.
Theorem 3.4.
Suppose that for there exist constants , and a family of bases and targets satisfying for , where , and
(8)
Then for any constants , , and
(9)
where and (we note that is only defined for , and that the second term in this max corresponds to ), there exists an efficient non-uniform reduction from in rank to in rank for all sufficiently large .
Moreover, for any constants and
(10)
there exist a constant and an efficient non-uniform reduction from in rank to in rank for all sufficiently large .
Furthermore, if can be computed in randomized time and each term in Equation 8 can be approximated to within a factor in randomized time, then the preceding efficient non-uniform reductions can be replaced by efficient randomized reductions.
Proof.
By Lemma 3.3 as well as the trivial reduction from - to (ordinary, search) , it suffices to give a reduction that, on input a instance of rank , outputs an - instance of rank with for all sufficiently large . To do this, we apply the transformation in Lemma 3.2 with scale factor and basis-target gadget . I.e., on input the reduction sets
and certain , whose choice we will specify later. Note that the reduction is (non-uniformly) efficient given that the gadget is provided as advice, has rank and dimension that are both polynomial in , and, as will be clear from their definitions later, the parameters are efficiently computable.
By Lemma 3.2 (using , , and for in Items 1, 2 and 3 in the lemma, respectively), this reduction from to - is correct for
By 2.2, we have
(11)
Then in order for to hold, it suffices to set parameters so that the following inequalities hold:
(12)
(13)
(14)
(15)
(16)
The three inequalities in Equations 12, 13 and 14 ensure that we can apply the condition given in Equation 8 to the -scaled gadget . In particular, the expressions on the right-hand sides of these inequalities correspond to times the gadget “short distance” , the gadget “too close distance” , and the gadget “close distance” , respectively. The two inequalities in Equations 15 and 16 correspond to the two terms in the upper bound on in Equation 11, and by Equations 8, 12, 13 and 14 suffice to ensure that .
Now let and for some constants to be determined later, and set . First, for Equation 15 to hold, by Claim 2.9, Item 1, it suffices to have
which in turn holds for all sufficiently large if
Equivalently, by the (strict) monotonicity of it suffices to have
Similarly, for Equation 16 to hold for all sufficiently large , it suffices to have . (Note that when we have by definition, in which case ensures that .) Moreover, Equation 14 holds for all sufficiently large as long as .
Putting it all together, Equations 12, 13, 14, 15 and 16 hold for all sufficiently large if the following four constraints hold:
(17)
More specifically, the first constraint in Equation 17 implies both Equations 12 and 15 (since we set ), the second constraint implies Equation 13, the third constraint implies Equation 14, and the last constraint implies Equation 16. The last three constraints in Equation 17 are equivalent to the single constraint that
(18)
and one can check that for every
(19)
the left-hand side of Equation 18 is strictly less than the right-hand side of Equation 18. I.e., for every satisfying Equation 19 there exists such that and satisfy Equation 18. Dividing both sides of Equation 19 by and combining it with the first constraint in Equation 17 (i.e., ), we get that for all satisfying Equation 9 there exist and such that holds for all sufficiently large , as needed.
For the additional result associated with Equation 10, assume without loss of generality that , as otherwise one could use in place of both and in Equation 8. Then the result follows by taking the limit in Equation 9, and noting that by Claim 2.10 we have and when . ∎
3.2 Gap-ETH-hardness of BDD from Exponential Kissing Number Lattices
Definition 3.5 (Lattice kissing number).
The kissing number of a lattice is defined as
The maximum lattice kissing number for rank is defined as
It was a longstanding open question whether there exists an infinite family of lattices with exponential kissing number, i.e., whether there exist infinitely many such that is greater than a constant. In recent breakthrough work, Vlăduţ [Vlă19] not only showed this, but also that there exists such a family with a lattice of every rank such that for some explicit constant . Accordingly, we define to be the largest value of such that for every .
Theorem 3.6 ([Vlă19, Theorem 1.5]).
It holds that .
We call the constant in the preceding theorem the “ kissing-number constant.”
In this section, we show gadgets for instantiating the general reductions in Theorem 3.4 based on exponential kissing number lattices (see Corollary 3.11 for the resulting gadgets, which are obtained by sequentially applying Lemmas 3.7, 3.8 and 3.9 to the exponential kissing number lattices).
The following lemma shows that, given a rank- lattice and a target , by moving by a small distance in a random direction to some new target and picking a distance that is times smaller than the original distance, the number of vectors within (relative) distance of is in expectation roughly a factor less than the number of vectors within the original distance of , where is (only) a function of and . This allows us to construct gadgets with smaller relative distances (where is as in Theorem 3.4) at the cost of an exponential decrease in the number of close vectors ; fortunately we can afford this decrease as long as the original count is exponential.
We remark that the lemma is similar to [AS18, Corollary 5.8], but it has a larger parameter space for and , and it shows that a larger fraction (essentially ) of close vectors is preserved.
Lemma 3.7.
For any lattice of rank , target , and constants and , there exists with , such that
where .
Proof.
Without loss of generality suppose that has dimension so that . It suffices to show that the expectation of for a uniformly random target sampled from the sphere is at least , and therefore by linearity of expectation it suffices to show that the probability that is at least for any vector with . Let , , and be the angle between and . We know and . We calculate the probability as follows.
where is the surface area of a spherical cap with height of the unit sphere , and is the surface area of the entire unit sphere . The ratio in the last quantity satisfies ; see, e.g., [BDGL16, Lemma 2.1 and Appendix A]. ∎
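The cap-area ratio cited from [BDGL16] at the end of this proof can be sanity-checked numerically. The sketch below (with arbitrary illustrative dimension and angle, not the lemma's parameters) computes the exact surface-area integral for a spherical cap and cross-checks it by Monte Carlo sampling of uniform directions.

```python
import math, random

def cap_ratio(n, theta, steps=2000):
    """Fraction of the unit sphere S^(n-1) lying within angle theta of
    a fixed pole, via the surface-area integral
    int_0^theta sin^(n-2)(phi) dphi / int_0^pi sin^(n-2)(phi) dphi."""
    def integral(b):
        return sum(math.sin(b * (i + 0.5) / steps) ** (n - 2)
                   for i in range(steps)) * b / steps
    return integral(theta) / integral(math.pi)

# Monte Carlo cross-check: uniform directions are normalized Gaussian
# vectors; count those within angle theta of the pole e_1.
random.seed(0)
n, theta, trials = 8, 1.0, 20000
hits = 0
for _ in range(trials):
    g = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in g))
    if g[0] / norm >= math.cos(theta):
        hits += 1
print(round(cap_ratio(n, theta), 3), round(hits / trials, 3))
assert abs(cap_ratio(n, theta) - hits / trials) < 0.02
```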
The following lemma is implicit in [AS18, Lemma 5.2], with small changes regarding and . It uses an averaging argument to show that if there exists a gadget with at least times as many close vectors as short vectors , then there exist a (potentially different) target and such that there are at least times as many close vectors as short vectors and in addition at least as many close vectors as “too close” vectors . Here is a parameter that controls the trade-off between and the difference of the distances for the close and the “too close” lattice vectors. In particular, taking larger allows us to decrease the difference between the “close distance” and “too close distance” in Theorem 3.4 at the cost of having a smaller factor in place of (this trade-off is analogous to the one in Lemma 3.7, which allowed us to decrease ).
Lemma 3.8.
For any , lattice , target , and constants and satisfying for some , there exist and , such that
where and .
Proof.
We use the following lemma about norm embeddings to extend our gadgets based on exponential kissing number lattices to arbitrary norms.
Lemma 3.9 ([FLM77], [RR06, Theorem 3.2]).
For any , , and , there exists and a linear map , such that for all ,
In particular, it suffices to take to be
We remark that for fixed , the increased dimensionality of the norm embeddings given by Lemma 3.9 is , which is polynomial in for any .
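The flavor of such randomized embeddings can be illustrated with a Gaussian-based l2-to-lp map in the spirit of [FLM77]; the explicit normalization and the choice of target dimension m below are our own illustration, not the lemma's stated map or dimension bound.

```python
import math, random

random.seed(2)
k, p, m = 5, 3.0, 4000  # embed l2^k into lp^m; large m gives concentration
mp = 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)  # E|N(0,1)|^p
G = [[random.gauss(0, 1) for _ in range(k)] for _ in range(m)]

def embed(x):
    # coordinates <g_i, x>, scaled so that E ||f(x)||_p^p = ||x||_2^p
    return [sum(gi * xi for gi, xi in zip(g, x)) / (m * mp) ** (1 / p)
            for g in G]

x = [1.0, -2.0, 0.5, 0.0, 3.0]
l2 = math.sqrt(sum(v * v for v in x))
lp = sum(abs(v) ** p for v in embed(x)) ** (1 / p)
print(round(l2, 3), round(lp, 3))
assert abs(lp - l2) / l2 < 0.1  # low distortion; improves as m grows
```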
The following lemma allows us to adapt gadgets in norm to the requirements of Theorem 3.4; in particular, the lemma gives analogous gadgets in any norm by applying the norm embeddings in Lemma 3.9 and scaling properly, with the caveat that and may be distorted by a factor. (Miller and Stephens-Davidowitz [MS19] referred to the number of non-zero vectors in a lattice of length at most as its “-handshake number.” For our reductions, it essentially suffices to have a family of lattices where has rank and large -handshake number (in the norm, with satisfying, say, ), which is a weaker property than having large kissing number. Indeed, all of our reductions are just as strong quantitatively (in terms of the relative distance and rank increase achieved) using lattices with large handshake number. The only issue with using lattices with large handshake number (rather than large kissing number) is qualitative: their use leads us to rely on “Gap” variants of (S)ETH. However, in Section 3.4.1 we sketch why even this difference is likely not inherent. Additionally, we note that in follow-up work Vlăduţ [Vlă21] proved an exponential lower bound on the lattice kissing number in all norms. However, the lower bound from [Vlă21], which is independent of , is roughly . This is weaker than the lower bound of roughly for stated in Theorem 3.6. By norm embeddings, a lower bound on the kissing number implies a lower bound on the -handshake number in the norm for in lattices of dimension , and so, by the discussion above on kissing number versus handshake number, using [Vlă21] would not lead to quantitatively stronger results.)
Lemma 3.10.
For any lattice of rank , target , and constants , if, for some ,
then for any , there exist a lattice of rank and dimension and a target , such that
Proof.
Without loss of generality suppose that has dimension so that , and is the closest lattice vector to so that (note that ). We apply the norm embeddings in Lemma 3.9 to the gadget , with error bound . Let be the embedded gadget. By Lemma 3.9, we know has rank and dimension , and
Then by normalizing with and , where , we get a desired gadget . ∎
By applying Lemmas 3.7 and 3.8 to exponential kissing number lattices along with targets , and adapting the resulting gadgets to the requirements of Theorem 3.4 using Lemma 3.10, we get the following.
Corollary 3.11.
For any , , and constants , and , there exist , a lattice with rank and dimension , and a target , such that
where and .
Proof.
By Theorem 3.6, we know that there exists a lattice of rank satisfying and . Then by Lemma 3.7, there exists a target such that . Therefore, by Lemma 3.8, there exist and a target such that
The result then follows by applying Lemma 3.10 to the gadget . ∎
Using the notation in Theorem 3.4, the family of gadgets given by Corollary 3.11 has , , and . For , we need . By rearranging, this is equivalent to , where the right-hand side is minimized by setting .
Therefore, by taking to be a constant sufficiently close to and to be a constant sufficiently close to , we can take to be an arbitrary constant greater than and to be an arbitrary constant less than . Plugging such into Equation 10 in Theorem 3.4, we get that it simplifies to . By applying this family of gadgets to Theorem 3.4, we then get the following.
Corollary 3.12.
For any , , and , there exists an efficient non-uniform reduction from in rank to in rank for some constant for all sufficiently large .
Combining Corollary 3.12 with the Gap-ETH-hardness result for in Theorem 2.17, we immediately get Theorem 1.1, restated below. We recall that the bound on follows from Theorem 3.6.
Theorem 1.1.
For all , there is no -time algorithm for for any constant , unless non-uniform Gap-ETH is false. Here , where is the kissing-number constant defined in Section 3.2.
3.3 Gap-ETH-hardness of BDD from the Integer Lattice
In this section we show Gap-ETH-hardness of BDD by instantiating the general reductions in Theorem 3.4 with gadgets based on the integer lattice as opposed to exponential kissing number lattices. More specifically, we will consider lattice-target gadgets .
For , we define
(20)
The numerator in Equation 20 corresponds to the resulting gadget’s “close distance,” and the denominator to the gadget’s “short distance.” More specifically, the value corresponds to the largest radius at which the number of “short” integer vectors of norm at most is less than the number of “close” integer vectors within distance of the target . We note that since
where the last equality follows from Claim 2.10. Moreover, we note that (provably) for all by [AS18, Lemma 5.4], and that as (see the discussion at the end of this subsection). The left plot in Figure 1 shows explicit values of for around , and how compares to the related quantities (with norm embeddings) and .
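The comparison underlying the quantity in Equation 20, more "close" vectors around a deep-hole-style target than "short" nonzero vectors around the origin at a suitable radius, can be seen by brute force. The dimension and radius in the sketch below are small illustrative choices, not the values used in the reduction.

```python
from itertools import product

def count(n, p, center, r, box=2):
    # integer points z with ||z - center||_p <= r (brute force)
    return sum(1 for z in product(range(-box, box + 1), repeat=n)
               if sum(abs(zi - ci) ** p for zi, ci in zip(z, center)) <= r ** p)

# For Z^n with target t = (1/2, ..., 1/2), a radius just above
# ||t||_2 = sqrt(n)/2 already sees many more close vectors around t
# than short nonzero vectors around the origin.
n, p = 6, 2.0
t = (0.5,) * n
r = 0.51 * n ** 0.5
short = count(n, p, (0.0,) * n, r) - 1  # exclude the zero vector
close = count(n, p, t, r)
print(short, close)
assert close > short
```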
The following lemma formally analyzes the gadgets based on the integer lattice.
Lemma 3.13.
For any and , there exist , and , such that for any and for ,
Proof.
By Claim 2.9, Item 1, to have the desired inequality, it suffices to have
and for this it suffices to have
Since , by the monotonicity of , we know that for any there exists such that . Moreover, as , by definition of in Equation 20, we know there exists and such that
Hence, by the monotonicity of , there exists so that . ∎
By applying the normalized gadgets to Theorem 3.4, and setting to be arbitrarily close to , we get the following. Note that this family of gadgets is efficiently computable as is efficiently computable and is set to be for some constant , and the associated point counting of the form can be efficiently approximated to within a factor by computing .
Corollary 3.14.
For any , , and , there exists an efficient randomized reduction from in rank to in rank for some constant for all sufficiently large .
Combining Corollary 3.14 with the Gap-ETH-hardness result for in Theorem 2.17, we immediately get Theorem 1.2, restated below.
Theorem 1.2.
For all , there is no -time algorithm for for any constant , unless randomized Gap-ETH is false. Here is an explicit constant, defined in Equation 20, which satisfies for , for all , and as .
Bennett and Peikert [BP20, Equation 3.5] define and show an analogous result to Theorem 1.2 for all (assuming randomized “non-gap” ETH). By Claim 2.9, Item 2, we know . Note that for . As a result, we have (see Equation 20 for the definition of ), and so Theorem 1.2 gives a result that is at least as strong quantitatively as the corresponding result in [BP20] for all (see the left plot in Figure 1). Moreover, the upper bounds on obtained in [BP20, Lemma 3.11] immediately apply to our as well. In addition, holds for any by the triangle inequality (see Claim 2.1, Item 2), implying that . Combining these bounds, we get that .
3.4 Gap-SETH-hardness of BDD from Exponential Kissing Number Lattices
In this section we show Gap-SETH-hardness of using the general reduction established in Section 3.1. We start by showing how to construct locally dense lattices with exponentially many close vectors, essentially the same close and short distances, and no too-close vectors by sparsifying exponential kissing number lattices.
Lemma 3.15.
For all sufficiently large , there exists a lattice of rank and a target such that , , and .
Proof.
By definition, there exists a lattice of rank with Euclidean kissing number . Suppose without loss of generality that has dimension and . We will apply Proposition 2.5 to lattice and target with index . In particular, supposing is an arbitrary basis of , the sparsification samples uniformly at random and sets
It is easy to see that by construction, . Moreover, note that with , we have . Then by Proposition 2.5, Item 2, with probability at most , and by Proposition 2.5, Item 3, with probability at most . By union bound, satisfies the desired properties with probability at least
which is positive for and all sufficiently large . In particular, , with the claimed properties exist for all sufficiently large . ∎
Note that the gadgets constructed in Lemma 3.15 have, using the notation in Lemma 3.10, , (by Theorem 3.6) and arbitrarily large . By applying Lemma 3.10 to the gadgets we immediately get the following.
Corollary 3.16.
For any , , and , there exist a lattice of rank and dimension and a target , such that
For , we define
(21)
which we obtain by plugging , , and sufficiently large into Equation 9 in Theorem 3.4. Combining this with Corollary 3.16 and Theorem 3.4, we get the following.
Corollary 3.17.
For any constants , , , and , there exists an efficient non-uniform reduction from in rank to in rank for all sufficiently large .
Combining Corollary 3.17 with the Gap-SETH-hardness result for in Theorem 2.18, we immediately get Theorem 1.3, restated below.
Theorem 1.3.
For all and all , there is no -time algorithm for for any constant , unless non-uniform Gap-SETH is false. Here is an explicit constant, defined in Equation 21, which satisfies as for any fixed .
We note that the extra factor in the runtime lower bound exponent in Theorem 2.18 gets absorbed by the multiplicative rank increase factor in Corollary 3.17.
We remark that for any fixed , , and for any fixed , . To show this, we consider the behavior of . By 4.5 in Section 4 we know that for some strictly increasing function with (see 4.5 for the explicit formula for ), and consequently . Then, recalling the definition of in Equation 21, we have , which implies the claimed limits.
3.4.1 An Approach for Using a Weaker Hypothesis
We conclude with an approach for making the result in Theorem 1.3 depend on a weaker assumption, namely non-uniform SETH rather than non-uniform Gap-SETH. The approach considers a reduction akin to the main reduction in Theorem 3.4, which reduces an instance of of rank to an instance of of rank for some constant assuming the existence of a family of locally dense lattices parameterized by , , , and . Theorem 3.4 assumes that all of these parameters are constants, but this requirement is for simplicity and does not seem inherent. Here we will consider what happens if we disregard this requirement and allow , , and to depend on . The approach uses three observations:
-
1.
Theorem 2.19 shows hardness of instances of rank with assuming (plain) SETH.
-
2.
Applying norm embeddings (Lemma 3.9) to lattices of rank with distortion can be done in time.
-
3.
Because we can take to be arbitrarily large in Corollary 3.16, when plugging the corresponding gadgets into Theorem 3.4 we can make arbitrarily large. In particular, we can make it so that the middle term in Equation 9 simplifies to . (In fact, this is essentially the same observation we already used to derive Corollary 3.17 from Corollary 3.16.)
Combining these observations, if satisfies —which it does for the SETH-hard instances from Theorem 2.19—then we can efficiently apply norm embeddings with distortion to obtain modified locally dense lattices of rank with the same bounds on and as in Corollary 3.16, but with satisfying and hence .
Naively plugging these modified locally dense lattices into Equation 9 and invoking Theorem 2.19 appears to yield a result with the same conclusion as in Theorem 1.3, but with the hypothesis weakened to rely only on non-uniform SETH instead of non-uniform Gap-SETH. However, we emphasize that this is not correct as is since we have violated the assumption that , , and are constants in Theorem 3.4. Nevertheless, it seems very likely that we could remove this assumption (at least in the special case of the locally dense lattices used in this section) and obtain this result.
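The norm embeddings in Observation 2 above (Lemma 3.9) are, in the style of [FLM77, RR06], typically realized by scaled random Gaussian maps. As a purely illustrative sketch (the function names, the parameter choices, and the specific choice of embedding ℓ2 into ℓ1 are assumptions of ours, not taken from Lemma 3.9), such an embedding might look as follows:

```python
import math
import random

def l2_to_l1_embedding(n, k, seed=0):
    """Sample a k x n Gaussian matrix A; the linear map
    x -> (sqrt(pi/2)/k) * A * x approximately preserves the l2 norm of x
    as an l1 norm (a standard Gaussian-projection embedding; the
    distortion shrinks as the target dimension k grows)."""
    rng = random.Random(seed)
    scale = math.sqrt(math.pi / 2) / k
    return [[scale * rng.gauss(0, 1) for _ in range(n)] for _ in range(k)]

def apply_embedding(A, x):
    # Matrix-vector product, row by row.
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Sanity check: the l1 norm of the image approximates the l2 norm of x.
n, k = 10, 5000
A = l2_to_l1_embedding(n, k)
x = [1.0] * n
l2_norm = math.sqrt(sum(c * c for c in x))
l1_image = sum(abs(c) for c in apply_embedding(A, x))
ratio = l1_image / l2_norm
```

For any fixed input, each image coordinate is a scaled Gaussian whose expected absolute value recovers the ℓ2 norm, so the ℓ1 norm of the image concentrates around the ℓ2 norm of the input as k grows.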
4 Gap-SETH-hardness of GapSVP
The goal of this section is to show Gap-SETH-hardness of , stated in Theorem 1.4, using the integer lattice as a locally dense lattice as in Section 3.3. Here the proof idea is again to reduce from , whose Gap-SETH-hardness is known (see Theorem 2.18), to . We start by introducing the intermediate problem -, which was originally defined in [AS18] with a slightly different but equivalent parameterization.
Definition 4.1.
For , , and , an instance of the decision promise problem - is a lattice basis , a target , and distances .
-
•
It is a YES instance if .
-
•
It is a NO instance if , where
The following result from [AS18] gives an essentially rank- and dimension-preserving reduction from - to for . Although we will use it as a black box, we note in passing that it works by combining Kannan’s embedding and lattice sparsification, essentially reducing our task to upper bounding the terms .
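For intuition on the first ingredient, Kannan's embedding appends the target as an extra basis column so that lattice vectors close to the target correspond to short vectors in a lattice of one higher rank. The sketch below is a generic toy version with a hypothetical embedding parameter s, not the precise construction used in [AS18]:

```python
def kannan_embedding(B, t, s):
    """Given an m x n basis matrix B (list of rows), a target t in R^m, and
    an embedding parameter s > 0, return the (m+1) x (n+1) basis
        B' = [ B  t ]
             [ 0  s ].
    A lattice vector B*x close to t yields the short vector
    (B*x - t, -s) in the new lattice (using last coefficient -1)."""
    m = len(B)
    rows = [list(B[i]) + [t[i]] for i in range(m)]
    rows.append([0.0] * len(B[0]) + [s])
    return rows

# Toy usage: B = 2*I_2, target t = (1.9, 0.1), parameter s = 0.5.
B = [[2.0, 0.0], [0.0, 2.0]]
t = [1.9, 0.1]
Bp = kannan_embedding(B, t, 0.5)
# Coefficient vector (1, 0, -1) gives the short vector (0.1, -0.1, -0.5).
v = [Bp[i][0] * 1 + Bp[i][1] * 0 + Bp[i][2] * (-1) for i in range(3)]
```

Choosing s comparable to the decoding distance balances the two parts of the resulting short vector; sparsification (the second ingredient) is omitted from this sketch.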
Theorem 4.2 ([AS18, Theorem 3.2]).
For any , and computable in time, and , there is an efficient randomized reduction from - instances of rank and dimension to instances of rank and dimension that runs in time .
So, to show hardness of for some , it suffices to show hardness of - with . To do this, we give a reduction from to - similar to the one in [AS18, Corollary 4.2], but with a different choice of parameters and analysis. We note that [AS18, Corollary 4.2] introduced a transformation similar to the one in Lemma 3.2, which we will again use.
The main idea behind the new choice of parameters is that scaling the input instance by for some constant makes it so that —which is at most if is a YES instance and at least if is a NO instance—is within a multiplicative constant factor of . This makes it so that, applying the transformation in Lemma 3.2 to (, ) to obtain (, ), we have that changes by a multiplicative constant factor depending on whether the input instance is a YES instance or a NO instance. Similarly, we will set for some constant . This choice of parameters allows us to reduce from approximate to approximate -; the corresponding result [AS18, Corollary 4.2] takes , and only works as a reduction to exact -. (We also note that [AS18, Corollary 4.2] is only stated as a reduction from instances of a more specific form than . Here we observe that all that is necessary there (respectively, here) is that the input instances be instances (respectively, that the input instances be instances with ); frequently appears in the study of the complexity of lattice problems, and so this shows that the reduction is quite natural.)
Lemma 4.3.
Let , , and be constants, and let . Then for all
and satisfying , there is a deterministic Karp reduction from on lattices of rank to - on lattices of rank , where
for some constant .
Proof.
Let , be the input instance of . We will again use the transformation in Lemma 3.2 with and . Specifically, we set
with , set , and set (where ). The output - instance consists of , , , and . It is clear that the reduction runs in polynomial time, and it remains to prove correctness of the reduction.
Let . Suppose that the input instance is a YES instance. Then there exists such that , so applying Lemma 3.2, Item 2 we get
It follows that the output - instance is a YES instance.
Now, suppose that the input instance is a NO instance. Then
where the inequality follows by the strict upper bound on the choice of and the definition of .
We will upper bound the summands in the above sum according to three cases. First, consider summands in which is even. By Lemma 3.2, Item 1, and the fact that ,
(22)
Second, consider the case where . Then using the fact that and applying Lemma 3.2, Item 3 we have that
where the last equality follows by noting that because there is no integer point in the open ball of radius . Third, consider the case where is odd and . Then, by Lemma 3.2, Item 1 and the fact that , we have
The fact that then follows by noting that the sum has a constant number of terms, that summands in which is even are upper bounded by the right-hand side in Equation 22, and that summands in which is odd are equal to . Hence the output - instance is a NO instance. ∎
It remains to set the parameters so that in Lemma 4.3. Following [AS18] (which actually defines using the quantity ; however, is equivalent to by 2.9, Item 2, and therefore Equation 23 is equivalent to the definition of in [AS18]), let be the unique solution to the equation , and for define
(23)
This quantity is essentially the multiplicative rank-increase factor that our reduction from to incurs, and hence it determines the quality of the best runtime lower bound on that we are able to show (assuming randomized Gap-SETH). We note that although we do not have a closed-form expression for itself, we can compute it to high accuracy and also have a rather tight closed-form upper bound on it; see Figure 2 and Equation 24.
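Since the quantity in Equation 23 is defined only implicitly via the unique solution to an equation, computing it "to high accuracy" reduces to one-dimensional root finding. The sketch below illustrates the bisection one would run; the stand-in function g and its root are hypothetical, since the actual defining equation is not reproduced here:

```python
def bisect_root(g, lo, hi, tol=1e-12):
    """Find the unique root of a continuous, strictly increasing function g
    on [lo, hi] by bisection, to within absolute tolerance tol."""
    assert g(lo) < 0 < g(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Stand-in example: solve tau^3 - 2 = 0 on [1, 2]; the root is 2^(1/3).
tau_star = bisect_root(lambda tau: tau ** 3 - 2, 1.0, 2.0)
```

Bisection halves the bracketing interval each iteration, so roughly 40 iterations already give 12 digits of accuracy, which suffices for tabulating the constant to the precision plotted in Figure 2.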
Lemma 4.4.
For every , , and , there exists such that there is an efficient randomized reduction from in rank to in rank for all sufficiently large .
Proof.
By Theorem 4.2, it suffices to reduce in rank with to - in rank for some so that for all sufficiently large . To do this, by Lemma 4.3, it suffices to show the existence of so that for all sufficiently large , where
and . (Here the upper bound on is given by 2.9, Item 1.) Indeed, then we can take to be any value in the range .
By taking -th roots of and , we see that for to hold for all sufficiently large , it suffices to have
Then because is strictly increasing in , it suffices to have
By recalling the definition of in Equation 23, we know that holds for , and hence there exists satisfying the inequality. More specifically, one can check that the inequality is satisfied by taking any in the (non-empty) range
Combining Theorem 2.18 and Lemma 4.4 we immediately get Theorem 1.4, which we restate below.
Theorem 1.4.
For all , , and all , there is no -time algorithm for for some constant , unless randomized Gap-SETH is false. Here is an explicit constant, defined in Equation 23, which satisfies as .
We note that because the multiplicative rank increase factor in Lemma 4.4 is strictly greater than , the factor in the runtime lower bound exponent in Theorem 2.18 gets absorbed by .
We conclude by showing a closed-form upper bound on based on the following bound on the function .
Claim 4.5.
For any and , .
Proof.
The result is implicit in [BP20, Lemma 3.11], where it is shown that for any ,
and the minimizer for the right-hand side is . Hence by setting we have
By Claim 4.5 and the definition of in Equation 23, we get the following upper bound on for all sufficiently large (all greater than approximately so that the denominator is positive):
(24)

We remark that this gives a tighter closed-form bound on than the one in [AS18, Claim 4.4]; see Figure 2. Furthermore, and , and it is easy to see , so Equation 24 shows that .
References
- [ABGS21] Divesh Aggarwal, Huck Bennett, Alexander Golovnev, and Noah Stephens-Davidowitz. Fine-grained hardness of CVP(P) — everything that we can prove (and nothing else). In SODA, 2021.
- [ABSS97] Sanjeev Arora, László Babai, Jacques Stern, and Z. Sweedyk. The hardness of approximate optima in lattices, codes, and systems of linear equations. J. Comput. Syst. Sci., 54(2):317–331, 1997.
- [AC21] Divesh Aggarwal and Eldon Chung. A note on the concrete hardness of the shortest independent vector in lattices. Inf. Process. Lett., 167:106065, 2021.
- [AD97] Miklós Ajtai and Cynthia Dwork. A public-key cryptosystem with worst-case/average-case equivalence. In STOC, pages 284–293, 1997.
- [Ajt98] Miklós Ajtai. The shortest vector problem in ℓ2 is NP-hard for randomized reductions (extended abstract). In STOC, pages 10–19, 1998.
- [AS18] Divesh Aggarwal and Noah Stephens-Davidowitz. (Gap/S)ETH hardness of SVP. In STOC, pages 228–238, 2018.
- [BDGL16] Anja Becker, Léo Ducas, Nicolas Gama, and Thijs Laarhoven. New directions in nearest neighbor searching with applications to lattice sieving. In SODA, pages 10–24, 2016.
- [BGS17] Huck Bennett, Alexander Golovnev, and Noah Stephens-Davidowitz. On the quantitative hardness of CVP. In FOCS, 2017.
- [BP20] Huck Bennett and Chris Peikert. Hardness of bounded distance decoding on lattices in ℓp norms. In CCC, 2020.
- [BSW16] Shi Bai, Damien Stehlé, and Weiqiang Wen. Improved reduction from the bounded distance decoding problem to the unique shortest vector problem in lattices. In ICALP, pages 76:1–76:12, 2016.
- [Din16] Irit Dinur. Mildly exponential reduction from gap 3SAT to polynomial-gap label-cover. Electronic Colloquium on Computational Complexity (ECCC), 23:128, 2016.
- [EOR91] N. D. Elkies, A. M. Odlyzko, and J. A. Rush. On the packing densities of superballs and other bodies. Inventiones mathematicae, 105:613–639, December 1991.
- [FLM77] T. Figiel, J. Lindenstrauss, and V. D. Milman. The dimension of almost spherical sections of convex bodies. Acta Math., 139(1-2):53–94, 1977.
- [GPV08] Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In STOC, pages 197–206, 2008.
- [HR12] Ishay Haviv and Oded Regev. Tensor-based hardness of the shortest vector problem to within almost polynomial factors. Theory of Computing, 8(1):513–531, 2012. Preliminary version in STOC 2007.
- [IP01] Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-SAT. J. Comput. Syst. Sci., 62(2):367–375, 2001.
- [Kho05] Subhash Khot. Hardness of approximating the shortest vector problem in lattices. J. ACM, 52(5):789–808, 2005. Preliminary version in FOCS 2004.
- [Kho06] Subhash Khot. Hardness of approximating the shortest vector problem in high ℓp norms. J. Comput. Syst. Sci., 72(2):206–219, 2006.
- [LLM06] Yi-Kai Liu, Vadim Lyubashevsky, and Daniele Micciancio. On bounded distance decoding for general lattices. In RANDOM, pages 450–461, 2006.
- [LM09] Vadim Lyubashevsky and Daniele Micciancio. On bounded distance decoding, unique shortest vectors, and the minimum distance problem. In CRYPTO, pages 577–594, 2009.
- [Man19] Pasin Manurangsi. Approximation and Hardness: Beyond P and NP. PhD thesis, University of California, Berkeley, 2019.
- [Mic00] Daniele Micciancio. The shortest vector in a lattice is hard to approximate to within some constant. SIAM J. Comput., 30(6):2008–2035, 2000. Preliminary version in FOCS 1998.
- [Mic01] Daniele Micciancio. The hardness of the closest vector problem with preprocessing. IEEE Trans. Information Theory, 47(3):1212–1215, 2001.
- [Mic12] Daniele Micciancio. Inapproximability of the shortest vector problem: Toward a deterministic reduction. Theory of Computing, 8(1):487–512, 2012.
- [Mic14] Daniele Micciancio. Locally dense codes. In CCC, pages 90–97, 2014.
- [MO90] J. E. Mazo and A. M. Odlyzko. Lattice points in high-dimensional spheres. Monatshefte für Mathematik, 110:47–61, March 1990.
- [MR07] Daniele Micciancio and Oded Regev. Worst-case to average-case reductions based on Gaussian measures. SIAM J. Comput., 37(1):267–302, 2007. Preliminary version in FOCS 2004.
- [MR17] Pasin Manurangsi and Prasad Raghavendra. A birthday repetition theorem and complexity of approximating dense CSPs. In ICALP, pages 78:1–78:15, 2017.
- [MS19] Stephen D. Miller and Noah Stephens-Davidowitz. Kissing numbers and transference theorems from generalized tail bounds. SIDMA, 33(3), 2019.
- [Reg09] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. J. ACM, 56(6):1–40, 2009. Preliminary version in STOC 2005.
- [RR06] Oded Regev and Ricky Rosen. Lattice problems and norm embeddings. In STOC, pages 447–456, 2006.
- [Ste16] Noah Stephens-Davidowitz. Discrete Gaussian sampling reduces to CVP and SVP. In SODA, pages 1748–1764, 2016.
- [vEB81] Peter van Emde Boas. Another NP-complete problem and the complexity of computing short vectors in a lattice. Technical Report 81-04, University of Amsterdam, 1981.
- [Vlă19] Serge Vlăduţ. Lattices with exponentially large kissing numbers. Mosc. J. Comb. Number Theory, 8(2):163–177, 2019.
- [Vlă21] Serge Vlăduţ. On the lattice Hadwiger number of superballs and some other bodies. Discrete & Computational Geometry, January 2021.
Appendix A Proofs of Claims in the Preliminaries
In this appendix, we restate and give proofs for two technical claims from the preliminaries.
See 2.9
Proof.
For Item 1, we note that because there exists a unique solution to by 2.6. The result then follows from the definition of and from Theorem 2.8.
For Item 2, a calculation shows that and (see e.g. the proof of 2.6 for relevant calculations). Hence , where is the unique solution to , is the global minimizer of , and therefore , as desired.
For Item 3, we start by proving that is right-continuous at , i.e., that for and for . By the definition of , we have that for every there exists such that
(25)
By examining the first two terms in Equation 25, it is clear that for we have when and when .
We turn to showing that for every , when and when . We have that for any and ,
(26)
The first inequality follows from Item 2 and Equation 25. The second inequality follows by noting that
for , and
for , where we used the fact that when and are both non-negative or both non-positive. The equality follows by using the formula for summing geometric series.
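For completeness, the last equality uses the standard geometric-series identity: for a ratio $0 \le r < 1$,
\[
\sum_{k=0}^{\infty} r^k = \frac{1}{1-r}, \qquad\text{and hence}\qquad \sum_{k=1}^{\infty} r^k = \frac{r}{1-r}.
\]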
The desired bounds for then follow by taking the limits on both sides of Equation 26, where the limits applied to the terms on the right-hand side act as follows:
We have shown that is right-continuous at for every . To finish proving Item 3, it then suffices to show that is strictly increasing and differentiable for . Let be the unique solution to . We calculate the derivative as follows:
where the first equality regards as a function with two variables and uses the chain rule to get its total derivative at , and the zero term in the second equality follows from Item 2: is the minimizer of so at .
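The derivative computation above is an instance of the envelope theorem: writing the differentiated quantity generically as $f(p, \tau)$ with minimizer $\tau^*(p)$ (the symbol $f$ is a stand-in of ours, not notation from the text), the chain rule gives
\[
\frac{d}{dp}\, f\bigl(p, \tau^*(p)\bigr)
= \frac{\partial f}{\partial p}\bigl(p, \tau^*\bigr)
+ \frac{\partial f}{\partial \tau}\bigl(p, \tau^*\bigr) \cdot \frac{d\tau^*}{dp}
= \frac{\partial f}{\partial p}\bigl(p, \tau^*\bigr),
\]
where the second term vanishes because $\partial f / \partial \tau = 0$ at the minimizer $\tau^*$.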
See 2.10