Pessimistic Cardinality Estimation
(This work was partially supported by NSF-BSF 2109922, NSF IIS 2314527, NSF SHF 2312195, SNSF 200021-231956, and RelationalAI.)
Abstract
Cardinality estimation is the problem of estimating the size of the output of a query, without computing it, using only statistics on the input relations. Existing estimators try to return an unbiased estimate of the cardinality, which is notoriously difficult. A new class of estimators has been proposed recently, called pessimistic estimators, which compute a guaranteed upper bound on the query output. Two recent advances have made pessimistic estimators practical. The first is the observation that degree sequences of the input relations can be used to compute query upper bounds. The second is a long line of theoretical results that have developed the use of information-theoretic inequalities for query upper bounds. This paper is a short overview of pessimistic cardinality estimators, contrasting them with traditional estimators.
1 Introduction
Given a query, the Cardinality Estimation problem, or CE for short, is to estimate the size of the query output, using precomputed statistics on the input database. This estimate is the primary metric guiding cost-based query optimization. The optimizer uses it to make decisions about every aspect of query execution. These range from broad logical optimizations like the join order and the number of servers to distribute the data over, to detailed physical optimizations like the use of bitmap filters and memory allocation for hash tables.
Unfortunately, CE is notoriously difficult. Density-based estimators, pioneered by System R [36], are widely used today, but they underestimate significantly due to their strong reliance on assumptions like data uniformity and independence. They tend to have large errors for queries with many joins and many predicates [16, 23]. A major limitation of density-based CE is that it does not come with any theoretical guarantees: it may under-, or over-estimate, by a little or by a lot, without any warning. Alternative approaches have been proposed, but they come with their own limitations: sampling-based estimators require expensive data access at estimation time, while ML-based estimators suffer from large memory footprints and long training time [38, 28, 16].
An alternative to estimation is to compute an upper bound on the size of the query output, based on the precomputed statistics on the data. This is called pessimistic cardinality estimation, or PCE for short [9]. The PCE returns a number that is larger than the size of the query output for any database instance that satisfies the given statistics. The inclusion of the term “estimation” in PCE is a little misleading, since it does not estimate but gives an upper bound, yet the term has stuck in the literature, and we will use it too. Of course, we want the upper bound to be as small as possible, but not smaller. When it achieves accuracy that is similar to, or even better than traditional estimators, PCE offers several advantages. Its one-sided theoretical guarantee can be of use in many applications, for example it can guarantee that a query does not run out of memory, or it can put an upper bound on the number of servers required to distribute the output data. It also has the advantage that the upper bounds combine naturally: if we have multiple upper bounds due to different techniques, we can apply them all and take the minimum.
This paper gives a high level overview of pessimistic cardinality estimation, and contrasts it with traditional methods. We start by discussing single-block SQL queries:
SELECT *
FROM R1, R2, ...                                   (1)
WHERE [joins and filters]
Later we will restrict our discussion to conjunctive queries. Some of the PCE techniques described here also apply to group-by queries.
2 Brief Review of CE
Modern database engines use a density-based approach for cardinality estimation. They compute periodically some simple statistics (or summaries) of the base relations, then use simplifying assumptions to estimate the cardinality. Specifically, the engine stores in its catalog, for each relation $R$, its cardinality $|R|$ and the number of distinct values $D(R, X)$ (denoted $V(R, X)$ in a popular textbook [19]), where $X$ is a single attribute or a set of attributes. This quantity is computed periodically, and approximately, by sampling from $R$ (e.g. Postgres) or by using HyperLogLog (e.g. DuckDB). The ratio $|R|/D(R, X)$ represents the average degree, or the density, of the attribute(s) $X$.
Consider the uniform probability space whose outcomes are the tuples of the Cartesian product of all relations in (1). This defines a probability distribution over all attributes returned by the query. A density-based CE estimates the probability that a random tuple from the Cartesian product satisfies the condition in the WHERE clause; the cardinality estimate is the product of this probability and the size of the Cartesian product. For example, consider a query over a single relation with one equality predicate, SELECT * FROM R WHERE R.A = c.
By the uniformity assumption and the containment-of-values assumption, the probability that a random tuple satisfies the predicate is $1/D(R, A)$. The cardinality estimate is $|R|/D(R, A)$.
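For concreteness, the following is a minimal sketch of this computation in Python; the function name and inputs are illustrative, not any particular system's API.

```python
# A minimal sketch of a density-based estimate for a single equality
# predicate, given the catalog statistics |R| and D(R, A).
def estimate_equality(card_R: int, distinct_A: int) -> float:
    # Uniformity: each of the D(R, A) distinct values is equally likely,
    # so a random tuple satisfies A = c with probability 1 / D(R, A).
    selectivity = 1.0 / distinct_A
    return card_R * selectivity

# Example: |R| = 1,000,000 and D(R, A) = 5,000 give an estimate of 200 tuples.
print(estimate_equality(1_000_000, 5_000))
```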
1-Dimensional Histograms relax the uniformity assumption by storing separate statistics for each bucket of a histogram. The default number of buckets is small (200 for SQL Server [1], 100 for Postgres), and the maximum is strictly limited [38].
Joins are estimated using similar assumptions. For example, consider the query $Q$ given by:

$Q(x, y, z) = R(x, y) \wedge S(y, z)$                (2)

One estimate could be $|R| \cdot \frac{|S|}{D(S, y)}$, because each tuple in $R$ matches an estimated $\frac{|S|}{D(S, y)}$ tuples in $S$. Or, symmetrically, $|S| \cdot \frac{|R|}{D(R, y)}$. Density-based CE returns their minimum (justified by the containment-of-values assumption), usually written as:

$\text{Est}(Q) = \frac{|R| \cdot |S|}{\max(D(R, y), D(S, y))}$                (3)
Finally, the estimate for a conjunction of predicates is computed by assuming independence between the predicates. For example, the estimate of the following query

SELECT * FROM Store WHERE city = 'Seattle' AND zipcode = z

where z is a zip code located in Seattle, is $\frac{|\text{Store}|}{D(\text{Store}, \text{city}) \cdot D(\text{Store}, \text{zipcode})}$, which is an underestimate, since that entire zip code is in Seattle. 2-Dimensional Histograms can capture correlations between attributes, but few systems support them. (There are several reasons why 2-d histograms are rarely used. (1) There are too many candidates: $\binom{k}{2}$ possible histograms for a relation with $k$ attributes, and the number of buckets along each dimension is limited. (2) It is unclear how to combine multiple 2-d histograms [33], e.g. in order to estimate a predicate on 3 attributes from 2-d histograms on pairs of those attributes.)
A landmark paper [30] evaluated the cardinality estimators deployed in modern database systems and their impact on the query optimizer. It found that their CE almost always underestimates (because of the independence assumption), with errors that grow to several orders of magnitude for queries with many joins. Later studies have confirmed these findings across a wide variety of workloads and database systems [29, 35, 38, 23, 28].
Given the importance of the problem and the limitations of density-based methods, many recent proposals have been published exploring alternatives to traditional density-based CE. Two alternatives have attracted particular interest: sampling-based CE and learned CE.
Sampling-based CE methods compute an unbiased estimate without requiring any assumptions. Offline sampling pre-computes a uniform sample of each relation, then estimates the size of the query output over the base relations from the size of the query output over the sample, using the Horvitz-Thompson formula. (The size estimate is the product of the size of the query output over the sample and the ratio $|R|/|\text{sample}(R)|$ for each relation $R$ in the query. More robust estimates, such as bottom-k [12], are not commonly used for CE [11].) This method can be very accurate for queries over a single relation and easily supports arbitrary user-defined predicates beyond equality and range predicates. However, it becomes ineffective for highly selective predicates, because of sampling collapse: when no sampled tuple matches the query, the system must return 0 or 1. In particular, this is a problem for joins, since their selectivity is relative to the Cartesian product of the input relations and is therefore almost always extremely low. (The estimate is still unbiased, over the random choices of the samples, but the standard deviation is high.) Online sampling addresses this issue by sampling only tuples that join with already sampled tuples. Based on this principle, Wander Join [31, 32] achieves remarkable accuracy [35]; however, it requires access to an index on every join column, and has high latency.
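The following is a minimal sketch of offline sampling for a two-way join; the relation layout (lists of pairs) and names are illustrative.

```python
# A minimal sketch of offline sampling for the join of R(x,y) and S(y,z)
# on attribute y, with the Horvitz-Thompson scale-up.
import random

def uniform_sample(rel, k):
    return random.sample(rel, min(k, len(rel)))

def sampled_join_estimate(R, S, k):
    sR, sS = uniform_sample(R, k), uniform_sample(S, k)
    # Size of the join computed over the two samples.
    matches = sum(1 for (_, y1) in sR for (y2, _) in sS if y1 == y2)
    # Scale up by |R|/|sample(R)| and |S|/|sample(S)|.
    return matches * (len(R) / len(sR)) * (len(S) / len(sS))

# When the join is highly selective, `matches` is often 0 ("sampling collapse"),
# and the estimate degenerates even though it remains unbiased.
```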
Learned CE aims to remove the assumptions of traditional CE by training an ML model that captures the complex correlations empirically [38, 28]. Data-driven estimators compute a generative ML model of the joint probability distribution over all attributes returned by the query; the model is trained on the database itself [39, 40] and needs to represent all attributes of all relations. Intuitively, these models aim for a lossy compression of the full outer join of the database relations, then estimate the selectivity of the predicates in the query relative to it. This is an extremely ambitious approach, and it tends to require large models that either struggle with, or completely disallow, queries that do not follow the schema's natural join structure, like self-joins. Query-driven estimators compute a discriminative model that maps a query to its estimated cardinality. The model is trained using a workload of queries and their true cardinalities. In general, ML-based estimates can be quite accurate on the training data, but they suffer from distribution drift, can be memory intensive (1MB to 1GB models are reported in the literature), support only limited types of queries and predicates, and require full retraining when the data changes, e.g. when a new relation is added [23].
3 A Wish List
Stepping back from existing estimators, we ask a more general question: What properties do we wish a cardinality estimator to have? We propose here six such properties, which we argue any good CE should have:
- Accuracy/Speed/Memory: It should have good accuracy, small estimation time, and a small memory footprint.
- Locality: It should use statistics that are computed separately on each input relation.
- Composition: It should be able to compute an estimate for a query from the estimates for its subqueries; this is useful in bottom-up query optimizers.
- Combination: It should be able to combine multiple sources of statistics on the database. Given two estimates $\text{Est}_1$ and $\text{Est}_2$ computed using different methods, or different statistics, one should be able to combine them to obtain a better estimate $\text{Est}$.
- Incremental Updates: It should be possible to update the statistics incrementally when the input data is updated.
- Guarantees: It should offer some theoretical guarantees. This will allow the application to reason about decisions based on CE.
All CEs aim for good speed/accuracy/memory, with various degrees of success. Density-based and sampling-based CEs are local, while learned CEs are definitely not local: they are monolithic, in that they require access to all relations at training time. Density-based CE is compositional (under the preservation-of-values assumption), but it cannot do combination. For example, when two estimates for the same query are computed from two different sets of statistics, there is no principled way to combine them into a better estimate [33]. For another example, given the cardinalities of both joins $R \bowtie S$ and $S \bowtie T$, there is no canonical way to combine them to estimate the cardinality of $R \bowtie S \bowtie T$ [10]. ML-based estimators can be neither composed nor combined, and need to be re-trained from scratch after an update. Finally, of all methods discussed here, only sampling-based methods offer theoretical guarantees: the others offer no guarantees.
Next, we will discuss the alternative, pessimistic approach to cardinality estimation. We will return to our wish-list in Sec. 11.
4 Pessimistic CE
A Pessimistic Cardinality Estimator, PCE, computes a guaranteed upper bound on the cardinality, instead of an estimate. For a very simple example, an upper bound on the join (2) is $|R| \cdot |S|$. If we know the largest number of occurrences of any value of $y$ in $S$, also called the maximum degree of $y$ in $S$ and denoted here $d_S$, then a better bound is $|R| \cdot d_S$; for example, if $y$ is a key in $S$, then $d_S = 1$ and the bound becomes $|R|$. Symmetrically, if we know the maximum degree of $y$ in $R$, call it $d_R$, then $|S| \cdot d_R$ is also an upper bound. We can combine these bounds by taking their minimum:

$|Q| \le \min(|R| \cdot d_S, \; |S| \cdot d_R)$                (4)

This should be compared with the traditional estimator (3), which replaces the maximum degrees $d_R$ and $d_S$ with the average degrees.
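The following is a minimal sketch of the bound (4) in Python; the relation layout (lists of pairs) is an illustrative assumption.

```python
# A minimal sketch of the pessimistic bound (4) for R(x,y) ⋈ S(y,z),
# using only cardinalities and maximum degrees of the join column.
from collections import Counter

def max_degree(rel, pos):
    # Largest number of tuples that share one value of the join column.
    return max(Counter(t[pos] for t in rel).values(), default=0)

def pessimistic_join_bound(R, S):
    d_R = max_degree(R, pos=1)   # max degree of y in R
    d_S = max_degree(S, pos=0)   # max degree of y in S
    return min(len(R) * d_S, len(S) * d_R)

# Any database with these statistics satisfies |R ⋈ S| <= pessimistic_join_bound(R, S).
```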
Two advances make pessimistic cardinality estimators practical today. The first is the observation that degree sequences can be used to compute an upper bound on the query output size [15]. The second is a long line of theoretical results on using information inequalities to compute upper bounds on the query’s output [18, 21, 22, 20, 4, 5, 37, 2]. We review both topics in the rest of the paper.
5 Degree Sequences
The existing PCE’s use statistics derived from degree sequences of the input relations.
Let $R$ be a relation, and $X, Y$ two sets of attributes of $R$. We assume throughout the paper that relations are sets. The degree sequence

$\deg_R(Y|X) = (d_1, d_2, \ldots, d_n)$                (5)

is defined as follows. If $v_1, \ldots, v_n$ are the distinct values of $X$ in $R$, then $d_i$ is the degree of the value $v_i$, i.e. the number of tuples in $\Pi_{XY}(R)$ with $X = v_i$. We assume that the values are sorted in decreasing order of their degrees, i.e. $d_1 \ge d_2 \ge \cdots \ge d_n$. We say that the value $v_i$ has rank $i$. Notice that $n = D(R, X)$.
Figure 1 illustrates some simple examples of degree sequences, while Figure 2 shows degree sequences of the IMDB dataset from the JOB benchmark [30]. If the functional dependency $X \rightarrow Y$ holds, then $d_1 = \cdots = d_n = 1$. When $X$ consists of a single attribute we call the degree sequence simple, and when $X \cup Y$ are all the attributes of $R$, then we call $\deg_R(Y|X)$ full and also denote it by $\deg_R(*|X)$. The degree sequence in Fig. 1 is both simple and full, and we can write it as $\deg_R(*|X)$.
[Figure 1: a simple example of a degree sequence. Figure 2: degree sequences of the IMDB dataset from the JOB benchmark.]
The $\ell_p$-norm of the degree sequence (5) is $\|\deg_R(Y|X)\|_p = \left(\sum_{i=1}^n d_i^p\right)^{1/p}$.
Degree sequences and their $\ell_p$-norms generalize common statistics used by density-based estimators: the relation cardinality is $|R| = \|\deg_R(*|X)\|_1$; the number of distinct values $D(R, X)$ is the length $n$ of the sequence (5) (and also equal to $\|\deg_R(Y|X)\|_0$); and the maximum degree is $d_1 = \|\deg_R(Y|X)\|_\infty$. But degree sequences contain information that can significantly improve the accuracy of PCE. For example, the inequality in (4) assumes pessimistically that all degrees in $S$ are equal to the maximum degree $d_S$. By using the degree sequence and more sophisticated inequalities we can account for the true distribution of the data, as we will see later.
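The following is a minimal sketch of computing a full degree sequence and its $\ell_p$-norms; the in-memory relation layout is an illustrative assumption.

```python
# A minimal sketch of computing deg_R(*|X) and its Lp-norms for a relation
# stored as a list of tuples.
from collections import Counter

def degree_sequence(rel, x_positions):
    # Degree of an X-value = number of tuples with that X-value;
    # the relation is a set, and the result is sorted in decreasing order.
    counts = Counter(tuple(t[i] for i in x_positions) for t in rel)
    return sorted(counts.values(), reverse=True)

def lp_norm(deg, p):
    if p == float("inf"):
        return max(deg)                       # maximum degree
    return sum(d ** p for d in deg) ** (1 / p)

R = [(1, "a"), (1, "b"), (1, "c"), (2, "a"), (3, "a")]
deg = degree_sequence(R, x_positions=[0])     # -> [3, 1, 1]
# L1 norm = |R| = 5, sequence length (L0) = 3 distinct values, Linf = 3.
print(deg, lp_norm(deg, 1), lp_norm(deg, 2), lp_norm(deg, float("inf")))
```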
In the rest of the paper we describe several methods that compute an upper bound on the size of the query output from the $\ell_p$-norms of degree sequences. We restrict our discussion to conjunctive queries instead of the SQL query (1) and use the notation:

$Q(x_1, \ldots, x_k) = R_1(\boldsymbol{x}_1) \wedge R_2(\boldsymbol{x}_2) \wedge \cdots \wedge R_m(\boldsymbol{x}_m)$                (6)

where $\boldsymbol{x}_j \subseteq \{x_1, \ldots, x_k\}$ for $j = 1, \ldots, m$. The query variables are $x_1, \ldots, x_k$. We omit filter predicates for now, and discuss them in Sec. 10.
6 The AGM Bound
Historically, the first cardinality upper bound was introduced by Atserias, Grohe, and Marx [8], and builds on earlier work by Friedgut and Kahn [18] and Grohe and Marx [21]. The bound is known today as the AGM bound. It uses only the cardinalities of the input relations, $|R_1|, \ldots, |R_m|$, and is defined as follows. A fractional edge cover of the query in (6) is a tuple of non-negative weights, $(w_1, \ldots, w_m)$, such that every variable $x_i$ is "covered", meaning $\sum_{j : x_i \in \boldsymbol{x}_j} w_j \ge 1$, for $i = 1, \ldots, k$. Then, the following inequality holds:

$|Q| \le \prod_{j=1}^m |R_j|^{w_j}$                (7)

The AGM bound of $Q$ is defined as the minimum of (7), taken over all fractional edge covers $(w_1, \ldots, w_m)$.
The classical illustration of the AGM bound is the 3-cycle query:

$Q(x, y, z) = R(x, y) \wedge S(y, z) \wedge T(z, x)$                (8)

One fractional edge cover is $(w_R, w_S, w_T) = (1/2, 1/2, 1/2)$, which proves $|Q| \le \sqrt{|R| \cdot |S| \cdot |T|}$. Other fractional edge covers are $(1, 1, 0)$, $(1, 0, 1)$, and $(0, 1, 1)$, and each leads to a similar upper bound on $|Q|$. The AGM bound is their minimum (these four fractional edge covers are the vertices of the edge cover polytope; in (7) it suffices to restrict the covers to these vertices, because every other fractional edge cover dominates a convex combination of the vertices):

$\text{AGM}(Q) = \min\left(\sqrt{|R| \cdot |S| \cdot |T|},\; |R| \cdot |S|,\; |R| \cdot |T|,\; |S| \cdot |T|\right)$

The AGM bound enjoys two elegant properties: it is computable in PTIME in the size of the query (by solving a linear program), and the bound is guaranteed to be tight. The latter means that, for any set of cardinalities there exists a worst-case database instance, with the same cardinalities, where the size of the query output is equal to the AGM bound, up to a rounding error given by a multiplicative constant that depends only on the number of query variables.
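As an illustration, the following sketch computes the AGM bound of the 3-cycle query with an off-the-shelf LP solver; the use of scipy here is an assumption, any LP solver works.

```python
# A minimal sketch of the AGM bound for Q(x,y,z) = R(x,y), S(y,z), T(z,x).
import math
from scipy.optimize import linprog

def agm_triangle(card_R, card_S, card_T):
    # LP variables: fractional edge-cover weights (w_R, w_S, w_T).
    # Objective: minimize w_R*log|R| + w_S*log|S| + w_T*log|T|.
    c = [math.log2(card_R), math.log2(card_S), math.log2(card_T)]
    # Cover constraints, one per variable, written as -A w <= -1:
    #   x: w_R + w_T >= 1,   y: w_R + w_S >= 1,   z: w_S + w_T >= 1
    A_ub = [[-1,  0, -1],
            [-1, -1,  0],
            [ 0, -1, -1]]
    b_ub = [-1, -1, -1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return 2 ** res.fun            # the bound itself, not its logarithm

# Example: |R| = |S| = |T| = 1000 gives 1000^(3/2) ≈ 31623.
print(agm_triangle(1000, 1000, 1000))
```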
Despite these attractive theoretical properties, the AGM bound has not been adopted in practice, because it uses very limited statistics on the input data, namely just the relation cardinalities. In particular, for acyclic queries the AGM bound is achieved by an integral (not fractional) edge cover, and expression (7) is a product of the cardinalities of some of the relations. For example, for the 2-way join query:

$Q(x, y, z) = R(x, y) \wedge S(y, z)$                (9)

the AGM bound is $|R| \cdot |S|$. In contrast, a density-based estimator uses both cardinalities and average degrees, as seen for example in (3), and for this reason it returns a better estimate for $Q$ than the AGM bound, even if it cannot provide any guarantees.
7 The Chain Bound
The chain bound [4], CB for short, uses as statistics both cardinalities $|R_j|$ and maximum degrees $\|\deg_{R_j}(Y|X)\|_\infty$. It strictly generalizes the AGM bound, but a popular simplification adopted by several implementations is weaker than the AGM bound. To describe CB, it is convenient to view each cardinality as a maximum degree, namely $|R_j| = \|\deg_{R_j}(\boldsymbol{x}_j|\emptyset)\|_\infty$. Thus, all statistics are max degrees $\|\deg_{R_j}(Y|X)\|_\infty$, for various $X \subseteq Y \subseteq \boldsymbol{x}_j$.
Fix an ordering $\sigma$ of the query variables. We say that the pair $(X, Y)$ covers the variable $x$ w.r.t. $\sigma$ if $x \in Y$ and all variables in $X$ strictly precede $x$ in the order $\sigma$. For a simple example, if $X = \emptyset$, then $(X, Y)$ covers all variables in $Y$; for another example, if the order is $\sigma = (x, y, z)$, then the pair $(\{y\}, \{y, z\})$ covers only $z$. If a set of non-negative weights $w_1, \ldots, w_s$, one for each available statistic $\|\deg_{R_j}(Y_j|X_j)\|_\infty$, is a fractional cover w.r.t. $\sigma$, meaning that it covers every variable (i.e., $\sum_{j : (X_j, Y_j) \text{ covers } x} w_j \ge 1$ for every variable $x$), then the following holds (we give the proof in the Appendix):

$|Q| \le \prod_{j=1}^s \|\deg_{R_j}(Y_j|X_j)\|_\infty^{w_j}$                (10)

CB is the minimum of the quantity above over all fractional covers and all variable orderings $\sigma$, and is always an upper bound on $|Q|$. If $\sigma$ is fixed, then CB can be computed in PTIME by using an LP solver, but minimizing over all $\sigma$ requires exponential time (computing CB is NP-hard [25]). When all statistics are cardinalities, then CB is independent of the choice of $\sigma$ and coincides with the AGM bound. When the statistics also include max-degrees, then CB can only improve on (be lower than or equal to) the AGM bound.
Several implementations have adopted a simplified version of CB which does not require the use of an LP solver, called BoundSketch in [9, 24] or MOLP in [10]; we will use the term BoundSketch. It corresponds to restricting the fractional cover in (10) to be an integral cover. (BoundSketch also hash-partitions the data in order to improve the estimate [9]; we use BoundSketch to refer to integral CB, without data partitioning.) We describe BoundSketch, following the presentation in [10]. Consider the following nondeterministic algorithm:

- Set $V := \emptyset$, $B := 1$.
- Repeatedly choose, non-deterministically, one statistic $\|\deg_R(Y|X)\|_\infty$ such that $X \subseteq V$. Set $V := V \cup Y$ and $B := B \cdot \|\deg_R(Y|X)\|_\infty$.
- Stop when $V$ contains all query variables, and return $B$.

BoundSketch is the minimum value returned over all executions of the non-deterministic algorithm; a code sketch is shown below. This is equivalent to computing the shortest path in the cardinality estimation graph of [10]: its nodes are all subsets $V$ of the query variables, and for every statistic $\|\deg_R(Y|X)\|_\infty$ there is an edge with weight $\|\deg_R(Y|X)\|_\infty$ from $V$ to $V \cup Y$, for every $V \supseteq X$. Then BoundSketch is the shortest path from $\emptyset$ to the set of all query variables, where weights along a path are multiplied. BoundSketch still requires exponential time. (When the statistics are restricted to cardinalities, with no max-degrees, then BoundSketch or, equivalently, MOLP [10], is larger (worse) than the AGM bound, as it considers only integral covers. The original definition of MOLP [27] (called MO in [27]) claims the opposite, a claim repeated in [10]. However, that claim only holds under the assumption that one first regularizes the data, then computes a separate bound for each degree configuration, using both cardinalities and max-degree statistics.)
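The following is a minimal sketch of BoundSketch, implemented as an exhaustive, memoized search over the nondeterministic choices above; the representation of statistics as (X, Y, value) triples is an illustrative assumption.

```python
# A minimal sketch of BoundSketch (integral chain bound, no partitioning).
# A statistic (X, Y, v) means ||deg_R(Y|X)||_inf <= v; a cardinality |R|
# is encoded as (frozenset(), vars(R), |R|).
from functools import lru_cache

def bound_sketch(query_vars, stats):
    query_vars = frozenset(query_vars)
    stats = [(frozenset(X), frozenset(Y), v) for X, Y, v in stats]

    @lru_cache(maxsize=None)
    def best(covered):
        if query_vars <= covered:
            return 1.0
        b = float("inf")
        for X, Y, v in stats:
            if X <= covered and not Y <= covered:   # applicable and useful
                b = min(b, v * best(covered | Y))
        return b

    return best(frozenset())

# Example: the 3-way join (11) R(x,y), S(y,z), T(z,u).
stats = [
    (set(), {"x", "y"}, 1000),        # |R|
    (set(), {"y", "z"}, 2000),        # |S|
    (set(), {"z", "u"}, 500),         # |T|
    ({"y"}, {"y", "z"}, 10),          # ||deg_S(z|y)||_inf
    ({"z"}, {"z", "u"}, 5),           # ||deg_T(u|z)||_inf
    ({"y"}, {"x", "y"}, 20),          # ||deg_R(x|y)||_inf
    ({"z"}, {"y", "z"}, 8),           # ||deg_S(y|z)||_inf
]
print(bound_sketch({"x", "y", "z", "u"}, stats))   # -> 50000.0
```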
BoundSketch is similar to a density-based estimator, where the average degrees are replaced by maximum degrees: this was noted in [10]. We illustrate this on an example with a 3-way join query:

$Q(x, y, z, u) = R(x, y) \wedge S(y, z) \wedge T(z, u)$                (11)

BoundSketch is the minimum of the following quantities (we omit non-optimal expressions, like $|R| \cdot |S| \cdot \|\deg_T(u|z)\|_\infty$):

$|R| \cdot \|\deg_S(z|y)\|_\infty \cdot \|\deg_T(u|z)\|_\infty$
$\|\deg_R(x|y)\|_\infty \cdot |S| \cdot \|\deg_T(u|z)\|_\infty$                (12)
$\|\deg_R(x|y)\|_\infty \cdot \|\deg_S(y|z)\|_\infty \cdot |T|$

Let us compare this with the density-based estimator described in Sec. 2, which, for the query (11), is:

$\text{Est}(Q) = \frac{|R| \cdot |S| \cdot |T|}{\max(D(R, y), D(S, y)) \cdot \max(D(S, z), D(T, z))}$

For illustration, assume that $D(R, y) \le D(S, y)$ and $D(S, z) \le D(T, z)$. Then the estimate can be written as:

$\text{Est}(Q) = |R| \cdot \frac{|S|}{D(S, y)} \cdot \frac{|T|}{D(T, z)}$

This is the same expression as the first line of (12), where the maximum degrees are replaced with the average degrees.
The main advantage of CB and its simplified variant BoundSketch is their simplicity and resemblance to the density-based estimator. A disadvantage is that their computation requires exponential time, because we need to iterate over all variable orderings $\sigma$. Another limitation is that, unlike the AGM bound, CB is not tight: the polymatroid bound described below can be strictly lower than CB. However, in the special case when the set of statistics is acyclic, CB is both tight and computable in PTIME. In this case, there exists a variable ordering $\sigma$ such that, for every statistic $\|\deg_R(Y|X)\|_\infty$, all variables in $X$ come before those in $Y \setminus X$, which implies that the pair $(X, Y)$ covers every variable in $Y \setminus X$. Such an ordering can be computed in linear time using topological sort. We then use an LP solver to compute the optimal fractional cover for $\sigma$.
8 The Polymatroid Bound
The Polymatroid Bound [5], PolyB, can use any $\ell_p$-norms of degree sequences to compute an upper bound on the size of the query output. The input statistics for PolyB are norms $\|\deg_R(Y|X)\|_p$, for a relation $R$, sets of attributes $X, Y$ of $R$, and numbers $p \ge 1$ (possibly fractional). The system uses all available statistics to compute an upper bound on the query's output. PolyB strictly generalizes both CB and the AGM bound. Its theoretical foundation lies in information inequalities, which we will review shortly, after presenting some examples of PolyB.
Examples To warm up, consider again the two-way join $Q(x, y, z) = R(x, y) \wedge S(y, z)$. The following upper bounds on the size of the query output hold:

$|Q| \le |R| \cdot |S|$                (13)
$|Q| \le |R| \cdot \|\deg_S(z|y)\|_\infty$                (14)
$|Q| \le \|\deg_R(x|y)\|_\infty \cdot |S|$                (15)
$|Q| \le \|\deg_R(x|y)\|_2 \cdot \|\deg_S(z|y)\|_2$                (16)

PolyB is the minimum of these four quantities. (If all statistics consist of $\ell_p$-norms with $p$ an integer, or $p = \infty$, then the best bound on $|Q|$ is the minimum of inequalities (14)-(16).) The first three inequalities are trivial (and were discussed in Sec. 4), while (16) follows from Cauchy-Schwarz.
Consider now the 3-way join (11). PolyB includes all CB inequalities (see Eq. (12)) and new inequalities using $\ell_p$-norms; for example, an inequality of the following form holds for all $p, q \ge 1$ with $1/p + 1/q = 1$:

$|Q| \le \|\deg_R(x|y)\|_p \cdot \|\deg_S(z|y)\|_q \cdot \|\deg_T(u|z)\|_\infty$                (17)

For example, if the system has computed the $\ell_2$, $\ell_3$, and $\ell_{3/2}$ norms of all degree sequences, then it can compute the expression above for both $(p, q) = (2, 2)$ and $(p, q) = (3, 3/2)$, and take their minimum.
Considering the 3-cycle query in (8), the following inequalities hold:

$|Q| \le \left(|R| \cdot |S| \cdot |T|\right)^{1/2}$                (18)
$|Q| \le |R|^{1/2} \cdot \|\deg_S(z|y)\|_2 \cdot \|\deg_T(x|z)\|_2$                (19)
$|Q| \le \left(\|\deg_R(y|x)\|_2 \cdot \|\deg_S(z|y)\|_2 \cdot \|\deg_T(x|z)\|_2\right)^{2/3}$                (20)

The first is the AGM bound; the others are new, and quite surprising. This list is not exhaustive: one can derive many more inequalities, using various $\ell_p$-norms. PolyB is the minimum of all of them (restricted to the available $\ell_p$-norms), and can be computed using a linear program, as explained later.
Inequalities (17)-(20) do not appear to have simple, elementary proofs. Instead, they are proven using Shannon inequalities.
Shannon Inequalities We review briefly Shannon Inequalities, which are special cases of entropic inequalities, or information inequalities, and show how to use them to prove the inequalities above.
Let $X$ be a finite random variable, with outcomes $v_1, \ldots, v_N$ and probability function $p(v_i)$. Its entropy is defined as (the logarithm is usually in base 2, but any base can be used without affecting the results):

$h(X) = -\sum_{i=1}^N p(v_i) \log p(v_i)$

It holds that $h(X) \le \log N$, and $h(X) = \log N$ iff $X$ is uniform, i.e., $p(v_i) = 1/N$ for all $i$.
Let $X_1, \ldots, X_k$ be finite, jointly distributed random variables. For every subset $U$ of the variables, $h(U)$ denotes the entropy of the joint random variables in $U$; for example, $h(\{X_1\})$, $h(\{X_1, X_3\})$, etc. This defines an entropic vector, $\boldsymbol{h} = (h(U))_U$, with $2^k$ dimensions, one for each subset of variables. The following inequalities hold and are called Shannon basic inequalities (the second is called monotonicity and the third is called submodularity):

$h(\emptyset) = 0, \qquad h(U) \le h(U \cup V), \qquad h(U \cup V) + h(U \cap V) \le h(U) + h(V)$

A vector that satisfies the Shannon basic inequalities is called a polymatroid. Every entropic vector is a polymatroid, but the converse does not hold in general [42].
We now have the tools needed to prove inequalities (17)-(20). We illustrate only (18), and defer the others to the Appendix. Consider three input relations $R(x, y), S(y, z), T(z, x)$, and let $Q$ denote the output of the query (8), i.e. $(a, b, c) \in Q$ iff $(a, b) \in R$, $(b, c) \in S$, and $(c, a) \in T$. Define the uniform probability space on $Q$: there are $|Q|$ outcomes $(a, b, c)$, each with the same probability $1/|Q|$. Let $\boldsymbol{h}$ be its entropic vector. Note that $h(xyz) = \log |Q|$, because the probability distribution is uniform, and $h(xy) \le \log |R|$, because the support of the variables $x, y$ is a subset of the relation $R$. Then the following inequalities hold, and imply (18):

$h(xyz) + h(y) \le h(xy) + h(yz)$
$h(xyz) + h(\emptyset) \le h(y) + h(xz)$

Adding them and using $h(\emptyset) = 0$ gives $2\, h(xyz) \le h(xy) + h(yz) + h(xz) \le \log|R| + \log|S| + \log|T|$, hence $|Q| \le \sqrt{|R| \cdot |S| \cdot |T|}$.
We have applied the Shannon submodularity inequality twice: once to the sets $\{x, y\}$ and $\{y, z\}$, and once to the sets $\{y\}$ and $\{x, z\}$.
Entropic Constraints To prove upper bounds that involve degree sequences, like (17), (19), (20), we need to relate their $\ell_p$-norms to the entropic vector.
Let $X_1, \ldots, X_k$ be jointly distributed random variables whose support is the finite relation $R$, and let $X, Y$ be two sets of these variables. Let $\boldsymbol{h}$ be the entropic vector of the variables. The conditional entropy of $Y$ given $X$ is defined as $h(Y|X) \stackrel{\text{def}}{=} h(X \cup Y) - h(X)$.
The following was proven in [3], for any $p \in [1, \infty]$:

$\frac{1}{p}\, h(X) + h(Y|X) \le \log \|\deg_R(Y|X)\|_p$                (21)

When $p = \infty$, the inequality becomes $h(Y|X) \le \log \|\deg_R(Y|X)\|_\infty$, and when $p = 1$, it becomes $h(X \cup Y) \le \log \|\deg_R(Y|X)\|_1$. For a simple example, using (21) we can prove (16):

$h(xyz) = h(y) + h(x|y) + h(z|xy) \le \left[\tfrac{1}{2} h(y) + h(x|y)\right] + \left[\tfrac{1}{2} h(y) + h(z|y)\right] \le \log \|\deg_R(x|y)\|_2 + \log \|\deg_S(z|y)\|_2$

where the first inequality uses submodularity, $h(z|xy) \le h(z|y)$.
Inequalities (17)-(20) are proven similarly: we include them in Appendix B.
Computing PolyB So far, we introduced PolyB as the minimum bound that can be obtained from Shannon inequalities and entropic constraints (21). To compute it, we use an equivalent, dual definition, as the maximum value of a linear program (LP).
Suppose $Q$ has variables $x_1, \ldots, x_k$. The LP has $2^k$ real-valued variables, $h(U)$ for $U \subseteq \{x_1, \ldots, x_k\}$, representing the unknown polymatroid vector $\boldsymbol{h}$. The objective is to maximize $h(x_1 \cdots x_k)$, given two sets of linear constraints:
- Statistics constraints: there is one inequality of type (21) for each available input statistic.
- Shannon basic inequalities: the list of all submodularity and monotonicity inequalities.

PolyB is $2^{\text{opt}}$, where opt is the optimal value of this linear program. Intuitively, we maximize $h(x_1 \cdots x_k) = \log |Q|$, while constraining $\boldsymbol{h}$ to be a polymatroid that satisfies all statistics constraints (21).
We illustrate with an example, by showing the linear program for the query in (8):
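The following is a minimal sketch of this linear program, assuming for illustration that the available statistics are the three cardinalities $|R|, |S|, |T|$ and the norm $\|\deg_R(y|x)\|_2$ (any other set of statistics simply changes the first group of constraints):

maximize $h(xyz)$ subject to
$h(xy) \le \log|R|, \quad h(yz) \le \log|S|, \quad h(xz) \le \log|T|$   (type (21) with $p = 1$)
$h(xy) - \tfrac{1}{2} h(x) \le \log \|\deg_R(y|x)\|_2$   (type (21) with $p = 2$)
$h(\emptyset) = 0; \quad h(U) \le h(U \cup V), \quad h(U) + h(V) \ge h(U \cup V) + h(U \cap V)$ for all $U, V \subseteq \{x, y, z\}$   (Shannon basic inequalities)

The unknowns are the $2^3 = 8$ values $h(U)$, and PolyB is $2^{\text{opt}}$.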
PolyB has several attractive properties. It strictly generalizes both CB and the AGM bound, which use only statistics given by $\ell_1$-norms (cardinalities) and $\ell_\infty$-norms (max degrees). In addition, PolyB can also be used to estimate output sizes for queries with group-by (and select distinct) clauses; for this, we just need to adjust the objective of the linear program to refer to the entropic term for the group-by attributes. Access to more statistics is guaranteed to never make the bound worse; e.g., adding $\ell_3$-norms in addition to $\ell_2$-norms can only improve the bound. The inference time is exponential in $k$, the number of query variables, but several special cases are known when it can be computed in PTIME [34, 26, 7] (see Sec. 10). Finally, when all statistics are restricted to simple degree sequences, then PolyB is provably tight, up to a query-dependent multiplicative rounding error, similar to the AGM bound. Beyond simple degree sequences, PolyB is not tight in general, due to the existence of non-Shannon inequalities [42, 37], but the only known non-tight examples are artificial and unlikely to occur in practice.
9 The Degree Sequence Bound
The Degree Sequence Bound [15, 16], DSB for short, was the first system that proposed to use degree sequences for PCE. DSB provides tighter bounds than both the AGM and CB bounds, and an empirical evaluation [16] found it to be often tighter than density-based estimators, while always returning a guaranteed upper bound. Instead of using $\ell_p$-norms, DSB uses a compressed representation of the degree sequences. We illustrate here DSB assuming access to the full degree sequence, and discuss compression in Sec. 10.
Consider the 2-way join $R(x, y) \bowtie S(y, z)$, and assume that the degree sequences of the join attribute $y$ in $R$ and in $S$ are $f = (f_1, f_2, \ldots)$ and $g = (g_1, g_2, \ldots)$, respectively.
Recall that the degree sequences are sorted, e.g. $f_1 \ge f_2 \ge \cdots$, and that the rank of a value $v$ of $y$ in $R$ is the index $i$ for which the degree of $v$ is $f_i$. If every value has the same rank in $R$ and in $S$, then the size of the join is precisely $\sum_i f_i \cdot g_i$. In general, the following inequality holds (padding the shorter sequence with zeros):

$|R \bowtie S| \le \sum_i f_i \cdot g_i$                (22)

This bound is a strict improvement over the chain bound (4), because $\sum_i f_i \cdot g_i \le g_1 \cdot \sum_i f_i = |R| \cdot d_S$, and similarly $\sum_i f_i \cdot g_i \le |S| \cdot d_R$.
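The following is a minimal sketch of the bound (22); the input representation (plain Python lists for the two sorted degree sequences) is an illustrative assumption.

```python
# A minimal sketch of the degree-sequence bound (22) for R(x,y) ⋈ S(y,z):
# pair the two sorted degree sequences rank by rank.
def dsb_two_way(f, g):
    # f, g: degree sequences of the join attribute y in R and S,
    # sorted in decreasing order; the shorter one is padded with zeros.
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return sum(fi * gi for fi, gi in zip(f, g))

# Example: f = [3, 1, 1], g = [2, 2, 1] gives 3*2 + 1*2 + 1*1 = 9, which is
# an upper bound on the join size for any database with these sequences.
print(dsb_two_way([3, 1, 1], [2, 2, 1]))
```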
DSB enjoys several nice properties. It can be computed in linear time in the size of the compressed degree sequences; it is compositional, meaning that the estimate of a query plan can be computed bottom-up; given appropriate histograms (see Sec. 10) DSB is more accurate than density-based estimators [16]; finally, DSB is provably a tight upper bound of the query’s output. DSB also has a few limitations. It is limited to Berge-acyclic queries (see [17]), and its estimate is not given in terms of an inequality, like PolyB, but instead it is computed by an algorithm.
10 Pragmatic Considerations
We discuss here some practical aspects that need to be considered by PCE, or have been considered in previous implementations of PCE [9, 24, 16, 7].
Statistics selection While a density-based estimator only stores the number of distinct values of an attribute, PolyB can use multiple $\ell_p$-norms of various degree sequences $\deg_R(Y|X)$. A reasonable choice is to store, say, the four numbers $\|\deg_R(Y|X)\|_p$ for $p \in \{2, 3, 4, \infty\}$; we assume that the cardinality (the $\ell_1$-norm) is stored separately and do not include it. This means that the memory footprint increases to roughly 4 times that of a density-based estimator. But, in general, it is not clear whether we want to stop at $p = 4$. An implementation needs to choose which norms to precompute. Empirically, we have observed noticeable improvements in accuracy for JOB queries on the IMDB dataset as more norms are added, although with diminishing returns [7]. On the other hand, DSB compresses the degree sequence, and requires tuning the hyperparameters of the compression.
Computing the statistics All input statistics are computed offline. For example, the degree sequence $\deg_R(*|X)$ can be computed using a group-by query:

SELECT X, COUNT(*) FROM R GROUP BY X                (23)

DSB sorts the query output in decreasing order of the counts, while PolyB computes all desired $\ell_p$-norms in one pass over this output.
Conditional statistics In order to improve the estimates of queries with predicates, a pessimistic estimator can adopt traditional techniques like Most Common Values (MCV) and histograms. Assume that the system decides to store the following global statistics for column $X$ of relation $R$: $\|\deg_R(*|X)\|_p$ for $p \in \{2, 3, 4, \infty\}$, for a total of 4 numbers (recall that the $\ell_1$-norm is the cardinality $|R|$, which we store anyway). To support equality predicates on some attribute $B$, the system computes additional conditional statistics:

- For each value $b$ of $B$, compute the degree sequence of $X$ in $\sigma_{B=b}(R)$, which we denote by $\deg_{\sigma_{B=b}(R)}(*|X)$:

$\deg_{\sigma_{B=b}(R)}(*|X) = (d_1^{(b)}, d_2^{(b)}, \ldots)$                (24)

Then, compute the $\ell_p$-norms of these degree sequences (shown only for $p = 2$ below):

$\|\deg_{\sigma_{B=b}(R)}(*|X)\|_2$                (25)

- For each Most Common Value (MCV) $b$, store its 4 norms in the catalog. Call these conditional MCV statistics. If we use 100 MCVs (which is the default in Postgres), then these statistics consist of 400 numbers.

- Compute the conditional common statistics, i.e. the maximum of each of the 4 norms over all non-MCV values $b$, and add these 4 numbers to the catalog.

- At estimation time, consider three cases, as sketched in the code below. If the query does not have a predicate on $B$, then use the global statistics; if the predicate is $B = b$, where $b$ is an MCV, then use the corresponding conditional MCV statistics; otherwise use the conditional common statistics.
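The following is a minimal sketch of this three-case lookup; the catalog layout and names are illustrative.

```python
# A minimal sketch of the three-case statistics lookup described above.
def stats_for_predicate(catalog, column, value=None):
    col = catalog[column]
    if value is None:              # no predicate on this column
        return col["global"]       # e.g. {2: L2, 3: L3, 4: L4, "inf": Linf}
    if value in col["mcv"]:        # equality with a most common value
        return col["mcv"][value]   # conditional MCV statistics
    return col["common"]           # conditional common statistics
```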
Histograms Histograms can further improve the accuracy. Continuing the example above, we partition the values of the attribute $B$ into buckets, and for each bucket store conditional common statistics computed only over the values in that bucket: again 4 numbers per bucket. To support range predicates we also need to store, for each bucket, the largest value of $B$ that falls in the bucket.
Boolean Expressions The most powerful advantage of pessimistic estimators is that the Boolean connectives combine naturally: OR becomes $+$ and AND becomes $\min$. For example, if the query contains the predicate $B = b_1 \vee B = b_2$, then we use as statistics the sums $\|\deg_{\sigma_{B=b_1}(R)}(*|X)\|_p + \|\deg_{\sigma_{B=b_2}(R)}(*|X)\|_p$; similarly, for a conjunction of predicates we take the minimum of the bounds obtained from each conjunct. (We need to require $p \ge 1$, because Minkowski's inequality only holds for $p \ge 1$.) This principle extends to IN predicates (which are equivalent to a disjunction of equality predicates), and to LIKE predicates, which can be upper-bounded by a set of 3-grams, see [16].
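The following is a minimal sketch of these two combination rules; the representation of per-value statistics as dictionaries mapping $p$ to the norm is an illustrative assumption.

```python
# A minimal sketch of combining conditional norms for Boolean predicates.
def or_norms(norms_b1, norms_b2):
    # OR: by Minkowski's inequality (valid for p >= 1), the norm of the
    # union of the two selections is at most the sum of their norms.
    return {p: norms_b1[p] + norms_b2[p] for p in norms_b1}

def and_bound(bound1, bound2):
    # AND: each bound remains a valid upper bound, so take the minimum.
    return min(bound1, bound2)
```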
Lossy Compression The complete degree sequence can be as large as the data, and therefore it is not usable as a statistic for cardinality estimation. Since DSB needs access to the entire degree sequence, it uses three observations to drastically reduce the size of the statistics. First, run-length compression is very effective on degree sequences: it compresses a sequence by replacing any constant subsequence with the common value plus its length. Second, DSB can use a lossy compression, replacing the original sequence $f$ with a sequence $f'$ such that $f \le f'$ (element-wise) and the compression of $f'$ is much smaller: since DSB is monotone in the input degree sequences, it still returns an upper bound when $f$ is replaced with $f'$. However, since $\|f'\|_1 \ge \|f\|_1$, the new sequence represents a relation with a larger cardinality. Third, instead of upper-bounding the degree sequence itself (the PDF), DSB upper bounds its CDF, $F_k = \sum_{i \le k} f_i$: it uses a degree sequence $g$ such that $G \ge F$, where $G_k = \sum_{i \le k} g_i$. It is no longer obvious why DSB is still an upper bound when it uses $g$: we give the main intuition in Appendix C. Fig. 3 illustrates this method. The original sequence $f$ can be compressed to an element-wise upper bound $f'$, but its $\ell_1$-norm increases from 11 to 18. Instead, DSB upper bounds the CDF $F$ by a staircase $G$, see the second graph. The degree sequence $g$ that produces $G$ is not an element-wise upper bound of $f$, yet by using $g$ DSB still returns an upper bound.
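The following is a minimal sketch of CDF-based lossy compression: it produces a sequence that is constant inside each bucket (hence run-length compressible) and whose CDF dominates the original CDF. The choice of bucket boundaries is an illustrative assumption; a real implementation would choose them carefully.

```python
# A minimal sketch of CDF-based lossy compression of a degree sequence f.
import itertools, math

def compress_cdf(f, boundaries):
    F = list(itertools.accumulate(f))              # CDF of f
    g, G_prev, start = [], 0, 0
    for end in boundaries:                         # e.g. boundaries = [2, len(f)]
        # Smallest constant degree c such that the compressed CDF dominates
        # the true CDF at every rank inside the bucket [start, end).
        c = max(0, max(math.ceil((F[k] - G_prev) / (k - start + 1))
                       for k in range(start, end)))
        g.extend([c] * (end - start))
        G_prev += c * (end - start)
        start = end
    return g

f = [5, 3, 3, 1, 1]                # L1 norm = 13
print(compress_cdf(f, [2, 5]))     # -> [5, 5, 1, 1, 1]: not >= f element-wise,
                                   #    but its CDF dominates the CDF of f
```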
[Figure 3: lossy compression of a degree sequence: upper-bounding the PDF vs. upper-bounding the CDF.]
Runtime of the Pessimistic Estimator The runtime of DSB is linear in the size of the compressed representation of all degree sequences. The runtime of PolyB is exponential in $k$ (the number of query variables, Eq. (6)), because the LP uses $2^k$ numerical variables. This is undesirable, but there are a few ways to mitigate it. First, when all statistics are for simple degree sequences (meaning of the form $\deg_R(*|X)$, where $X$ is a single variable or $X = \emptyset$), then PolyB can be computed in PTIME, by using a totally different LP, with polynomially many numerical variables instead of $2^k$ [26, 7]; furthermore, even fewer variables are needed in the case of Berge-acyclic queries [7]. Restricting the pessimistic estimator to simple degree sequences is quite reasonable, because many current systems already restrict their statistics to single attributes, e.g. they store statistics on single columns, but rarely on pairs of columns. When multi-attribute statistics are needed, then there are a few ways to optimize the LP that computes PolyB. First, ensure that it includes only the elemental Shannon inequalities [41, Chap. 14.1]: there are $k + \binom{k}{2} 2^{k-2}$ elemental inequalities. Second, drop unnecessary variables. For example, to compute the PolyB of a 2-way join it suffices to keep only one non-joining variable in each relation, i.e. simplify the query to $R(x, y) \wedge S(y, z)$. Third, it is well known that creating the input for the LP solver often takes more time than the solver itself: to speed up the creation of the LP, the system can precompute and represent all elemental Shannon inequalities, since these depend only on $k$ and not on the query.
11 A Wish Come True?
We return now to our wish list in Sec. 3 and examine how pessimistic cardinality estimators do or do not satisfy them. The main advantage of PCE’s is their one-sided guarantee. An application can count on the fact that the query output size will never exceed the PCE bound. The ability to provide a strong guarantee without any assumption is the main advantage of PCE over both traditional methods and ML-based methods. They are also local, in the sense that they only use statistics collected on each relation independently: if a new relation needs to be added to the database schema, then we only need to add statistics for that relation. This is similar to traditional CE’s, and different from estimators based on trained models, which need access to the entire database.
The next question is how accurate the PCEs are. Only a few empirical evaluations have been done: for BoundSketch [9, 10, 35], DSB [16], and PolyB [7]. Out of the box, BoundSketch produces worse estimates than traditional estimators; however, all implementations augment BoundSketch with hash-partitioning [9, 35], which improves the accuracy, but at the cost of increased memory and significantly increased construction and estimation time. (An improved partitioning was described recently in [14].) While BoundSketch uses only cardinalities and max degrees, DSB and PolyB use the complete degree sequences (under lossy compression) and norms on these degree sequences, respectively, and produce much better bounds with less memory and with small estimation time [16, 7]. What several implementations report is that, when the traditional estimator in Postgres is replaced by BoundSketch, DSB, or PolyB, the performance of expensive SQL queries improves, sometimes significantly, because the optimizer becomes more cautious in choosing an execution plan. Cheap queries, however, tend to suffer from a regression.
In general, the empirical evaluation of PCEs is still very limited. Future research is needed to evaluate the impact of indices on the essential query subexpressions, or of bitmap filtering; see [29] for an extensive discussion of the impact of the cardinality estimator on the query optimizer.
Moving down our wish list, we note that DSB is compositional; in fact the algorithm computing DSB [15, 16] is defined by recursion on the structure of a Berge-acyclic query. On the other hand, PolyB is not compositional. This appears to be unavoidable for cyclic queries, which do not admit a recursive definition. For example, computing the upper bounds on the 2-way join $R(x, y) \wedge S(y, z)$ does not help us in deriving the bound of the 3-cycle (Eq. (18)). An open question is whether PolyB can be made compositional on (Berge-)acyclic queries.
Pessimistic cardinality estimators can be combined naturally. If we have two bounds on the query, $B_1$ and $B_2$, we can "combine" them by taking their minimum, $\min(B_1, B_2)$. Similar combinations can be done to handle complex predicates, consisting of AND, OR, IN, LIKE. We believe that this is the second main advantage of PCEs over traditional or ML-based methods. None of the other cardinality estimation methods have a natural way of combining estimates, and a recent study has found that different combination heuristics lead to quite different accuracies [10].
Finally, we consider the desire for incremental updates. Ideally, when the database is updated (via an insert/update/delete query), we would like to update the statistics incrementally. For context, we point out that, to the best of our knowledge, no relational database system updates its statistics incrementally, because it would slow down OLTP workloads, especially since updating a statistic requires acquiring a lock, which can quickly lead to high contention. A similar reason would prevent them from doing incremental updates for pessimistic cardinality estimators. However, the question remains whether the data structures used by PCEs are able to support incremental updates. This appears to require materializing the degree sequence computed by Query (23), which is unlikely to be practical. Future work is needed to explore whether $\ell_p$-sketches [6, 13] can be adapted for the purpose of incremental updates of the input statistics.
Acknowledgments We thank Nicolas Bruno, Surajit Chaudhuri, Bailu Ding, Benny Kimelfeld, Kukjin Lee, and Hung Ngo for their comments that helped improve the paper.
References
- [1] Statistics in SQL server. https://learn.microsoft.com/en-us/sql/relational-databases/statistics/statistics?view=sql-server-ver16, 2024. Accessed: October 2024.
- [2] Abo Khamis, M., Nakos, V., Olteanu, D., and Suciu, D. Join size bounds using lp-norms on degree sequences. CoRR abs/2306.14075 (2023).
- [3] Abo Khamis, M., Nakos, V., Olteanu, D., and Suciu, D. Join size bounds using lp-norms on degree sequences. Proc. ACM Manag. Data 2, 2 (2024), 96.
- [4] Abo Khamis, M., Ngo, H. Q., and Suciu, D. Computing join queries with functional dependencies. In PODS (2016), pp. 327–342.
- [5] Abo Khamis, M., Ngo, H. Q., and Suciu, D. What do shannon-type inequalities, submodular width, and disjunctive datalog have to do with one another? In PODS (2017), pp. 429–444. Extended version available at http://arxiv.org/abs/1612.02503.
- [6] Alon, N., Matias, Y., and Szegedy, M. The space complexity of approximating the frequency moments. In STOC (1996), pp. 20–29.
- [7] Anonymous Authors. LpBound: Pessimistic cardinality estimation using l-norms of degree sequences, 2024. technical report.
- [8] Atserias, A., Grohe, M., and Marx, D. Size bounds and query plans for relational joins. SIAM J. Comput. 42, 4 (2013), 1737–1767.
- [9] Cai, W., Balazinska, M., and Suciu, D. Pessimistic cardinality estimation: Tighter upper bounds for intermediate join cardinalities. In SIGMOD (2019), pp. 18–35.
- [10] Chen, J., Huang, Y., Wang, M., Salihoglu, S., and Salem, K. Accurate summary-based cardinality estimation through the lens of cardinality estimation graphs. Proc. VLDB Endow. 15, 8 (2022), 1533–1545.
- [11] Chen, Y., and Yi, K. Two-level sampling for join size estimation. In SIGMOD (2017), pp. 759–774.
- [12] Cohen, E. Sampling big ideas in query optimization. In PODS (2023), pp. 361–371.
- [13] Cormode, G., and Yi, K. Small Summaries for Big Data. Cambridge University Press, 2020.
- [14] Deeds, K., Sabale, D., Kayali, M., and Suciu, D. Color: A framework for applying graph coloring to subgraph cardinality estimation. CoRR abs/2405.06767 (2024).
- [15] Deeds, K., Suciu, D., Balazinska, M., and Cai, W. Degree sequence bound for join cardinality estimation. In ICDT (2023), pp. 8:1–8:18.
- [16] Deeds, K. B., Suciu, D., and Balazinska, M. Safebound: A practical system for generating cardinality bounds. Proc. ACM Manag. Data 1, 1 (2023), 53:1–53:26.
- [17] Fagin, R. Degrees of acyclicity for hypergraphs and relational database schemes. J. ACM 30, 3 (1983), 514–550.
- [18] Friedgut, E., and Kahn, J. On the number of copies of one hypergraph in another. Israel Journal of Mathematics 105, 1 (1998), 251–256.
- [19] Garcia-Molina, H., Ullman, J. D., and Widom, J. Database systems - the complete book (2. ed.). Pearson Education, 2009.
- [20] Gottlob, G., Lee, S. T., Valiant, G., and Valiant, P. Size and treewidth bounds for conjunctive queries. J. ACM 59, 3 (2012), 16:1–16:35.
- [21] Grohe, M., and Marx, D. Constraint solving via fractional edge covers. In SODA (2006), pp. 289–298.
- [22] Grohe, M., and Marx, D. Constraint solving via fractional edge covers. ACM Trans. Alg. 11, 1 (2014), 4:1–4:20.
- [23] Han, Y., Wu, Z., Wu, P., Zhu, R., Yang, J., Tan, L. W., Zeng, K., Cong, G., Qin, Y., Pfadler, A., Qian, Z., Zhou, J., Li, J., and Cui, B. Cardinality estimation in DBMS: A comprehensive benchmark evaluation. Proc. VLDB Endow. 15, 4 (2021), 752–765.
- [24] Hertzschuch, A., Hartmann, C., Habich, D., and Lehner, W. Simplicity done right for join ordering. In CIDR (2021).
- [25] Im, S., Moseley, B., Ngo, H., and Pruhs, K. Tractable algorithms for join evaluation and cardinality estimation via the dual-project-dual technique, 2022. Personal communication.
- [26] Im, S., Moseley, B., Ngo, H. Q., Pruhs, K., and Samadian, A. Optimizing polymatroid functions. CoRR abs/2211.08381 (2022).
- [27] Joglekar, M., and Ré, C. It’s all a matter of degree: Using degree information to optimize multiway joins. In ICDT (2016), pp. 11:1–11:17.
- [28] Kim, K., Jung, J., Seo, I., Han, W., Choi, K., and Chong, J. Learned cardinality estimation: An in-depth study. In SIGMOD (2022), pp. 1214–1227.
- [29] Lee, K., Dutt, A., Narasayya, V. R., and Chaudhuri, S. Analyzing the impact of cardinality estimation on execution plans in microsoft SQL server. Proc. VLDB Endow. 16, 11 (2023), 2871–2883.
- [30] Leis, V., Gubichev, A., Mirchev, A., Boncz, P. A., Kemper, A., and Neumann, T. How good are query optimizers, really? Proc. VLDB Endow. 9, 3 (2015), 204–215.
- [31] Li, F., Wu, B., Yi, K., and Zhao, Z. Wander join: Online aggregation via random walks. In SIGMOD (2016), pp. 615–629.
- [32] Li, F., Wu, B., Yi, K., and Zhao, Z. Wander join and XDB: online aggregation via random walks. ACM Trans. Datab. Syst. 44, 1 (2019), 2:1–2:41.
- [33] Markl, V., Haas, P. J., Kutsch, M., Megiddo, N., Srivastava, U., and Tran, T. M. Consistent selectivity estimation via maximum entropy. VLDB J. 16, 1 (2007), 55–76.
- [34] Ngo, H. Q. On an information theoretic approach to cardinality estimation (invited talk). In ICDT (2022), pp. 1:1–1:21.
- [35] Park, Y., Ko, S., Bhowmick, S. S., Kim, K., Hong, K., and Han, W. G-CARE: A framework for performance benchmarking of cardinality estimation techniques for subgraph matching. In SIGMOD (2020), pp. 1099–1114.
- [36] Selinger, P. G., Astrahan, M. M., Chamberlin, D. D., Lorie, R. A., and Price, T. G. Access path selection in a relational database management system. In SIGMOD (1979), pp. 23–34.
- [37] Suciu, D. Applications of information inequalities to database theory problems. In LICS (2023), pp. 1–30.
- [38] Wang, X., Qu, C., Wu, W., Wang, J., and Zhou, Q. Are we ready for learned cardinality estimation? Proc. VLDB Endow. 14, 9 (2021), 1640–1654.
- [39] Wu, Z., and Shaikhha, A. Bayescard: A unified bayesian framework for cardinality estimation. CoRR abs/2012.14743 (2020).
- [40] Yang, Z., Kamsetty, A., Luan, S., Liang, E., Duan, Y., Chen, X., and Stoica, I. Neurocard: One cardinality estimator for all tables. Proc. VLDB Endow. 14, 1 (2020), 61–73.
- [41] Yeung, R. W. Information Theory and Network Coding, 1 ed. Springer Publishing Company, 2008.
- [42] Zhang, Z., and Yeung, R. W. On characterization of entropy function via information inequalities. IEEE Trans. Inf. Theory 44, 4 (1998), 1440–1452.
Appendix A Proof of the Chain Bound
We use the notations introduced in Sec. 7 and the entropic tools from Sec. 8. Let $\boldsymbol{h}$ be the entropic vector of the uniform distribution on the query output, so that $h(x_1 \cdots x_k) = \log |Q|$, and let $w_1, \ldots, w_s$ be a fractional cover w.r.t. the variable ordering $\sigma = (x_{i_1}, \ldots, x_{i_k})$. We prove:

$h(x_{i_1} \cdots x_{i_k}) \le \sum_j w_j \cdot h(Y_j | X_j)$                (26)

By the chain rule, $h(x_{i_1} \cdots x_{i_k}) = \sum_{t=1}^k h(x_{i_t} \mid x_{i_1} \cdots x_{i_{t-1}})$. Fix $t$; since the weights of the statistics covering $x_{i_t}$ sum up to at least 1, we have $h(x_{i_t} \mid x_{i_1} \cdots x_{i_{t-1}}) \le \sum_{j : (X_j, Y_j) \text{ covers } x_{i_t}} w_j \cdot h(x_{i_t} \mid x_{i_1} \cdots x_{i_{t-1}})$. Summing over $t$ and regrouping by $j$, it suffices to show, for every statistic $j$: $\sum_{t : (X_j, Y_j) \text{ covers } x_{i_t}} h(x_{i_t} \mid x_{i_1} \cdots x_{i_{t-1}}) \le h(Y_j | X_j)$. This follows from the chain rule applied to $h(Y_j | X_j)$ and submodularity: every covered variable $x_{i_t} \in Y_j \setminus X_j$ is preceded in $\sigma$ by all of $X_j$ and by the earlier variables of $Y_j$, hence $h(x_{i_t} \mid x_{i_1} \cdots x_{i_{t-1}}) \le h(x_{i_t} \mid X_j \cup \{\text{earlier variables of } Y_j\})$, and these conditional entropies sum up to at most $h(Y_j | X_j)$. The chain bound (10) then follows from (26) by applying the entropic constraint (21) with $p = \infty$ to each statistic, using the standard argument from Sec. 8.
Appendix B Some Inequality Proofs
Appendix C CDF Compression
We prove here that the upper bound in Eq. (22) continues to hold if we replace the two degree sequences with sequences whose CDFs are larger.
It will be convenient to use some notations. Given any sequence $f = (f_1, \ldots, f_n)$, we denote by $F$ the sequence of its partial sums, $F_k = \sum_{i \le k} f_i$, with the convention $F_0 = 0$. If $f$ is a probability distribution, meaning that $\sum_i f_i = 1$, then the sequence $f$ is the Probability Density Function (PDF), and $F$ is the Cumulative Density Function (CDF). With some abuse, we use the terms PDF and CDF even when $f$ is not a probability distribution. Obviously, $f_k = F_k - F_{k-1}$.
The following summation-by-parts identity holds for any two sequences $f, d$ of length $n$, with the convention $d_{n+1} = 0$:

$\sum_{k=1}^n f_k \cdot d_k = \sum_{k=1}^n F_k \cdot (d_k - d_{k+1})$

Using this formula, we prove:

Lemma C.1

Let $f, d$ be two non-negative sequences of length $n$, and assume that $d$ is non-increasing (meaning, $d_1 \ge d_2 \ge \cdots \ge d_n$). Let $F$ be the CDF of $f$, and let $g$ be a non-negative sequence such that $G \ge F$ (element-wise). Then, the following holds, where $d_{n+1} = 0$:

$\sum_k f_k \cdot d_k \le \sum_k g_k \cdot d_k$

The proof follows from:

$\sum_k f_k \cdot d_k = \sum_k F_k \cdot (d_k - d_{k+1}) \le \sum_k G_k \cdot (d_k - d_{k+1}) = \sum_k g_k \cdot d_k$

which holds because $F_k \le G_k$ and $d_k - d_{k+1} \ge 0$. By applying the lemma a second time, to the other degree sequence, we infer that the output of a join query (22) can be upper bounded by $\sum_k g'_k \cdot g''_k$, where $g'$ and $g''$ are the PDFs of the two dominating CDFs $G'$ and $G''$.