CARAMEL: A Succinct Read-Only Lookup Table via Compressed Static Functions
Abstract.
Lookup tables are a fundamental structure in many data processing and systems applications. Examples include tokenized text in NLP, quantized embedding collections in recommendation systems, integer sketches for streaming data, and hash-based string representations in genomics. With the increasing size of web-scale data, such applications often require compression techniques that support fast random lookup of individual parameters directly on the compressed data (i.e. without blockwise decompression in RAM). While the community has proposed a number of succinct data structures that support queries over compressed representations, these approaches do not fully leverage the low-entropy structure prevalent in real-world workloads to reduce space. Inspired by recent advances in static function construction techniques, we propose a space-efficient representation of immutable key-value data, called CARAMEL, specifically designed for the case where the values are multi-sets. By carefully combining multiple compressed static functions, CARAMEL occupies space proportional to the data entropy with low memory overheads and minimal lookup costs. We demonstrate 1.25-16x compression on practical lookup tasks drawn from real-world systems, improving upon established techniques, including a production-grade read-only database widely used for development within Amazon.com.
1. Introduction
Efficient data structures for storing key-value pairs are indispensable components for countless modern systems, spanning a diverse array of applications including recommender systems, search engines, genomics, and natural language processing. With the ballooning volume of web-scale data, practitioners maintain an intense interest in building caches with a lower memory footprint and faster query times. Many such applications involve static data collections that need not support real-time insertions or deletions, which allows for more aggressive compression strategies. Motivated by this observation, we focus on developing a read-only data structure whose space is close to the information-theoretic lower bound while supporting constant-time lookups directly on the compressed representation.
In this work, we design such a representation using compressed static functions and call our data structure CARAMEL (Compressed Array Representation And More Efficient Lookups). A static function is a mapping from a static, known set of hashable keys to encodable values such as strings and integers. A compressed static function (CSF) is a static function $f$ that can be represented in space proportional to the entropy of its values and evaluated in $\mathcal{O}(1)$ time (dietzfelbinger2008succinct, ). While CSFs were historically a purely theoretical curiosity, recent advances in practical construction techniques allow near-optimal sized CSFs to be built for millions of parameters (8416577, ; genuzio2016fast, ; genuzio2020fast, ).
However, prior work on utilizing CSFs in practical applications largely focuses on storing a single value for a given key (genuzio2016fast, ; shibuya2022space, ). In this paper, we study the problem of how to efficiently build a CSF structure for a multiset of values. This setting arises in a number of real-world applications. For example, search and recommendation systems often cache the most popular query-result pairs to reduce the throughput demands on machine learning models and improve the average latency (Luo2022, ). In large-scale production serving systems, this table can contain millions of keys, each with hundreds of values, and consumes the majority of the server memory budget (guo2020accelerating, ; nigam2019semantic, ). Furthermore, common NLP and genomics tokenization processes yield massive arrays of integer tokens (shu2018compressing, ; chen2018learning, ; ondov2016mash, ). To that end, we propose a novel data structure based on arrays of multiple CSFs to address these data-intensive applications. By carefully designing our data structure to support multiple CSFs with minimal overhead, we find that we can achieve further compression than simply using a single static function, and we validate our approach across a myriad of real-world static retrieval tasks.
With this goal in mind, we consider the following problem. Given a static set of pairs $(x_i, V_i)$ consisting of a key $x_i$ and a list of values $V_i$, we wish to construct a small data structure $\mathcal{D}$ that can return an individual value $V_i[j]$ in constant time and hence the entire list in $\mathcal{O}(|V_i|)$ time. Standard file compression methods that employ run-length encoding, delta coding, specialized codecs and other techniques (witten1999managing, ; scholer2002compression, ; williams1999compressing, ; thompson2022github, ) are insufficient for this task because they do not allow queries directly on the compressed representation. Moreover, existing succinct data structures do not fully leverage the underlying structure of the data to achieve lower space. In summary, our ideal data representation should satisfy three requirements:
1) Lossless Compression: $\mathcal{D}$ must represent the data with no error, and $|\mathcal{D}|$ should be much smaller than the uncompressed input. Ideally, $\mathcal{D}$ is of similar size to that obtained via standard compression methods.
2) Random Access: The cost to obtain a value from $\mathcal{D}$ should be $\mathcal{O}(1)$ and fast in practice. Ideally, the lookup time should be comparable to the latency of a hash table.
3) Fast Construction: The time and computation needed to construct $\mathcal{D}$ should not exceed practical limitations. For example, a standard workstation should be able to compress an input with a million keys in a few minutes. (A minimal interface sketch follows this list.)
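To make these requirements concrete, the following minimal Python interface sketches the contract we want. The class and method names are our own illustration, not the released library's API:

```python
from typing import Hashable, List, Protocol

class CompressedLookupTable(Protocol):
    """Contract for the read-only structure D described above."""

    def get(self, key: Hashable, index: int) -> int:
        """Return the index-th value associated with `key` in O(1) time."""
        ...

    def get_row(self, key: Hashable) -> List[int]:
        """Return all d values for `key` in O(d) time."""
        ...

    def size_in_bytes(self) -> int:
        """Should be close to the entropy of the stored value distribution."""
        ...
```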
1.1. Motivating Applications
Machine Learning Prediction Serving: Compressed array representations can serve as space-efficient databases for serving precomputed predictions from machine learning models. In large-scale online settings, such as in search engines, latency and throughput limitations make it infeasible to process every query through a large, expensive ML model. Therefore, a common optimization is to cache the outputs for frequent queries (Luo2022, ). This mapping must have $\mathcal{O}(1)$ access time and be space-efficient to handle the data volume of modern systems. Moreover, the reduced space footprint enabled by a compressed representation can be especially beneficial for storing and serving results in edge computing environments.
Tokenization: Tokenization, the process of breaking down a text into smaller units, is an essential step for nearly all NLP modeling tasks. To feed this tokenized text into an NLP model such as BERT, practitioners typically map each token to a unique integer ID (devlin2018bert, ; song2020linear, ; kudo2018sentencepiece, ). For very large text corpora, it quickly becomes infeasible to store the entire tokenized dataset in main memory for model training. Consequently, practitioners have adopted more sophisticated data pipelines that tokenize text on the fly in parallel with training on a separate batch of data. This approach, though, adds additional engineering complexity to a model training framework and may even become a computational bottleneck when training large but shallow models, such as those commonly found in recommender systems (naumov2019deep, ). In addition, this approach hinders the ability to access arbitrary training examples, preventing the use of recently developed algorithms for dynamic negative sampling, importance sampling and other batch selection approaches (daghaghi2021tale, ; coleman2019selection, ; katharopoulos2018not, ; jiang2019accelerating, ). These challenges motivate the need for a compressed representation of the tokenized data, which is simply a matrix of integers and thus a candidate for CSF compression.
Genomics: Given the massive scale of modern genome datasets, genomics has emerged as a particularly critical domain for compressed representations that allow for efficient lookups. Much like text processing, genome datasets involve representing sequences of nucleotides as strings. Modern genomics archives contain petabytes of data, and the data generation rate has only increased with new developments in sequencing technology (elworth2020petabytes, ). Applications such as genome search (gupta2021fast, ), $k$-mer coverage estimation (brown2012reference, ), metagenomic analysis (segata2012metagenomic, ) and several others all require mappings from $k$-mer strings (genomic $n$-grams) to integers. Depending on the application, these integer IDs can refer to sequencing coverage metrics, species identifiers, minhash-based featurizations (ondov2016mash, ) and other meta-information. The genomics community has recently studied the application of CSFs prior to our work for the purpose of mapping sequence reads to coverage numbers during assembly (shibuya2022space, ).
In this work, we apply CARAMEL to compress RefSeq, a collection of reference sequences for genomic DNA, transcripts and proteins (pruitt2006ncbi, ). Compressed representations of large reference sequence databases are incredibly important for the genomics community because they allow biological analyses to be conducted against larger references and with fewer computational resources. It is for this reason that the Mash featurization, a way of representing sequences as integer hash codes, became a highly popular first step in many analysis pipelines. We focus on compressing this featurization even further, which could enable analysis to scale to more problems and computational environments.
1.2. Contributions
We observe that several tasks in data-intensive systems, including quantized embedding lookups, precomputed result caches, and genomic data processing, critically rely on fast key-value lookups over a multiset of values. Based on recent advances in static function representations, we propose a compressed data structure for fast lookups while achieving a near-optimal space footprint. Overall, our contributions can be summarized as follows:
Compressed Memory for Parameter Arrays: We extend compressed static functions to build a novel data structure consisting of arrays of CSFs, which we call CARAMEL. We show that our structure provides $\mathcal{O}(1)$ access to rows, uses memory proportional to the column-wise entropy of the matrix, and removes the need to explicitly store the keys. We also provide evidence that an array of CSFs offers more potential for compression via implicit parameter groupings.
Theoretical and Practical Improvements: We improve the existing theory of CSFs in two respects. Previously, (shibuya2022space, ) introduced the idea of storing the dominant tokens in a separate Bloom filter (bloom1970space, ) to reduce the CSF size. We show that the heuristic developed by (shibuya2022space, ) is suboptimal. To address this, we develop a rigorous proof establishing the precise conditions under which a Bloom filter aids overall compression (Theorem 5.2). We also improve the CSF construction process by introducing a greedy heuristic to permute the rows of the input matrix to minimize the column entropies, achieving up to 40% additional compression for settings like genomics where the order of tokens is unimportant.
Practical Applications: To our knowledge, we provide the first demonstration that CSFs can be applied to practical applications in a variety of data systems, such as text tokenization, quantized embedding tables, count sketches, genome datasets, and caching search results. In these settings, we show that CSFs can achieve a significant reduction in memory footprint while maintaining acceptable construction times and lookup latencies (Table 1). In fact, we find that CARAMEL can achieve faster query times than a standard hash table (see Figure 1), which underscores the practical relevance of our proposal. We also conduct a case study comparing CARAMEL to a production-grade read-only database used widely within Amazon.com, where we find that our proposed method achieves significantly improved compression with comparable construction times. Finally, we open-source the Python library (https://github.com/brc7/caramel) that we developed for conducting experiments which, to the best of our knowledge, is the first publicly available CSF implementation to operate on general hashable types as opposed to only integers. We expect this implementation to be a useful reference for the community, especially in genomics where the lack of a Python module is a serious barrier to the adoption and use of CSF structures (shibuya2022space, ).
2. Background
In this section, we introduce the tools needed for our proposed data structure, beginning with compressed static functions. A CSF, first formally introduced by (belazzougui2013compressed, ), is a function $f$ satisfying the following definition. Constructing such functions is the subject of considerable research in the theory community (porat2009optimal, ; dietzfelbinger2008succinct, ; hreinsson2009storing, ; belazzougui2013compressed, ; botelho2013practical, ).
Definition 2.1.
Given a set of key-value pairs $\{(x_i, v_i)\}_{i=1}^{n}$ and a universe $\mathcal{U}$ of possible keys, let $S \subseteq \mathcal{U}$ denote the set of unique keys and $V$ denote the multiset of values $\{v_i\}_{i=1}^{n}$. A compressed static function is a function $f$ which returns $f(x_i) = v_i$ if $x_i \in S$ and any value if $x \notin S$.
It should be noted that the values $V$ are a multiset because the elements are not necessarily unique. For example, we may require that $v_i = v_j$ for two distinct keys $x_i \neq x_j$. When combined with the fact that CSFs are static, the non-uniqueness of values will permit us to attain good compression by developing an optimal instantaneous code for the values of $V$. To quantify the space needed by the CSF, we introduce the concept of empirical entropy.
Definition 2.2.
Given a multiset $V$ of $n$ values, let $\Sigma_V$ be the set of unique values, or support, of $V$ and $n_\sigma$ be the number of occurrences of $\sigma \in \Sigma_V$. The first-order entropy of $V$ is:
$$H_0(V) = \sum_{\sigma \in \Sigma_V} \frac{n_\sigma}{n} \log_2 \frac{n}{n_\sigma}$$
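The empirical entropy of Definition 2.2 is a one-liner to compute. The hypothetical helper below (our own, for illustration) reports bits per element:

```python
import math
from collections import Counter

def first_order_entropy(values: list) -> float:
    """First-order (empirical) entropy H0 of a multiset, in bits per element."""
    counts = Counter(values)
    n = len(values)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A heavily skewed multiset is cheap to encode:
# first_order_entropy([0] * 90 + [1] * 10) ~= 0.469 bits per value,
# versus 1 bit per value for a naive binary code.
```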
Compressed static functions aim to represent $f$ in space proportional to $n H_0(V)$ (belazzougui2013compressed, ). Several different construction methods are possible. The simplest method is to use an $\mathcal{O}(n)$-space bijection from $S$ to the integers $\{1, \dots, n\}$ to index into an array of encoded values (e.g. using a minimal perfect hash function and Huffman coding). However, this technique requires substantial overhead in the form of the bijection and pointers to the array. Recent work demonstrates substantial improvements by directly computing the encodings of the values.
CSFs via Linear Systems: The fundamental insight of (8416577, ) is that a CSF may be constructed via a binary linear system. To illustrate this construction technique, suppose we wish to construct a CSF for the set of key-value pairs $\{(x_i, v_i)\}_{i=1}^{n}$. We begin by hashing each key using three universal (random) hash functions, resulting in three integers for each key. We also apply an instantaneous code $C$ to obtain a binary encoding for each value. This process gives rise to a linear system that, when solved, yields a CSF.
For a concrete example, let's write out a portion of the linear system for two keys $x_1$ and $x_2$. Suppose that the hash functions output positions in the range $[1, m]$, so that each key $x$ is associated with three locations $h_1(x), h_2(x), h_3(x)$ in an unknown bit vector $s \in \{0,1\}^m$, and suppose that the encoding assigns $C(f(x_1))$ a 1-bit code word and $C(f(x_2))$ a 3-bit code word. We consider a linear system over the binary field, where addition and multiplication are replaced by the XOR and AND operations, respectively.
We use the notation $C(v)_j$ to denote the $j$th bit of the instantaneous encoding for the value $v$. The positions of the 1's in the system matrix are decided by the hash functions of the corresponding key $x$, offset by the position $j$ of the corresponding bit of the encoding $C(f(x))$:
$$s_{h_1(x) + j} \oplus s_{h_2(x) + j} \oplus s_{h_3(x) + j} = C(f(x))_j.$$
In our example, the first row of the matrix corresponds to $x_1$, as $f(x_1)$ has a 1-bit encoding, and the next three rows are responsible for the 3-bit encoding of $f(x_2)$. Thus, there is one linear equation for each bit of each encoded value. The solution $s$ of this system, if it exists, can serve as a lookup table to compute $f(x)$. To obtain the first bit of $C(f(x))$, we access the bits of $s$ at the locations identified by the hash values of $x$ and compute their XOR. To obtain the subsequent bits of the encoding, we shift the locations to the right. Because $C$ is an instantaneous code, the process finishes as soon as we read a valid code word.
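The lookup procedure is simple enough to sketch directly. The helper below is illustrative only (the hashing scheme and code-table layout are our assumptions, not the construction of (8416577, )); it assumes the bit vector `s` has already been solved for:

```python
import hashlib

def hash_positions(key: str, m: int, k: int = 3) -> list:
    """Derive k pseudo-random positions in [0, m) from the key."""
    digest = hashlib.sha256(key.encode()).digest()
    return [int.from_bytes(digest[4 * i:4 * i + 4], "little") % m
            for i in range(k)]

def csf_lookup(key: str, s: list, decode: dict, max_len: int):
    """Recover f(key) by XOR-ing bits of the solved vector s.

    `decode` maps prefix-code bit strings (e.g. "10") to values, and
    `max_len` is the longest code word length.
    """
    positions = hash_positions(key, len(s) - max_len)
    prefix = ""
    for j in range(max_len):        # one equation per encoded bit
        bit = 0
        for h in positions:
            bit ^= s[h + j]         # shift positions right by bit index j
        prefix += str(bit)
        if prefix in decode:        # instantaneous code: stop at a valid word
            return decode[prefix]
    raise KeyError("key was not in the static set")
```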
The probability that the system is solvable depends on the length of $s$ and is governed by the satisfiability of $k$-XORSAT instances (where $k$ is the number of hash functions used) (dubois20023, ). The major contribution of (8416577, ; genuzio2016fast, ; genuzio2020fast, ) is to make these large systems practically solvable.
Theorem 2.3.
Given the values $V$ in Definition 2.1, let $C$ be a prefix-free encoding for $\Sigma_V$ where $\sigma \in \Sigma_V$ is represented by a code word of length $\ell_\sigma$. Then the static function of (hreinsson2009storing, ; 8416577, ) occupies space:
$$c \sum_{i=1}^{n} \ell_{v_i} + \mathcal{O}(|\Sigma_V|)$$
where $c \geq 1$ is a constant and the $\mathcal{O}(|\Sigma_V|)$ term is the cost to store the codebook for the encoding $C$.
Sources of Overhead: When the code lengths from Theorem 2.3 exactly match the logarithms of the inverse frequencies of the values (i.e. $\ell_\sigma = \log_2(1/p_\sigma)$), the sum of code lengths becomes exactly $n H_0(V)$, leading to an overhead of $(c - 1)\, n H_0(V)$ bits (roughly 8.9% for three hash functions, where $c \approx 1.089$). However, this is not the case in practice: we observe a much larger per-key overhead. There are two reasons for this: code length mismatch and codebook storage. While Huffman codes have an expected per-key overhead within 1 bit of the Shannon bound ($H_0(V) \leq \mathbb{E}[\ell] < H_0(V) + 1$), the encoding only hits the bound with equality when frequencies decay in inverse powers of two. Also, the codebook requires storage of $\Sigma_V$ itself, which can be expensive when $\Sigma_V$ consists of long strings or integers.
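For a quick worked illustration of code length mismatch, consider three values with frequencies $(1/2, 1/4, 1/4)$: Huffman assigns lengths $(1, 2, 2)$ and the average code length is exactly $H_0 = 1.5$ bits. For frequencies $(0.6, 0.3, 0.1)$, Huffman still assigns lengths $(1, 2, 2)$, giving an average of $0.6 \cdot 1 + 0.3 \cdot 2 + 0.1 \cdot 2 = 1.4$ bits against an entropy of roughly $1.295$ bits, a per-key overhead of about 8% before the constant $c$ is even applied.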
Bloom Filter Heuristics: A particularly bad situation is when one value occurs more than 50% of the time. When this happens, prefix codes incur an overhead of at least $1 - \log_2(1/p_1)$ bits for each key with the dominant value (since at least one bit is needed for every value, while the information content of the dominant value is only $\log_2(1/p_1) < 1$ bit). This observation has led genomics practitioners to adopt a filtering approach wherein a Bloom filter (shibuya2022space, ) is used to identify keys with non-dominating values, reducing the load on the CSF and improving the overhead. The filter parameters are based on a complex learned heuristic that predicts the size of the CSF. To the best of our knowledge, this is the only other work that applies CSFs to practical real-world problems. Unfortunately, we found that their filtering method actually introduces substantial overhead, making it worse than using the CSF alone, likely because the heuristics are specifically tuned for their $k$-mer count table application. We conduct a precise analysis of pre-filtering methods and give an information-theoretic bound on the combined space of the CSF and the filter that yields optimal hyperparameters.
3. Related Work
In this section, we introduce two algorithms that can be used for similar purposes as CARAMEL. We compare against both algorithms in the experiments.
3.1. Minimal Perfect Hash Tables
The problem of minimal perfect hashing is closely related to CSFs. A minimal perfect hash function is a function that maps a set of $n$ unique keys to a permutation of the consecutive integers $\{1, \dots, n\}$. Clearly, minimal perfect hash (MPH) functions are special cases of static functions (where all the values are different). Similar construction methods (e.g. hypergraph peeling (belazzougui2014cache, )) can be used to obtain both CSFs and MPH functions, and similar bounds on the size (in terms of the number of bits per key) exist for MPH functions.
MPH functions provide a straightforward baseline for the compression problems we consider here. We begin by constructing an MPH from the set of keys to a set of memory locations. Then, we place a compressed representation of the data or value at the corresponding location. This approach, described in further detail by (belazzougui2013compressed, ), has larger overhead in practice than a CSF-based representation but can still provide reliable performance. In our experiments, we compare against such an implementation, which was written in Java and is used in production by Indeed.
3.2. Succinct Data Structures
There is a large body of literature on succinct data structures, which aim to develop compressed implementations of data types that support certain operations in $\mathcal{O}(1)$ time. The CSF representation that we use was developed as a succinct approach to the dictionary lookup problem, which is to implement a compressed key-value store with keyed lookup queries. However, many other types of succinct data structures have been developed, and some of them have been used for practical applications of compression (agarwal2015succinct, ; duan2021succinct, ).
One particularly useful class of methods focuses on succinct string data structures, which support byte-level access on the compressed representation. Techniques such as wavelet trees, Burrows-Wheeler transforms and compressed tries can be used to implement such a data structure, and there are several options available. We compare against the succinct data structure from (agarwal2015succinct, ), which is designed to provide fast random access to byte strings using a variation on the Burrows-Wheeler transform.
4. Algorithm
In this section, we describe our approach to compressing key-value datasets with CARAMEL. For clarity of the mathematical presentation, we will describe our algorithm in terms of partitioning an integer matrix $M \in \mathbb{Z}^{N \times d}$ into $d$ different CSFs, one for each column. We note that CARAMEL can also operate on general hashable types such as strings. However, we can discuss the theoretical properties of CARAMEL in terms of integer matrices without loss of generality by assuming that the integers are the hashed representations of some initial input (which is also how we implement the data structure in practice). We also consider the set of keys to be the integers $\{1, \dots, N\}$. In practice, for applications such as embedding lookup, we use the text string corresponding to each row of $M$ as the key, but we can again consider integer keys in our analysis without loss of generality.
Construction and Query: The construction process consists of the following tasks, which are described by Algorithm 1. First, if the application supports the shuffling of items without changing the representation, we permute the items in each row of $M$ to find the configuration with the lowest column-wise entropy (see Definition 5.3). This improves both the construction time and space of the CSF. Next, we decide whether to use a Bloom filter to pre-filter keys for each column. If the frequency of the most common value exceeds a threshold, we create a Bloom filter containing the keys with non-dominating values and construct a CSF over the keys for which the filter returns 'True.' We will derive the decision criterion and optimal filter parameters in Section 5. Finally, we construct an independent CSF over the permuted and filtered elements in each column of $M$. To query the array with a key $x$, we first check the Bloom filter to determine whether the CSF will be needed. If the Bloom filter returns 'True' (i.e. $x$ likely does not map to the dominating value), then we query the CSF. Otherwise, we return the dominating value. We repeat the process for as many lookups as are required.
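The control flow is easy to express in a few lines. In this runnable sketch, plain dicts and sets stand in for the actual CSF and Bloom filter components, and the 65% threshold anticipates Theorem 5.2:

```python
from collections import Counter

def build_caramel(keys: list, matrix: list, threshold: float = 0.65) -> list:
    """Build one (filter, CSF) pair per column; dicts/sets are stand-ins."""
    columns = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        top, count = Counter(col).most_common(1)[0]
        if count / len(col) > threshold:
            # Only keys with non-dominating values enter the filter + CSF.
            kept = {k: v for k, v in zip(keys, col) if v != top}
            columns.append(("filtered", set(kept), dict(kept), top))
        else:
            columns.append(("plain", dict(zip(keys, col))))
    return columns

def query_caramel(columns: list, key, j: int):
    entry = columns[j]
    if entry[0] == "plain":
        return entry[1][key]
    _, bloom, csf, dominant = entry
    return csf[key] if key in bloom else dominant
```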
Discarding the Keys: The only practical constraint on the keys is that they be hashable. This allows integers, strings and many other types to be used. It should be noted that each CSF does not require the keys to be maintained or stored in any way. This property of the CSF structure allows us to achieve additional memory savings over alternative methods.
Why not use a single CSF? Since we may just as easily treat the entire matrix as a flat set of integer parameters, it is worth questioning whether a single CSF might result in a more efficient representation. However, there are good reasons for using a 2D representation: there are information-theoretic benefits to independently encoding each column when different columns have different vocabulary distributions. For example, suppose we wish to compress a matrix with two columns, where column 1 has the vocabulary $\{a, b\}$ with relative frequencies $\{0.5, 0.5\}$ and column 2 has the vocabulary $\{c, d\}$ with the same frequencies. If we compress the columns independently, every symbol is representable by 1 bit. However, the Huffman code for the merged version uses 2 bits for every symbol (while having the same codebook storage cost). Because CARAMEL is able to implicitly encode the knowledge that different columns have different vocabulary distributions, we can use specialized encodings within each column, enabling more optimizations (e.g. Algorithm 2).
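To check the arithmetic in this example, here is a small Huffman code-length routine (our own illustrative helper, not the library's encoder):

```python
import heapq

def huffman_lengths(freqs: dict) -> dict:
    """Code word lengths of a Huffman code for {symbol: frequency}."""
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

col1 = {"a": 0.5, "b": 0.5}
col2 = {"c": 0.5, "d": 0.5}
merged = {s: f / 2 for col in (col1, col2) for s, f in col.items()}
print(huffman_lengths(col1))   # {'a': 1, 'b': 1} -- 1 bit per symbol
print(huffman_lengths(merged)) # every symbol now costs 2 bits
```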
Exploiting Parallelism: There are also practical reasons to decompose a large set of integer parameters into several independent CSFs. We find it advantageous to parallelize the construction process, since our larger tasks (e.g. MSMarco (nguyen2016ms, ) and the Pile (gao2020pile, )) would need systems with billions of equations to represent the flat version. The CSF array representation is also helpful at query time, where we may query multiple structures in parallel.
5. Theory
In this section, we discuss the space and query time of our CSF array. We also develop a rigorous analysis of CSFs with prefiltering, giving rise to optimal criteria and hyperparameters for Bloom-enhanced CSF structures. Finally, we discuss optimizations to reduce the column-wise entropy of the matrix for applications that support the reordering of items along one of the dimensions of $M$.
Memory and Lookup Time: We begin with an easy consequence of the results in (hreinsson2009storing, ). The space occupied by our data structure is proportional to the sum of column entropies of $M$.
Corollary 5.1.
Given a matrix $M \in \mathbb{Z}^{N \times d}$, let $V_j$ be the multiset of values in column $j$ (i.e. $V_j = \{M_{i,j} : 1 \leq i \leq N\}$). Our representation occupies space proportional to $N \sum_{j=1}^{d} H_0(V_j)$ and answers a single-value query in $\mathcal{O}(1)$ time.
5.1. Entropy Threshold for Bloom-Enhanced CSF
We develop a rigorous theory to analyze prefiltered CSFs, beginning from first principles. As discussed in Section 2, the presence of a dominating value can result in substantial overhead when its normalized frequency exceeds 50%. We may filter keys with this value from the CSF, but we must pay to store a Bloom filter of the remaining keys. Using Theorem 2.3, we can determine the overall cost to store a CSF for a set of key-value pairs. For simplicity of notation, we suppose that there are $s$ unique values $\sigma_1, \dots, \sigma_s$, sorted in order of decreasing frequency. We use $C$ to denote the Huffman encoding, $\ell_i$ for the length of the encoding $C(\sigma_i)$, and $p_i$ for the normalized frequency of $\sigma_i$.
The canonical Huffman dictionary requires space to store the $s$ unique values and a few bits to store the encoding length for each value. Suppose we construct a Bloom filter with false-positive rate $p$ over the keys with non-dominating values. This introduces an overhead of $\log_2(e) \log_2(1/p)$ bits per item in the filter (bloom1970space, ) and causes the Huffman encoding to change from $C$ to a new code $C'$ built for the filtered multiset:
$$V' = \{v_i : v_i \neq \sigma_1\} \,\cup\, \{\sigma_1\}^{F_1}$$
Here, $F_1$ denotes the number of keys with value $\sigma_1$ that erroneously pass through the Bloom filter. We wish to identify when the filtered structure is smaller, i.e. when $\mathbb{E}[\mathrm{size}(\mathrm{CSF}') + \mathrm{size}(\mathrm{filter})] < \mathrm{size}(\mathrm{CSF})$. That is, we wish to find a condition on the value distribution for which this inequality holds in expectation.
Theorem 5.2.
Given a set of $n$ key-value pairs, let $p_1 > 0.5$ be the normalized frequency of the most common value. If
$$\frac{1-p_1}{\ln^2 2} + (1-p_1)\log_2(e)\,\log_2\!\left(\frac{2\, c\, p_1 \ln^2 2}{1-p_1}\right) \;<\; c\, p_1,$$
then $\mathbb{E}[\mathrm{size}(\mathrm{CSF}') + \mathrm{size}(\mathrm{filter})] < \mathrm{size}(\mathrm{CSF})$. Moreover, the optimal filter length is $(1-p_1)\, n \log_2(e) \log_2(1/p^{*})$ bits (i.e. the filter should be designed to contain the $(1-p_1)n$ keys with non-dominating values, with false positive rate $p^{*} = \frac{1-p_1}{2\, c\, p_1 \ln^2 2}$).
We defer the proof of Theorem 5.2 to the appendix.
Practical Interpretation: If we examine the inequality in Theorem 5.2, we find that a Bloom filter reduces the space whenever more than ~65% of the keys share a value. This is a substantial improvement over the existing method of (shibuya2022space, ), which attempts to estimate the filtering threshold and filter parameters from the entropy using application-dependent heuristics. In Section 6, we demonstrate that this condition holds for many applications of practical interest (e.g. NLP tokenization).
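In code, the decision rule reduces to a frequency check plus a filter-sizing formula. This sketch uses a fixed illustrative false-positive rate rather than the optimal $p^{*}$ from the proof:

```python
import math
from collections import Counter

def plan_prefilter(values: list, threshold: float = 0.65):
    """Decide whether to prefilter a column per the ~65% rule of Theorem 5.2.

    Returns None, or (dominant_value, items_in_filter, filter_bits).
    """
    n = len(values)
    top, count = Counter(values).most_common(1)[0]
    if count / n <= threshold:
        return None                 # the CSF alone is already near-optimal
    m = n - count                   # keys with non-dominating values
    fp_rate = 0.01                  # illustrative; the proof derives p*
    # A Bloom filter costs ~log2(e) * log2(1/p) bits per stored item.
    bits = m * math.log2(math.e) * math.log2(1 / fp_rate)
    return top, m, math.ceil(bits)
```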
5.2. Row Permutation to Minimize Column Entropy
There are several applications for which the order of integers in each row of the matrix does not affect the validity of the representation. Such a situation can arise in product recommendation, where each row is a set of pre-computed product IDs, or in genomics, where each row is a set of minhash values. In these situations, we can further reduce the space by finding the order of integers that minimizes $H_0(V_j)$ for each column $j$ of the matrix. We wish to perform the following minimization:
Definition 5.3.
Given a matrix $M \in \mathbb{Z}^{N \times d}$, let $\mathcal{P}(M)$ be the set of matrices $M'$ such that for all rows $i$ and column indices $j$, $M'_{i,j} = M_{i,\pi_i(j)}$, where $\pi_i$ is a permutation function. For notational convenience, let $V_j(M')$ be the multiset of values in column $j$ (i.e. $V_j(M') = \{M'_{i,j} : 1 \leq i \leq N\}$). The row permutation problem is to perform the following minimization:
$$\min_{M' \in \mathcal{P}(M)} \; \sum_{j=1}^{d} H_0\big(V_j(M')\big)$$
We will refer to the problem given in Definition 5.3 as the Row Permutation Problem. The (standard) notation $\pi_i$ refers to a permutation on $\{1, \dots, d\}$, or a bijection from this set of numbers onto itself. Note that the problem allows for a different permutation of the items in each row.
Greedy Method to Minimize Entropy: We propose a greedy heuristic to optimize the column-wise entropy by iteratively swapping entries within the rows of $M$. The core of the method is to identify the value $v$ that can be relocated to group as many identical values into the same column as possible. This requires knowledge of two things: the rows where $v$ is present and eligible for relocation, and the possible destination columns where $v$ could be placed in each row. Once the optimal value and destination column have been identified, we swap the corresponding entries within each eligible row and mark the value and column as ineligible for relocation in the future.
Table 1. Compression across synthetic and real-world datasets. $N$ is the number of keys, $d$ is the number of values per key, and the entropy column reports the sum of column-wise entropies $\sum_j H_0(V_j)$.

Dataset | ($N$, $d$) | $\sum_j H_0(V_j)$ (bits) | Flat | Hash Table | MPH (indeedMPH, ) | Succinct (agarwal2015succinct, ) | CARAMEL
---|---|---|---|---|---|---|---
Synthetic (Uniform Distribution) | 100K, 1K | 9959 | 400 MB | 402 MB | 400.7 MB | 360 MB | 147 MB (2.7x) |
Synthetic (Uniform Distribution) | 10K, 1K | 9892 | 40 MB | 40.13 MB | 40.07 MB | 34.8 MB | 22.8 MB (1.8x) |
Synthetic (Power Law) | 100K, 1K | 2343 | 400 MB | 402 MB | 400.7 MB | 207 MB | 38.24 MB (10.5x) |
Synthetic (Power Law) | 10K, 1K | 2325 | 40 MB | 40.13 MB | 40.07 MB | 19.5 MB | 5.56 MB (7.2x)
Featurized Genomes (o2016reference, ) | 125K, 1K | 8192 | 1.0 GB | 1.03 GB | - | 736 MB | 324.29 MB (3.1x)
Featurized Proteomes (o2016reference, ) | 125K, 1K | 8192 | 1.0 GB | 1.03 GB | - | 736 MB | 324.29 MB (3.1x)
Count Sketch (AOL terms (pass2006picture, )) | , 5 | 7.9 | 5.24 MB | 5.6 MB | 5.25 MB | 5.90 MB | 1.56 MB (3.9x) |
Count Sketch (URL (kaggleURL, ; mamun2016detecting, )) | 40K, 20 | 4.7 | 3.20 MB | 3.28 MB | 3.31 MB | 1.55 MB | 0.55 MB (5.9x)
Amazon Polarity (mcauley2013hidden, ; zhang2015character, ) | 3.6M, 128 | 904.2 | 1.84 GB | 1.92 GB | 1.88 GB | 1.15 GB | 484 MB (3.8x) |
MSMarco (nguyen2016ms, ) | 3.2M, 128 | 1432 | 1.65 GB | 1.714 GB | 1.682 GB | 1.16 GB | 671 MB (2.6x) |
MSMarco (nguyen2016ms, ) | 3.2M, 512 | 5195 | 6.58 GB | 6.64 GB | 6.62 GB | - | 2.45 GB (2.6x) |
MSMarco (nguyen2016ms, ) | 8.8M, 512 | 883.2 | 18.1 GB | 18.27 GB | 18.2 GB | - | 1.12 GB (16x) |
Pile (sample) (gao2020pile, ) | 7M, 128 | 265.2 | 34.8 GB | 34.9 GB | - | - | 2.4 GB (14.5x) |
ABC Headlines Embeddings (DVN/SYBGZL_2018, ) | 1.2M, 20 | 148.3 | 33.2 MB | 33.2 MB | 30.55 MB | 27.8 MB | 24.1 MB (1.4x) |
Word2vec Embeddings (mikolov2013efficient, ) | 4M, 20 | 149.4 | 124.8 MB | 124.8 MB | 102.17 MB | 180 MB | 83.7 MB (1.5x) |
SIFT Embeddings (aumuller2017ann, ) | 1M, 32 | 243.3 | 39.9 MB | 39.9 MB | 36.9 MB | 67.5 MB | 34.1 MB (1.25x) |
Yandex Embeddings (yandexT2I, ) | 1B, 20 | 159.3 | 27.7 GB | 27.7 GB | 30.27 GB | - | 22.2 GB (1.25x) |
S-BERT Embeddings (reimers2019sentence, ) | 2.9M, 96 | 679.5 | 277.1 MB | 277.1 MB | 302.8 MB | 257 MB | 273.4 MB (1.01x) |
A basic version of the algorithm is listed in Algorithm 2. In practice, we employ a variety of short-circuit evaluations and heuristics to reduce the time needed to identify the best value and column to relocate. We also allow for early termination when we can no longer relocate a sufficiently large block of identical values to the same column. While we do not guarantee an optimal solution to Definition 5.3, we will see in the experiments that Algorithm 2 can reduce the memory by up to 40% (Figure 1).
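The following simplified, self-contained pass illustrates the flavor of the greedy strategy; the column-by-column scan order and function name are our own illustrative choices, not the exact steps of Algorithm 2:

```python
from collections import Counter

def greedy_permute(matrix: list) -> list:
    """Simplified greedy pass: for each column (left to right), pick the
    most frequent value among not-yet-fixed entries and swap it into that
    column in every row where it appears."""
    rows = [list(r) for r in matrix]
    for j in range(len(rows[0])):
        pool = Counter(v for r in rows for v in r[j:])
        target, _ = pool.most_common(1)[0]
        for r in rows:
            if target in r[j:]:
                k = r.index(target, j)
                r[j], r[k] = r[k], r[j]  # group identical values in column j
    return rows

# Example: [[1, 2], [2, 3], [9, 2]] -> column 0 becomes [2, 2, 2],
# dropping that column's entropy to zero.
```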
6. Experiments
We provide experiments for various classes of applications in Table 1 and Figure 1, analyzing popular datasets from web search, genomics, information retrieval, and more (aumuller2017ann, ). Additionally, we provide baselines for related methods as discussed in Section 3.
Compression on Synthetic Datasets: As a preliminary exercise to examine the potential for CSFs to compress integer matrices, we conducted two synthetic data experiments where we generated integer matrices sampled first from a discrete uniform distribution and then from a discrete truncated power law distribution, fixing the exponent, restricting the support to a finite vocabulary, and selecting the normalizing constant accordingly. This synthetic experiment closely follows that of (8416577, ) with the notable exception that we examine compressing a two-dimensional matrix as opposed to a one-dimensional array.
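A minimal sketch of the power-law generator follows; the exponent and vocabulary size here are illustrative placeholders, not the paper's exact settings:

```python
import numpy as np

def power_law_matrix(n_rows: int, n_cols: int, vocab: int = 1000,
                     alpha: float = 2.0, seed: int = 0) -> np.ndarray:
    """Sample an integer matrix from a truncated discrete power law."""
    rng = np.random.default_rng(seed)
    support = np.arange(1, vocab + 1)
    probs = support.astype(float) ** -alpha
    probs /= probs.sum()                # normalizing constant
    return rng.choice(support, size=(n_rows, n_cols), p=probs)

M = power_law_matrix(10_000, 1_000)     # one of the synthetic shapes in Table 1
```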
Experiment Setup: Preprocessing work was done in Python and resulted in a transformed integer matrix from which to build our CARAMEL structure. All results in Table 1 were obtained using the publicly available Sux4J library (http://sux4j.di.unimi.it/) in Java. Compression numbers are recorded excluding keys, and we report the sum of the column-wise entropies of the matrix as an additional baseline. Query latencies shown in Figure 1 were obtained using timing modules built in C++. For our NLP experiments, we tokenize the text using the popular SentencePiece library (https://github.com/google/sentencepiece) (kudo2018sentencepiece, ). For our case study comparing against a production read-only database at Amazon, we build CSFs using our own Python implementation, which, unlike existing CSF libraries, supports generic hashable types such as strings.
For our remaining baseline methods, we conduct all of the evaluations in Java. For Succinct, we use the standard Java release by the authors (agarwal2015succinct, ). For MPH tables, we use a library that has been optimized to work in production by Indeed (indeedMPH, ). It should be noted that Succinct can only support files up to 2 GB in size and that Indeed's MPH library can be slow to construct tables for large inputs. For this reason, we were unable to run these baselines on some of our data compression tasks.
Table 2. Case study comparing CARAMEL against a production read-only database (Amazon DB) on query-product ranking features.

Dataset | Records | Raw Size | Method | Indexing Time | Index Size | Query Time | Compression
---|---|---|---|---|---|---|---
Amazon Large | 1.9M | 4.1G | CARAMEL | 321.07s | 1.77G | 2.42ms | 2.31x
Amazon Large | 1.9M | 4.1G | Production DB | 76.34s | 2.5G | 7.0ms | 1.64x
Amazon Small | 1.0M | 1.5G | CARAMEL | 149.62s | 0.88G | 2.10ms | 1.7x
Amazon Small | 1.0M | 1.5G | Production DB | 31.08s | 0.95G | 6.8ms | 1.58x
Product-Quantized Embedding Tables: It is fairly common in enterprise systems to approximate large embedding tables with quantization (jegou2010product, ; guo2020accelerating, ; johnson2019billion, ; kang2020learning, ; ko2021low, ). We apply CARAMEL to quantized embedding tables and report our compression numbers relative to the space of these tables with keys included. Additionally, all embedding tables are quantized to a level that keeps metric distortion reasonable, preserving >80% KNN-recall@100 (jegou2010product, ; kang2020learning, ).
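As a rough sketch of how product quantization turns a float embedding table into the kind of integer matrix CARAMEL compresses, consider the following; random centroids stand in for a trained k-means codebook, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(1_000, 64))       # toy embedding table
n_sub, dim_sub, n_centroids = 8, 8, 256  # 64 dims = 8 sub-vectors of 8 dims

codes = np.empty((emb.shape[0], n_sub), dtype=np.uint8)
for s in range(n_sub):
    block = emb[:, s * dim_sub:(s + 1) * dim_sub]
    # Stand-in codebook: random centroids instead of trained k-means.
    centroids = rng.normal(size=(n_centroids, dim_sub))
    dists = ((block[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    codes[:, s] = dists.argmin(axis=1)   # integer code matrix for CARAMEL
```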
Construction Time: Construction times are reasonable, especially in the context of machine learning systems that frequently go a day or more between updates. For 10M keys, we construct a single CSF in just over 1 minute; for 100M keys, in over 10 minutes; and for 1B keys, in less than 2 hours. For billion-scale data, this construction time is more than reasonable, especially considering that CARAMEL and CSF constructions are trivially parallelizable.
Query Time: We compare query times between our CARAMEL structure and a standard hash table implemented in C++. We look at both the median and the 99th percentile query time (P99), as P99 is the defining factor for latency in web-scale production systems. We also investigate how the query time scales as $N$ increases, to contrast the worst-case lookup behavior of hash tables with the constant-time lookups into CSFs. Timing experiments were done on a machine equipped with an Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz over 20,000 queries.
6.1. Discussion
We observe that CARAMEL achieves strong compression rates ranging from 1.25-16x across a variety of applications, with better rates for large, low-entropy data. We also see that the compression rate tends to improve as we increase the number of rows $N$. From our synthetic dataset benchmarks, we find that our method is particularly effective in representing power law data, which is commonplace in web-scale settings. In addition, we find that our two novel algorithmic contributions to constructing CSFs, namely the greedy permutation heuristic and an optimal decision criterion for applying Bloom filter augmentations, both contributed to further substantial memory reductions, as shown in Figure 1. Finally, and perhaps surprisingly, we observe that CARAMEL achieves a faster lookup latency than an uncompressed hash table, possibly due to improved cache locality. This result suggests that CARAMEL could be a strictly more effective primitive than traditional key-value stores in latency- and memory-critical settings involving integer data, since such stores do not exploit the underlying entropy.
6.2. Case Study on Amazon Product Search
Amazon’s product search engine stores highly frequent queries and their corresponding frequently purchased or clicked products for the purposes of caching results and producing search ranking features (yang2022can, ). During the online ranking model prediction phase, the search engine accesses these features from a read-only lookup table. Due to very tight latency requirements for this lookup process, these database queries typically must be executed in less than 10 milliseconds (luo2022rose, ).
Amazon uses an in-house read-only database to serve these lookups: a file-based map store similar to Berkeley DB (olson1999berkeley, ). We refer to this lookup table as Amazon DB. Amazon DB is an industry-standard solution for read-only key-value lookups used, amongst other applications, for serving precomputed search results produced by machine learning models. We compare CARAMEL with Amazon DB for indexing query-product ranking features. For these experiments, we use the Python CSF library we developed, which, unlike existing tools, is capable of operating on generic hashable types such as strings.
We sampled two datasets of query-product pairs derived from Amazon.com search logs. Each dataset contains queries and the associated products that were clicked in one day. Table 2 shows the statistics of the datasets we evaluate in this case study.
From Table 2, we observe that CARAMEL has more efficient memory usage than Amazon DB. On the larger benchmark dataset, CARAMEL achieves a 2.31x compression rate compared to the 1.64x mark of Amazon DB, with a considerably faster query time. We observe a similar trend on the smaller dataset. We also observe that the space and latency improvements of CARAMEL come at the cost of longer index construction times. However, we note that CARAMEL's construction costs are still on the order of minutes for these datasets, which is reasonable given the one-time cost of constructing the index. Moreover, we believe that CARAMEL can achieve faster construction times by better utilizing the multiprocessing capabilities available in Python, an optimization we leave for future work.
7. Conclusion
We introduce CARAMEL, a new data structure for read-only key-value lookups based on the key insight of combining multiple compressed static functions (CSFs). We demonstrate in this paper that CSFs are practically useful data structures that provide a powerful primitive for improving the scalability of numerous data-intensive applications. We also show that CARAMEL performs favorably when compared to existing algorithms and systems, both those proposed in the academic literature and those in active production use at a leading online technology company, which underscores the relevance of our work for industry practitioners. Furthermore, we push the theoretical foundations of CSFs forward by proving precise guarantees on when to apply Bloom filter enhancements. We hope that our work will bring the community's attention to both the prevalence of low-entropy key-value mappings across data-intensive applications and the potential for representations like CARAMEL to aid in the important effort of scaling data infrastructure in the future.
References
- [1] YS Abu-Mostafa and RJ McEliece. Maximal codeword lengths in huffman codes. Computers & Mathematics with Applications, 39(11):129–134, 2000.
- [2] Rachit Agarwal, Anurag Khandelwal, and Ion Stoica. Succinct: Enabling queries on compressed data. In 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15), pages 337–350, 2015.
- [3] Martin Aumüller, Erik Bernhardsson, and Alexander Faithfull. Ann-benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. In International conference on similarity search and applications, pages 34–49. Springer, 2017.
- [4] Djamal Belazzougui, Paolo Boldi, Giuseppe Ottaviano, Rossano Venturini, and Sebastiano Vigna. Cache-oblivious peeling of random hypergraphs. In 2014 Data Compression Conference, pages 352–361. IEEE, 2014.
- [5] Djamal Belazzougui and Rossano Venturini. Compressed static functions with applications. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 229–240. SIAM, 2013.
- [6] Burton H Bloom. Space/time trade-offs in hash coding with allowable errors. Communications of the ACM, 13(7):422–426, 1970.
- [7] Fabiano C Botelho, Rasmus Pagh, and Nivio Ziviani. Practical perfect hashing in nearly optimal space. Information Systems, 38(1):108–131, 2013.
- [8] C Titus Brown, Adina Howe, Qingpeng Zhang, Alexis B Pyrkosz, and Timothy H Brom. A reference-free algorithm for computational normalization of shotgun sequencing data. arXiv preprint arXiv:1203.4802, 2012.
- [9] Ting Chen, Martin Renqiang Min, and Yizhou Sun. Learning k-way d-dimensional discrete codes for compact embedding representations. In International Conference on Machine Learning, pages 854–863. PMLR, 2018.
- [10] Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, and Matei Zaharia. Selection via proxy: Efficient data selection for deep learning. arXiv preprint arXiv:1906.11829, 2019.
- [11] Shabnam Daghaghi, Tharun Medini, Nicholas Meisburger, Beidi Chen, Mengnan Zhao, and Anshumali Shrivastava. A tale of two efficient and informative negative sampling distributions. In International Conference on Machine Learning, pages 2319–2329. PMLR, 2021.
- [12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
- [13] Martin Dietzfelbinger and Rasmus Pagh. Succinct data structures for retrieval and approximate membership. In International Colloquium on Automata, Languages, and Programming, pages 385–396. Springer, 2008.
- [14] Yicun Duan and Xiangjun Peng. Succinct compression: Near-optimal and lossless compression of deep neural networks during inference runtime. 2021.
- [15] Olivier Dubois and Jacques Mandler. The 3-xorsat threshold. Comptes Rendus Mathematique, 335(11):963–966, 2002.
- [16] RA Leo Elworth, Qi Wang, Pavan K Kota, CJ Barberan, Benjamin Coleman, Advait Balaji, Gaurav Gupta, Richard G Baraniuk, Anshumali Shrivastava, and Todd J Treangen. To petabytes and beyond: recent advances in probabilistic and signal processing algorithms and their application to metagenomics. Nucleic acids research, 48(10):5217–5234, 2020.
- [17] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
- [18] Marco Genuzio, Giuseppe Ottaviano, and Sebastiano Vigna. Fast scalable construction of (minimal perfect hash) functions. In International Symposium on Experimental Algorithms, pages 339–352. Springer, 2016.
- [19] Marco Genuzio, Giuseppe Ottaviano, and Sebastiano Vigna. Fast scalable construction of ([compressed] static | minimal perfect hash) functions. Information and Computation, 273:104517, 2020.
- [20] Marco Genuzio and Sebastiano Vigna. Engineering compressed static functions. In 2018 Data Compression Conference, pages 52–61, 2018.
- [21] Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. Accelerating large-scale inference with anisotropic vector quantization. In International Conference on Machine Learning, pages 3887–3896. PMLR, 2020.
- [22] Gaurav Gupta, Minghao Yan, Benjamin Coleman, Bryce Kille, RA Leo Elworth, Tharun Medini, Todd Treangen, and Anshumali Shrivastava. Fast processing and querying of 170tb of genomics data via a repeated and merged bloom filter (rambo). In Proceedings of the 2021 International Conference on Management of Data, pages 2226–2234, 2021.
- [23] Jóhannes B Hreinsson, Morten Krøyer, and Rasmus Pagh. Storing a compressed function with constant time access. In European Symposium on Algorithms, pages 730–741. Springer, 2009.
- [24] Herve Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE transactions on pattern analysis and machine intelligence, 33(1):117–128, 2010.
- [25] Angela H Jiang, Daniel L-K Wong, Giulio Zhou, David G Andersen, Jeffrey Dean, Gregory R Ganger, Gauri Joshi, Michael Kaminksy, Michael Kozuch, Zachary C Lipton, et al. Accelerating deep learning by focusing on the biggest losers. arXiv preprint arXiv:1910.00762, 2019.
- [26] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535–547, 2019.
- [27] Wang-Cheng Kang, Derek Zhiyuan Cheng, Ting Chen, Xinyang Yi, Dong Lin, Lichan Hong, and Ed H Chi. Learning multi-granular quantized embeddings for large-vocab categorical features in recommender systems. In Companion Proceedings of the Web Conference 2020, pages 562–566, 2020.
- [28] Angelos Katharopoulos and François Fleuret. Not all samples are created equal: Deep learning with importance sampling. In International conference on machine learning, pages 2525–2534. PMLR, 2018.
- [29] Anthony Ko, Iman Keivanloo, Vihan Lakshman, and Eric Schkufza. Low-precision quantization for efficient nearest neighbor search. arXiv preprint arXiv:2110.08919, 2021.
- [30] Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018.
- [31] Rohit Kulkarni. A Million News Headlines. https://doi.org/10.7910/DVN/SYBGZL, 2018.
- [32] Chen Luo, Vihan Lakshman, Anshumali Shrivastava, Tianyu Cao, Sreyashi Nag, Rahul Goutam, Hanqing Lu, Yiwei Song, and Bing Yin. Rose: Robust caches for amazon product search. In The Web Conference 2022, 2022.
- [34] Mohammad Saiful Islam Mamun, Mohammad Ahmad Rathore, Arash Habibi Lashkari, Natalia Stakhanova, and Ali A Ghorbani. Detecting malicious urls using lexical analysis. In International Conference on Network and System Security, pages 467–482. Springer, 2016.
- [35] Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In Proceedings of the 7th ACM conference on Recommender systems, pages 165–172, 2013.
- [36] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
- [37] Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G Azzolini, et al. Deep learning recommendation model for personalization and recommendation systems. arXiv preprint arXiv:1906.00091, 2019.
- [38] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. Ms marco: A human generated machine reading comprehension dataset. In CoCo@ NIPS, 2016.
- [39] Priyanka Nigam, Yiwei Song, Vijai Mohan, Vihan Lakshman, Weitian Ding, Ankit Shingavi, Choon Hui Teo, Hao Gu, and Bing Yin. Semantic product search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2876–2885, 2019.
- [40] Nuala A O’Leary, Mathew W Wright, J Rodney Brister, Stacy Ciufo, Diana Haddad, Rich McVeigh, Bhanu Rajput, Barbara Robbertse, Brian Smith-White, Danso Ako-Adjei, et al. Reference sequence (refseq) database at ncbi: current status, taxonomic expansion, and functional annotation. Nucleic acids research, 44(D1):D733–D745, 2016.
- [41] Michael A Olson, Keith Bostic, and Margo I Seltzer. Berkeley db. In USENIX Annual Technical Conference, FREENIX Track, pages 183–191, 1999.
- [42] Brian D Ondov, Todd J Treangen, Páll Melsted, Adam B Mallonee, Nicholas H Bergman, Sergey Koren, and Adam M Phillippy. Mash: fast genome and metagenome distance estimation using minhash. Genome biology, 17(1):1–14, 2016.
- [43] Greg Pass, Abdur Chowdhury, and Cayley Torgeson. A picture of search. In Proceedings of the 1st international conference on Scalable information systems, pages 1–es, 2006.
- [44] Ely Porat. An optimal bloom filter replacement based on matrix solving. In International Computer Science Symposium in Russia, pages 263–273. Springer, 2009.
- [45] Kim D Pruitt, Tatiana Tatusova, and Donna R Maglott. Ncbi reference sequences (refseq): a curated non-redundant sequence database of genomes, transcripts and proteins. Nucleic acids research, 35(suppl_1):D61–D65, 2006.
- [46] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019.
- [47] Falk Scholer, Hugh E Williams, John Yiannis, and Justin Zobel. Compression of inverted indexes for fast query evaluation. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 222–229, 2002.
- [48] Nicola Segata, Levi Waldron, Annalisa Ballarini, Vagheesh Narasimhan, Olivier Jousson, and Curtis Huttenhower. Metagenomic microbial community profiling using unique clade-specific marker genes. Nature methods, 9(8):811–814, 2012.
- [49] Yoshihiro Shibuya, Djamal Belazzougui, and Gregory Kucherov. Space-efficient representation of genomic k-mer count tables. Algorithms for Molecular Biology, 17(1):1–15, 2022.
- [50] Alex Shinn. Indeed mph: Fast and compact immutable key-value stores. https://engineering.indeedblog.com/blog/2018/02/indeed-mph/, 2018. Accessed: 2022-10-12.
- [51] Raphael Shu and Hideki Nakayama. Compressing word embeddings via deep compositional code learning. In International Conference on Learning Representations, 2018.
- [52] Manu Siddhartha. Kaggle Malicious URL Dataset. https://www.kaggle.com/sid321axn/malicious-urls-dataset, 2022.
- [53] Xinying Song, Alex Salcianu, Yang Song, Dave Dopson, and Denny Zhou. Linear-time wordpiece tokenization. arXiv e-prints, pages arXiv–2012, 2020.
- [54] Ben Thompson. integer-compression. https://github.com/invertedtomato/integer-compression, 2022.
- [55] Hugh E Williams and Justin Zobel. Compressing integers for fast file access. The Computer Journal, 42(3):193–201, 1999.
- [56] Ian H Witten, Ian H Witten, Alistair Moffat, Timothy C Bell, Timothy C Bell, Ed Fox, and Timothy C Bell. Managing gigabytes: compressing and indexing documents and images. Morgan Kaufmann, 1999.
- [57] Yandex. Yandex Text-To-Image 1B Dataset. https://research.yandex.com/datasets/biganns, 2021.
- [58] Tao Yang, Chen Luo, Hanqing Lu, Parth Gupta, Bing Yin, and Qingyao Ai. Can clicks be both labels and features? unbiased behavior feature collection and uncertainty-aware learning to rank. 2022.
- [59] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28, 2015.
Appendix
Appendix A Proof of Main Theorem
In this section, we provide a proof of Theorem 5.2. We will use the optimality of Huffman codes, a well-known fact which we state below in Theorem A.1.
Theorem A.1.
Given frequencies $p_1, \dots, p_s$ of values $\sigma_1, \dots, \sigma_s$ and a Huffman code with lengths $\ell_1, \dots, \ell_s$,
$$\sum_{i=1}^{s} p_i \ell_i \;\leq\; \sum_{i=1}^{s} p_i \tilde{\ell}_i$$
for any other uniquely decodable code with lengths $\tilde{\ell}_1, \dots, \tilde{\ell}_s$.
We will use this fact to prove our theorem, which we restate below for convenience.
Theorem A.2.
Given a set of $n$ key-value pairs, let $p_1 > 0.5$ be the normalized frequency of the most common value. If
$$\frac{1-p_1}{\ln^2 2} + (1-p_1)\log_2(e)\,\log_2\!\left(\frac{2\, c\, p_1 \ln^2 2}{1-p_1}\right) \;<\; c\, p_1,$$
then $\mathbb{E}[\mathrm{size}(\mathrm{CSF}') + \mathrm{size}(\mathrm{filter})] < \mathrm{size}(\mathrm{CSF})$. Moreover, the optimal filter length is $(1-p_1)\, n \log_2(e) \log_2(1/p^{*})$ bits (i.e. the filter should be designed to contain the $(1-p_1)n$ keys with non-dominating values, with false positive rate $p^{*} = \frac{1-p_1}{2\, c\, p_1 \ln^2 2}$).
Proof.
Recall from Theorem 2.3 that the cost to store the compressed static function of a multiset with $s$ distinct values is
$$\mathrm{size}(\mathrm{CSF}) = c \sum_{i=1}^{s} n\, p_i \ell_i + \mathcal{O}(s).$$
Suppose we construct a Bloom filter with false-positive rate $p$ over the $(1-p_1)n$ keys with non-dominating values. This introduces an overhead of $\log_2(e) \log_2(1/p)$ bits per item in the filter [6] and causes the Huffman encoding to change from $C$ (with lengths $\ell_i$) to $C'$ (with lengths $\ell'_i$).
Here, $F_1$ denotes the number of keys with value $\sigma_1$ that erroneously pass through the Bloom filter. It should be noted that $F_1$ is a random variable, where the randomness comes from the hash functions used in the Bloom filter construction. Because $(1-p_1)n + F_1$ keys are fed into the CSF, the size of the resulting CSF is also a random variable.
We wish to show that $\mathbb{E}[\mathrm{size}(\mathrm{CSF}') + \mathrm{size}(\mathrm{filter})] < \mathrm{size}(\mathrm{CSF})$ when the condition in the theorem holds. We will do this by finding a condition on the distribution for which the inequality holds in expectation. While we could simply state the condition in the theorem and show that the inequality holds, we will instead derive the condition from the inequality for clarity of presentation.
Using the optimality of Huffman codes (Theorem A.1), we have that
$$\sum_{i=2}^{s} n\, p_i \ell'_i + F_1 \ell'_1 \;\leq\; \sum_{i=2}^{s} n\, p_i \ell_i + F_1 \ell'_1,$$
because the code designed for the filtered multiset $V'$ will have smaller average space than the code designed for $V$. Therefore, we may satisfy the following inequality to ensure that $\mathrm{size}(\mathrm{CSF}') + \mathrm{size}(\mathrm{filter}) < \mathrm{size}(\mathrm{CSF})$, where all we have done is to replace the code lengths $\ell'_i$ with the code lengths $\ell_i$ for $i \geq 2$:
$$c\left(\sum_{i=2}^{s} n\, p_i \ell_i + F_1 \ell'_1\right) + (1-p_1)\, n \log_2(e) \log_2(1/p) \;<\; c \sum_{i=1}^{s} n\, p_i \ell_i.$$
Subtracting the common terms from both sides,
$$c\, F_1 \ell'_1 + (1-p_1)\, n \log_2(e) \log_2(1/p) \;<\; c\, n\, p_1 \ell_1 = c\, n\, p_1,$$
where the final equality uses the fact that the dominating value receives a 1-bit code word when $p_1 > 0.5$. We will also take the expectation of both sides of the inequality, noting that the only random variable is $F_1$. We note that $\mathbb{E}[F_1] = p\, p_1 n$ because the Bloom filter has probability $p$ to pass false positives from the set of $p_1 n$ keys that map to value $\sigma_1$.
To proceed, we will prove the bound $\ell'_1 \leq 2$ bits. Because we are using canonical Huffman codes, we only need to store the lengths of each code in $C$ and $C'$, not the codes themselves. By replacing $C$ with $C'$, we increase the depth of the Huffman tree for $\sigma_1$ by at most 1. Therefore, $\ell'_1 \leq \ell_1 + 1 = 2$.
From a practical perspective, it should be noted that this is a very loose bound; equality holds only under restrictive conditions on the new frequency of $\sigma_1$ [1]. Using this bound and dividing through by $n$, we obtain the following inequality:
$$2\, c\, p\, p_1 + (1-p_1) \log_2(e) \log_2(1/p) \;<\; c\, p_1.$$
The value of $p$ that minimizes the left hand side of the inequality corresponds to the best parameter setting for the Bloom filter. By taking derivatives, we find that
$$p^{*} = \frac{1 - p_1}{2\, c\, p_1 \ln^2 2}$$
minimizes the left hand side. To obtain the condition in the theorem, we substitute $p^{*}$ into the inequality and simplify.
∎