
kNN-LM Does Not Improve Open-ended Text Generation

Shufan Wang1    Yixiao Song1    Andrew Drozdov1    
Aparna Garimella2    Varun Manjunatha2    Mohit Iyyer1
University of Massachusetts Amherst1    Adobe Research2    
{shufanwang, yixiaosong, adrozdov, miyyer}@umass.edu
{garimell,vmanjuna}@adobe.com
Abstract

In this paper, we study the generation quality of interpolation-based retrieval-augmented language models (LMs). These methods, best exemplified by the kNN-LM (Khandelwal et al., 2020), interpolate the LM’s predicted distribution of the next word with a distribution formed from the most relevant retrievals for a given prefix. While the kNN-LM and related methods yield impressive decreases in perplexity, we discover that they do not exhibit corresponding improvements in open-ended generation quality, as measured by both automatic evaluation metrics (e.g., MAUVE) and human evaluations. Digging deeper, we find that interpolating with a retrieval distribution actually increases perplexity compared to a baseline Transformer LM for the majority of tokens in the WikiText-103 test set, even though the overall perplexity is lower due to a smaller number of tokens for which perplexity dramatically decreases after interpolation. However, when decoding a long sequence at inference time, significant improvements on this smaller subset of tokens are washed out by slightly worse predictions on most tokens. Furthermore, we discover that the entropy of the retrieval distribution increases faster than that of the base LM as the generated sequence becomes longer, which indicates that retrieval is less reliable when using model-generated text as queries (i.e., is subject to exposure bias). We hope that our analysis spurs future work on improved decoding algorithms and interpolation strategies for retrieval-augmented language models.

1 Introduction

Retrieval-augmented language models, which integrate non-parametric dense retrieval with autoregressive next-token prediction, have been validated with strong empirical performance across a variety of tasks Metzler et al. (2022); Basu et al. (2022); Mialon et al. (2023) in addition to achieving low held-out perplexities on LM benchmarks. In this paper, we study interpolation-based LMs, a subtype of retrieval-augmented LMs that compute the probability of the next token by interpolating between the softmax distribution of the original LM and a token distribution formed by retrieving over an external datastore. These methods, perhaps best exemplified by the kNN-LM Khandelwal et al. (2020), are particularly attractive because they allow any pretrained LM to be retrofitted with a retrieval module without further training.

Despite these advantages, there is limited understanding of the text generation quality of interpolation-based LMs. In this study, we evaluate the quality of generated text from two such methods, the kNN-LM and TRIME Zhong et al. (2022), against the output of baseline LMs that do not use retrieval. Our evaluation involves open-ended text completions generated using different decoding algorithms on the WikiText-103 dataset. We discover that interpolation-based LMs do not improve the quality of generated text, as measured by both automatic text generation metrics such as MAUVE (Pillutla et al., 2021) and human evaluation.

This result raises the question of why text generation quality does not improve, given that the perplexity of interpolation-based LMs is substantially lower than that of the baselines. Our analysis of the kNN-LM suggests two potential reasons for this lack of improvement:

  1.

    The kNN-LM actually worsens the predictions of the majority of tokens in the WikiText-103 test set. On aggregate, perplexity improves because of significantly improved predictions on a smaller subset of tokens. However, when generating a long sequence of tokens, these improvements are washed out by the worsened predictions on other tokens.

  2.

    The quality of the retrieval distribution deteriorates faster than that of the LM’s predicted distribution as the length of the generation increases; in other words, the retrieval distribution is more vulnerable to exposure bias and can be easily thrown off by artifacts present in model-generated text.

Unlike previous work that relies on perplexity to evaluate language modeling or on BLEU to evaluate the machine translation quality of kNN-LM-based models Khandelwal et al. (2021), our work specifically studies the open-ended text generation capability of kNN-LMs with a range of automatic evaluation metrics as well as human evaluation. We demonstrate that, though they significantly lower perplexity, retrieval components might also impair the text generation performance of kNN-LMs. This finding suggests potential future directions for using retrieval during text generation, such as developing more robust retrieval components or employing retrieval mechanisms more selectively during decoding.

2 Related Work

We present the most extensive study to date of open-ended text generation from interpolation-based LMs such as the kNN-LM (Khandelwal et al., 2020). (The kNN-LM is also evaluated using MAUVE in Lan et al. (2023); however, our work provides much more extensive analysis in the open-ended text generation setting.) Our results reveal that although these methods are effective at reducing perplexity, they can also be detrimental to text generation. Previous work finds that retrieval LMs are improved by selectively incorporating retrieval when conditions are favorable He et al. (2021a); Alon et al. (2022); Drozdov et al. (2022); Mallen et al. (2023), although these studies only examine the teacher-forced setting or other tasks, e.g., question answering. The kNN-MT Khandelwal et al. (2021) explores machine translation, a constrained task with short inputs, and thus not a good test of open-ended long-form generation.

The kNN-LM effectively scales retrieval to billions of tokens using a token-level non-parametric interpolation technique first introduced by Grave et al. (2017). Alternative retrieval-augmented models experiment with training the retriever Zhong et al. (2022); Ram et al. (2023); Shi et al. (2023), interpolating vectors instead of token probabilities Yogatama et al. (2021), scaling to trillions of tokens Borgeaud et al. (2021), exploiting retrieval for strong few-shot learning Izacard et al. (2022), and so on Chen et al. (2017); Guu et al. (2020); Lewis et al. (2020); Izacard and Grave (2021); Rae et al. (2021); Wu et al. (2022); Trivedi et al. (2022); He et al. (2022). Among these, the kNN-LM stands out as a relatively simple and fundamental approach. Our findings indicate important weaknesses of retrieval for text generation.

Reference-based metrics are not well suited to evaluating open-ended text generation Novikova et al. (2017). Instead, effective automated approaches compare the distributions of machine-generated and human-written text using samples McCoy et al. (2021); Pillutla et al. (2021); Pimentel et al. (2023). Human evaluation remains the gold standard for natural language generation Hashimoto et al. (2019); Celikyilmaz et al. (2020); Krishna et al. (2023).

3 Experimental setup

Using a variety of commonly used text generation evaluation metrics, we evaluate the text generation capability of interpolation-based LMs and compare them to baseline LMs (i.e., without k-nearest-neighbor retrieval from an external datastore). In this section, we describe our experimental setup, including models, automatic evaluation metrics, data selection, and hyperparameters.

3.1 Models

We experiment with two interpolation-based LMs: the kNN-LM of Khandelwal et al. (2020), which augments an existing pretrained LM with a retrieval module without any additional training, and TRIME Zhong et al. (2022), a recent improvement over the kNN-LM that trains the retriever and LM jointly to further decrease perplexity.

kNN-LM:

The kNN-LM is a pretrained language model that uses retrieval to improve word prediction. We follow the procedure from Khandelwal et al. (2020) and use the LM to encode token-level representations from a document collection (e.g., the WikiText-103 training data) into a datastore, where each token in the collection is converted into a key-value pair: a context vector k_i representing the first n-1 words and a value v_i, which is the n-th word. (Alternative distance functions, token representations, and interpolation options for the kNN-LM are explored in Xu et al. (2023). We do not expect those settings to change the trends we observe, but as we mention in §6, tuning for text generation could be beneficial.) During evaluation, the model calculates Euclidean distances d(k_i, q_j) between the query vector q_j and all the keys k_1, k_2, ..., k_{|V|} in the datastore. The values of the retrieved entries define a new distribution over the next word:

P_{KNN}(w_t \mid q_t) \propto \sum_{(k_i, v_i)} \mathbb{1}_{w_t = v_i} \exp(-d(k_i, q_t))    (1)

The model interpolates the LM’s predicted distribution over the next token P_{LM}(w_t \mid q_t) with the retrieval distribution using a tunable hyperparameter \lambda:

P'(w_t \mid q_t) = \lambda P_{KNN}(w_t \mid q_t) + (1 - \lambda) P_{LM}(w_t \mid q_t)    (2)

To generate text from the kNN-LM, we apply a decoding strategy (e.g., greedy decoding or truncated sampling algorithms) using the final interpolated probability distribution P'(w_t \mid q_t).
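To make Equations 1 and 2 concrete, the sketch below shows the retrieval-then-interpolate step in NumPy. It is our own illustration, not the released kNN-LM code: the toy datastore, random vectors, and brute-force nearest-neighbor search stand in for the FAISS index over WikiText-103, and all names are ours.

```python
import numpy as np

def knn_distribution(query, keys, values, vocab_size, k=1024):
    """Equation 1: turn the k nearest datastore entries into a next-token distribution."""
    dists = np.linalg.norm(keys - query, axis=1)          # Euclidean distance to every key
    nearest = np.argsort(dists)[:k]                       # indices of the k closest keys
    scores = np.exp(-dists[nearest])                      # exp(-d(k_i, q_t))
    p_knn = np.zeros(vocab_size)
    for idx, s in zip(nearest, scores):
        p_knn[values[idx]] += s                           # aggregate mass per value token v_i
    return p_knn / p_knn.sum()

def interpolate(p_lm, p_knn, lam=0.25):
    """Equation 2: mix the LM softmax with the retrieval distribution."""
    return lam * p_knn + (1.0 - lam) * p_lm

# Toy example: 10k-token vocab, 50k-entry datastore of (context vector, next token) pairs.
rng = np.random.default_rng(0)
vocab, dim = 10_000, 64
keys = rng.normal(size=(50_000, dim)).astype(np.float32)   # context vectors k_i
values = rng.integers(0, vocab, size=50_000)                # next-token values v_i
query = rng.normal(size=dim).astype(np.float32)             # hidden state for the current prefix
p_lm = rng.dirichlet(np.ones(vocab))                         # stand-in for the LM softmax

p_final = interpolate(p_lm, knn_distribution(query, keys, values, vocab))
next_token = int(p_final.argmax())                           # greedy decoding on the mixture
```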

TRIME:

Note that in the kNN-LM, the LM is trained without retrieval; the retrieval component is bolted on after training. Zhong et al. (2022) note that this approach is suboptimal, as the LM does not learn how to best use the retrieval. Thus, they propose the TRIME model, which uses an efficient in-batch strategy to incorporate retrievals during training. While the kNN-LM relies on just one type of retrieval (from an external datastore), TRIME can retrieve from local and long-range context as well as external context. We use the TRIME_EXT configuration in all of our experiments, which also uses a linear interpolation between the LM and retrieval distributions (as in Equation 2) to produce the final probability distribution. The baseline LM (no external retrieval) can still retrieve from example-level local and long-range context, but it has no access to the large-scale external datastore.

3.2 Constructing an evaluation dataset

We sample from WikiText-103 (Merity et al., 2016) to construct an evaluation dataset. We choose WikiText-103 because it is the most commonly used dataset for evaluating interpolation-based LMs; indeed, the main experiments of both kNN-LM and TRIME demonstrate that the retrieval component decreases held-out perplexity on this dataset compared to the baseline LM. Specifically, we randomly sample 5K examples from the validation and test sets of WikiText-103 (5K is the minimum number of generations recommended by Pillutla et al. (2021) to obtain meaningful comparisons), and we use the first 100 tokens of each example as a prefix that the model must condition on to generate a 150-token-long continuation. As some of our metrics require reference text, we also store the ground-truth 150 tokens (the gold suffix) that follow the prefix in each example.
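A minimal sketch of this prefix/suffix construction (our own illustration; the tokenizer interface and the short-example filter are assumptions, not the released preprocessing code):

```python
import random

def build_eval_set(examples, tokenizer, n=5000, prefix_len=100, suffix_len=150, seed=0):
    """Sample n examples and split each into a 100-token prefix and a 150-token gold suffix."""
    random.seed(seed)
    eval_set = []
    for ex in random.sample(examples, n):
        tokens = tokenizer(ex)                      # assumed: returns a list of token ids
        if len(tokens) < prefix_len + suffix_len:   # skip examples that are too short
            continue
        eval_set.append({
            "prefix": tokens[:prefix_len],
            "gold_suffix": tokens[prefix_len:prefix_len + suffix_len],
        })
    return eval_set
```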

3.3 Automatic evaluation metrics

For both kNN-LM and TRIME, we compare the quality of text generated by the base LM with and without the k-NN retrieval component over the external datastore. We measure quality via the following automatic metrics:

MAUVE:

MAUVE (Pillutla et al., 2021) is an evaluation metric for open-ended text generation that correlates highly with human judgments of text quality. It measures the distributional similarity between generated text and reference text; higher MAUVE scores indicate that the two distributions are closer.
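For reference, MAUVE can be computed with the authors' mauve-text package roughly as follows; the specific arguments shown (e.g., max_text_length) and the file-based loader are illustrative assumptions, not necessarily the settings used in our experiments.

```python
import mauve  # pip install mauve-text

def load_texts(path):
    """Read one text per line (file paths here are placeholders)."""
    with open(path) as f:
        return [line.strip() for line in f]

# Thousands of continuations per side in our setup; MAUVE needs a large sample.
gold_suffixes = load_texts("gold_suffixes.txt")
generated_suffixes = load_texts("generated_suffixes.txt")

out = mauve.compute_mauve(
    p_text=gold_suffixes,        # reference (human) continuations
    q_text=generated_suffixes,   # model-generated continuations
    max_text_length=256,         # truncate long texts before featurization
    verbose=False,
)
print(out.mauve)                 # higher is better (distributions are closer)
```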

RankGen:

Given a prefix and several possible continuations (suffixes), RankGen (Krishna et al., 2022) outputs a score for each suffix, measuring the relevance between the prefix and suffix. Higher RankGen scores indicate stronger relevance between the generated suffix and the given prefix. We thus measure the RankGen score between the prefix and the generated suffix for each of the two models.

GPT-3 perplexity:

We also use GPT-3 (Brown et al., 2020), a large-scale pretrained language model, to compute the perplexity of text generated with and without interpolation, conditioned on the same prefix. Lower GPT-3 perplexity indicates stronger relevance between the prefix and the generated suffix, as well as better fluency of the generated suffix. We use the 6.7B gpt3-curie model via OpenAI’s API to measure perplexity.

Entity-F1:

Previous work (Nan et al., 2021; Lee et al., 2022) uses the percentage of hallucinated named entities (entities that appear in the generated text but not in the reference text) or the ratio of named-entity overlap between the generated text and reference text to estimate the factuality of the generated text. In our work, we compute the F1 score between the named entities of the generated text and the reference text as a proxy for entity hallucination. Higher F1 scores may correlate with fewer instances of hallucinated entities.
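A sketch of this Entity-F1 computation; the use of spaCy and case-insensitive set matching are our assumptions for illustration rather than a prescribed implementation.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any NER-capable pipeline works

def entity_f1(generated: str, reference: str) -> float:
    """F1 overlap between the named-entity sets of the generated and reference text."""
    gen_ents = {ent.text.lower() for ent in nlp(generated).ents}
    ref_ents = {ent.text.lower() for ent in nlp(reference).ents}
    if not gen_ents or not ref_ents:
        return 0.0
    overlap = len(gen_ents & ref_ents)
    if overlap == 0:
        return 0.0
    precision = overlap / len(gen_ents)
    recall = overlap / len(ref_ents)
    return 2 * precision * recall / (precision + recall)
```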

Seq-Rep-1:

We follow Welleck et al. (2020) and use Seq-Rep-1, the fraction of repeated unigrams in the generated text, as a metric for lexical diversity. Higher Seq-Rep-1 scores indicate lower diversity (more repetition) in the generated text.
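Concretely, under this definition Seq-Rep-1 can be computed as below (a sketch; whitespace tokenization is an illustrative simplification):

```python
def seq_rep_1(text: str) -> float:
    """Fraction of unigram tokens that are repeats (1 - unique/total); higher = more repetition."""
    tokens = text.split()            # simple whitespace tokenization for illustration
    if not tokens:
        return 0.0
    return 1.0 - len(set(tokens)) / len(tokens)
```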

3.4 Model configurations and hyperparameters

In this work, we do not train our own interpolation-based LMs but rather leverage pretrained model and datastore checkpoints released by prior work.

Base LM details:

For the kNN-LM, we use the implementation from Alon et al. (2022), which relies on a backbone 117M-parameter GPT-2 small model (Radford et al., 2019) fine-tuned on the WikiText-103 training data. The external datastore is constructed with the same backbone model, and both the pretrained LM and datastore are publicly released by Alon et al. (2022) (see the gpt2-finetuned-wikitext103 model available at https://github.com/neulab/knn-transformers). For TRIME, we use the 247M-parameter TRIME_EXT model trained from scratch on WikiText-103 and publicly released by Zhong et al. (2022). Our “non-retrieval” baseline is the same model without external retrieval; in other words, it has access only to the local memory (recent tokens) and long-range memory (in-batch tokens). In both the kNN-LM and TRIME setups, the external datastore is constructed from the WikiText-103 training set; the TRIME datastore has 103M entries, while the kNN-LM datastore has 117M entries (the discrepancy is due to tokenization differences between the two models).

Perplexity improvements from retrieval:

Both models studied in this paper substantially decrease perplexity on WikiText-103’s validation set when interpolation is enabled. For the kNN-LM, the base GPT-2 perplexity is 14.8, and it decreases to 12.6 (-2.2) after interpolation. Meanwhile, TRIME decreases perplexity from 17.0 (no retrieval) to 15.5 (-1.5) after interpolation.

Hyperparameters:

To generate text, we use the hyperparameters recommended by the authors, which yield low perplexities on the WikiText-103 test set. For the kNN-LM, the softmax temperature is set to 1.0 and the interpolation coefficient λ between the LM distribution and the retrieval distribution is set to 0.25. For TRIME, the softmax temperature is set to 1.25 and λ is 0.3. For most of our experiments (e.g., those in Table 1), unless otherwise specified, we decode continuations using nucleus sampling (Holtzman et al., 2020) with p = 0.8.
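For completeness, the sketch below shows standard nucleus (top-p) sampling applied to the interpolated distribution from Equation 2; it is a generic illustration of top-p truncation, not the exact decoding code of either released implementation.

```python
import numpy as np

def nucleus_sample(p, top_p=0.8, seed=None):
    """Sample from the smallest set of tokens whose cumulative probability exceeds top_p."""
    rng = np.random.default_rng(seed)
    order = np.argsort(p)[::-1]                           # tokens by descending probability
    cumulative = np.cumsum(p[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1  # nucleus boundary (at least 1 token)
    nucleus = order[:cutoff]
    probs = p[nucleus] / p[nucleus].sum()                 # renormalize within the nucleus
    return int(rng.choice(nucleus, p=probs))

# Example: sample from a toy interpolated distribution (Equation 2 with lambda = 0.25).
rng = np.random.default_rng(0)
p_lm = rng.dirichlet(np.ones(10_000))
p_knn = rng.dirichlet(np.ones(10_000))
p_final = 0.25 * p_knn + 0.75 * p_lm
token = nucleus_sample(p_final, top_p=0.8, seed=0)
```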

4 Results

We find that despite interpolating the base LM's predictions with the retrieval distribution, these methods do not yield any significant improvement in text generation performance, and even worsen it on some metrics (Table 1). In this section, we provide an overview of our main results, perform more fine-grained analyses, and describe a human evaluation that supports the conclusions drawn from automatic metrics.

kNN-LM with and without retrieval, from Alon et al. (2022):

Model | MAUVE ↑ | PPL (GPT-3) ↓ | RankGen ↑ | Entity-F1 ↑ | Seq-Rep-1 ↓
GPT-2 small (no retrieval) | 0.773 | 13.1 | 11.7 | 0.14 | 0.57
GPT-2 small (+ retrieval) | 0.793 | 14.8 | 11.7 | 0.13 | 0.53

TRIME_EXT with and without external retrieval, from Zhong et al. (2022):

Model | MAUVE ↑ | PPL (GPT-3) ↓ | RankGen ↑ | Entity-F1 ↑ | Seq-Rep-1 ↓
TRIME (no ext retrieval) | 0.889 | 23.1 | 13.0 | 0.09 | 0.40
TRIME (+ ext retrieval) | 0.885 | 24.7 | 12.3 | 0.08 | 0.39

Table 1: Automatic evaluation metrics do not show consistent improvements in generation quality for interpolation-based LMs (kNN-LM, top; TRIME, bottom) compared to no-retrieval baseline LMs.

Interpolation-based LMs do not improve automatic text generation evaluation metrics:

We find that neither the kNN-LM nor TRIME significantly improves generation quality compared to the base LM, as shown by various evaluation metrics (Table 1). For the kNN-LM, while the MAUVE score improves by 2 points with retrieval, the GPT-3 perplexity of retrieval-augmented generations increases, and the RankGen score is identical. For TRIME, the no-retrieval baseline is actually slightly better across MAUVE, GPT-3 perplexity, and RankGen. In other words, there is no convincing winner; furthermore, contrary to the expectation that kNN-LMs may reduce hallucination by retrieving (and potentially copying) from the datastore, we also do not observe any improvement in Entity-F1 scores against the gold suffix. We observe a marginal (likely insignificant) improvement in the lexical diversity of the generations (shown by the lower Seq-Rep-1 score).

These results hold across different decoding algorithms:

The results in Table 1 are all from nucleus sampling. What if we change the decoding algorithm? To investigate the impact of the decoding algorithm on generation quality, we additionally evaluate the kNN-LM with top-k sampling and beam search. We observe in Table 2 that neither of these decoding algorithms changes the result: there is no clear winner between models with and without retrieval.

kNN-LM with and without retrieval, from Alon et al. (2022):

Model | Nucleus Sampling | Top-k Sampling | Beam Search
GPT-2 small (no retrieval) | 0.773 | 0.807 | 0.0363
GPT-2 small (+ retrieval) | 0.793 | 0.793 | 0.0338

Table 2: The observation that the kNN-LM does not significantly improve text generation performance (measured here via MAUVE) is consistent across a variety of decoding algorithms: nucleus sampling, top-k sampling (k = 40), and beam search (beam size = 5). We note that beam search decoding often generates repetitive text and therefore scores poorly with MAUVE.

4.1 Human evaluation

Having found that interpolation-based LMs do not notably improve text generation quality according to automatic evaluation metrics, we turn next to human evaluation, which is known to be more reliable for generation tasks (Celikyilmaz et al., 2020; Krishna et al., 2021), to compare the text generated by the kNN-LM against that of the baseline GPT-2 model. We hired three English teachers/editors on the freelance marketplace Upwork (https://www.upwork.com); the annotators were experienced in text generation evaluation and hired after careful selection. The evaluation was conducted on the Label Studio platform (Tkachenko et al., 2020-2022; https://labelstud.io/).

The annotators were given a prefix and two continuations of the context (one generated by the baseline LM and one generated with retrieval); the presentation order of the two continuations was randomized. The evaluators’ task was to decide which continuation is better, indicate whether it was hard to choose between the two (following Thai et al., 2022), and justify their choice in 3 to 4 sentences (a screenshot of our evaluation platform can be found in Appendix A). The evaluation focused on whether the generated text is grammatical, fluent, consistent, and logical. Each evaluator judged 45 pairs of continuations generated by the kNN-LM and GPT-2 and was paid $50 for their work.

Human evaluation shows no definitive winner between the kNN-LM and GPT-2 either:

On aggregate, baseline GPT-2 generations were preferred 51% of the time, vs. 49% for the kNN-LM. Additionally, the three annotators report that the decision was difficult for 37% of all cases. Out of the 45 comparison pairs, the three annotators agree on their choices in only 17 instances (37.78%), resulting in a Fleiss Kappa score of 0.17 (slight agreement). Figure 1 presents evaluator preferences when comparing kNN-LM and GPT-2 generations. The light area shows choices that were hard to make but where the evaluator still chose the corresponding type. For Rater1 and Rater3, the rate of difficult choices is as high as 42% and 47%, respectively, while for Rater2 it is 22%.

Figure 1: The plot presents how many times each type of generation (kNN-LM or GPT-2) is chosen by the evaluators. The dark area in each bar shows choices that were made confidently; the light area represents choices between kNN-LM and GPT-2 that were hard to make but where the evaluator still chose the corresponding type. Overall, annotators preferred GPT-2 baseline texts 51% of the time compared to 49% for the kNN-LM.

Both models make catastrophic errors at similar rates:

A qualitative analysis of the free-form choice justifications from the evaluators reveals that both the kNN-LM and GPT-2 make catastrophic mistakes. Table 4 gives four examples of bad continuations, along with the evaluators’ comments and our categorization of the errors. In the first row of the table, Continuation A, generated by the kNN-LM, contains repetitive content (i.e., ==ZAPU retreat==) and confuses ZAPU and ZIPRA in multiple places. The GPT-2 continuation in the second row states that a person was born in 1584 but was still alive in 1742; the third-row generation by the kNN-LM claims that U.S. Route 75 curves both northeast and northwest in the northbound direction. Furthermore, both the GPT-2 and kNN-LM generations change topics abruptly, as shown in the lower half of Table 4. Overall, the quantitative and qualitative analyses of the human evaluation results show that the kNN-LM does not clearly improve over its base GPT-2 model despite its significant improvement in perplexity.

5 Why do kNN-LMs fail to improve text generation quality?

Our evaluations (both human and automatic) do not show a significant quality increase when interpolating an LM’s predicted probability distribution with one formed via retrieval over a large external datastore. In this section, we try to understand why we do not observe an improvement by empirically analyzing the kNN-LM. We identify two potential reasons: (1) despite lowering aggregate perplexity, the kNN-LM improves the perplexity of only 42% of all test tokens, which suggests that improved predictions on this subset of tokens are counterbalanced by worsened predictions on the tokens that do not benefit from retrieval; and (2) the entropy of the retrieval distribution increases at a faster rate than that of the baseline LM as the model generates longer sequences, which implies that the retrieval distribution becomes noisier as more tokens are sampled, potentially due to the exposure bias stemming from the retriever having to rely on model-generated text as the query.

Figure 2: Across all POS tags, we observe that the kNN-LM does not increase the probability of the gold next token for the majority of predictions. For verbs, pronouns, and adjectives, it helps less than 40% of the time (i.e., it hurts the predictions of the majority of these tokens).

5.1 The kNN-LM only benefits a subset of tokens

Many studies have shown that kNN-LMs decrease perplexity via retrieval interpolation (Khandelwal et al., 2020; Alon et al., 2022; Drozdov et al., 2022). Previous work (Drozdov et al., 2022; Zhong et al., 2022) has also suggested that kNN-LMs benefit the inference of tokens of various part-of-speech (POS) tags to different degrees (by lowering the perplexity of the gold token). However, these works focus on aggregate perplexity averaged across tokens in the test examples and do not look at individual tokens or the percentage of tokens that actually benefit from retrieval.

Using the dataset we selected from WikiText-103 for evaluating text generation, we compute the percentage of gold tokens from our test examples that are assigned lower perplexity (higher probability) by the kNN-LM than by the base LM. We find that only 42% of the tokens benefit from the kNN-LM, while the remaining 58% are adversely affected (i.e., the kNN-LM assigns a smaller probability to the gold token than the baseline LM does). Moreover, we also calculate the percentage of gold tokens that benefit from the kNN-LM in each POS category (Figure 2) and consistently find a similar result: the kNN-LM only helps reduce the perplexity of a smaller subset of tokens. We show examples of the kNN-LM negatively impacting next-token prediction (assigning the gold token a lower probability than the base LM) in Table 3.
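The per-token comparison behind this 42% figure amounts to the following sketch, assuming teacher-forced gold-token probabilities from both models have already been collected (variable names are ours):

```python
import numpy as np

def knn_win_rate(p_lm_gold, p_knn_lm_gold):
    """Fraction of test tokens whose gold-token probability is higher under the kNN-LM
    than under the base LM (i.e., tokens that retrieval interpolation actually helps)."""
    p_lm_gold = np.asarray(p_lm_gold)          # P_LM(gold token | prefix) per test token
    p_knn_lm_gold = np.asarray(p_knn_lm_gold)  # P'(gold token | prefix) after interpolation
    return float(np.mean(p_knn_lm_gold > p_lm_gold))

# On our evaluation set, this fraction is roughly 0.42: the interpolated model assigns
# a lower probability to the gold token for the majority (about 58%) of positions.
```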

This means that despite lowering aggregate perplexity across the test set, the kNN-LM is more likely to hurt, rather than help, the inference of each individual token. We therefore hypothesize that during text generation, as the model samples a sequence of tokens, the advantages the kNN-LM brings to a smaller subset of tokens are offset by its detrimental impact on the remaining tokens.

5.2 The retriever becomes less reliable with longer generated sequences

Additionally, we observe that as the model generates longer sequences of text, the retriever component of the kNN-LM becomes less confident and less reliable in returning a high-quality next-token distribution. Since the kNN-LM relies on interpolating the next-token distribution from the baseline LM with that from the retriever, a lower-quality retriever distribution can compromise the resulting next-token distribution and adversely affect text generation performance.

Figure 3: We plot the ratio between the Shannon entropy of the retriever’s next-token distribution and that of the baseline LM’s softmax distribution as the number of generated tokens increases. The ratio increases for longer model-generated sequences, indicating that the retriever becomes less confident than the baseline LM as decoding progresses.

Figure 4: We plot the Jensen-Shannon divergence between the retriever’s next-token distribution and that of the baseline LM’s softmax distribution as the number of generated tokens increases. The increasing divergence indicates more disagreement between the retriever and the baseline LM about the next token to generate.

We plot the ratio between the Shannon entropy (Shannon, 2001) of the retriever’s next-token distribution and that of the baseline LM’s distribution, as a function of the index of the generated token, and find that the retriever’s entropy increases at a faster rate than that of the base LM (Figure 3). Given a |V|-dimensional probability distribution p, the entropy is computed as:

H(p) = -\sum_{i=1}^{|V|} p_i \log(p_i)

A higher entropy indicates a lower level of confidence (the distribution is closer to uniform over all tokens) and suggests that the retriever, when sampling long sequences, may be less reliable at identifying high-quality tokens to retrieve.
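The per-step entropy ratio plotted in Figure 3 can be computed as in the sketch below (our own illustration):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability distribution over the vocabulary."""
    p = np.asarray(p)
    return float(-np.sum(p * np.log(p + eps)))

def entropy_ratio(p_knn, p_lm):
    """Ratio between the retriever's entropy and the base LM's entropy at one decoding step."""
    return entropy(p_knn) / entropy(p_lm)

# e.g., record entropy_ratio(p_knn_t, p_lm_t) at each decoding step t and average over prefixes.
```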

Furthermore, we also plot the Jensen-Shannon divergence between the retriever distribution and the baseline LM distribution over the next token, with respect to token indices. Given the retriever distribution p and the baseline LM distribution q (both |V|-dimensional), we calculate the Jensen-Shannon divergence D_{JS} as:

D_{JS}(p \,\|\, q) = \frac{1}{2}\left(D_{KL}(p \,\|\, m) + D_{KL}(q \,\|\, m)\right)

where m is the mean distribution \frac{1}{2}(p+q) and D_{KL}(p \,\|\, q) denotes the Kullback-Leibler divergence, computed as \sum_{i=1}^{|V|} p_i \log\left(\frac{p_i}{q_i}\right).

We observe that the divergence between the retriever distribution and the base-LM distribution over the next token widens as the sampled sequence becomes longer (Figure 4), meaning that the two distributions exhibit increased disagreement as more tokens are generated.
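A matching sketch for the Jensen-Shannon divergence plotted in Figure 4 (again our own illustration):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D_KL(p || q)."""
    p, q = np.asarray(p), np.asarray(q)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_divergence(p, q):
    """Jensen-Shannon divergence between the retrieval and base-LM next-token distributions."""
    m = 0.5 * (np.asarray(p) + np.asarray(q))   # mean (mixture) distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```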

We hypothesize that the worsened reliability of the retriever over longer sampled sequences is likely a result of exposure bias during text generation (i.e., at test time, the retriever has to rely on model-generated queries that may contain artifacts or other distributional differences from human-written text). The retriever in the kNN-LM is non-parametric: both the input prefix and the contexts in the datastore are encoded by the baseline LM (without any additional retrieval parameters), which has been adapted to the WikiText-103 training corpus. However, during text generation, as the model iteratively samples more tokens and appends them to the input prefix, the input context is more likely to deviate from the contexts available in the training corpus, and hence becomes more out-of-distribution and challenging for the retriever to process accurately.

6 Discussion

In addition to the limitations of interpolation-based LMs described in Section 5, we hypothesize that other factors contribute to the shortcomings of the kNN-LM and TRIME for text generation. Specifically, the interpolation may impede the language model's ability to self-recover, and integrating the retrieval distribution introduces additional hyperparameters that may not be tuned for text generation. We discuss these potential issues here as they are interesting avenues for future work.

Retrieval interpolation may damage the self-recovery ability of LMs:

Language models exhibit some degree of self-recovery ability (He et al., 2021b), i.e., they can regain fluency and coherence even after previously generating poor-quality tokens. This self-recovery capability is attributed to the LM’s ability to pay close attention to recent context and ignore information from the long-range history of past context. However, we hypothesize that when interpolation-based LMs encounter artifacts (e.g., non-factual or disfluent text) in a distorted prefix q̃_t, they may be less likely to recover than baseline LMs, as the retrievals may further increase the probability of completions that resemble those artifacts. Furthermore, as we continuously sample tokens and append them to the prefix, which the retriever uses as the query to construct P_{KNN}(w_t | q̃_t), the retriever may encounter additional exposure bias, as shown in Section 5.2, negatively impacting the quality of P_{KNN}(w_t | q̃_t). Consequently, even when the baseline LM “recovers” from a distorted past context by producing a high-quality next-token distribution P_{LM}(w_t | q̃_t), the retriever may re-introduce the distortion when P_{LM}(w_t | q̃_t) is interpolated with P_{KNN}(w_t | q̃_t).

Hyperparameters introduced by the kNN-LM are not optimized for text generation:

The kNN-LM introduces two important hyperparameters: the relative weight λ between the two distributions and the softmax temperature τ_KNN for the kNN distribution. Recent work (Xu et al., 2023) highlights the significance of tuning τ_KNN for achieving optimal kNN-LM performance, as measured by perplexity. Similarly, we hypothesize that λ plays a vital role, as it controls the relative importance assigned to the kNN retriever and the baseline LM; instead of tuning λ to optimize perplexity, we may want to consider a context-dependent λ as in Drozdov et al. (2022) for generation (e.g., only use the retrieval distribution when it is very confident). Finally, the interpolation may warrant the design of new decoding algorithms specialized for retrieval-augmented generation.
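As one hypothetical instantiation of such a context-dependent λ, retrieval could be gated on the entropy of its own distribution so that it is trusted only when confident; the sketch below is purely illustrative (the threshold value is arbitrary) and is not a method we evaluate in this paper.

```python
import numpy as np

def gated_lambda(p_knn, base_lambda=0.25, entropy_threshold=4.0):
    """Use the retrieval distribution only when its Shannon entropy is below a threshold."""
    h = -np.sum(p_knn * np.log(p_knn + 1e-12))
    return base_lambda if h < entropy_threshold else 0.0

# p_final = gated_lambda(p_knn) * p_knn + (1 - gated_lambda(p_knn)) * p_lm
```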

7 Conclusion

In this work, we show that despite the significant perplexity improvements brought by interpolation-based retrieval-augmented LMs such as the kNN-LM, these methods fail to improve the LM’s text generation performance. Text generated by kNN-LMs and by baseline LMs without retrieval shows no significant quality difference according to both automatic text generation evaluation metrics and human evaluation. Upon closer analysis, we identify flaws in using kNN-LMs for autoregressive text generation: the method only benefits a minority of token predictions, and the retriever’s quality deteriorates when generating long-form text. We hope our findings inspire future research on better training and inference methods so that the impressive perplexity improvements of kNN-LMs can be translated into gains in text generation quality.

Limitations

Our work does not study all data, model, and evaluation configurations of interpolation-based LMs. We focus on Wikipedia text because it is the primary evaluation corpus for both kkNN-LM and TRIME. That said, it is unclear if our findings would be similar in other domains such as narrative or dialogue text, or in other languages. Additionally, we focus on the 100M token datastore size, although kNN-LM can scale effectively to datastores of 3B words. Using a larger datastore may lead to further perplexity decreases, but we do not think this contradicts our finding that text generation degrades as retrieval quality does. We focus exclusively on interpolation-based LMs in this work, but similar issues for other retrieval-augmented LMs such as RETRO (Borgeaud et al., 2021) may also exist and be worth investigating further. Finally, our human evaluation does not specifically account for diversity, although some dimensions of this are captured by our automated metrics. Due to the overall low quality of text generated by LMs with and without retrieval, reading their outputs results in high cognitive burden on annotators, which might be ameliorated by using stronger LMs than GPT-2.

References

  • Alon et al. (2022) Uri Alon, Frank Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. 2022. Neuro-symbolic language modeling with automaton-augmented retrieval. In International Conference on Machine Learning, pages 468–485. PMLR.
  • Basu et al. (2022) Soumya Sankar Basu, Ankit Singh Rawat, and Manzil Zaheer. 2022. Generalization properties of retrieval-based models. ArXiv, abs/2210.02617.
  • Borgeaud et al. (2021) Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, T. W. Hennigan, Saffron Huang, Lorenzo Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and L. Sifre. 2021. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning.
  • Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
  • Celikyilmaz et al. (2020) Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.
  • Chen et al. (2017) Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics.
  • Drozdov et al. (2022) Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, and Mohit Iyyer. 2022. You can’t pick your neighbors, or can you? when and how to rely on retrieval in the kNN-LM. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2997–3007, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
  • Grave et al. (2017) Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language models with a continuous cache. In International Conference on Learning Representations.
  • Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. In International Conference on Machine Learning.
  • Hashimoto et al. (2019) Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1689–1701, Minneapolis, Minnesota. Association for Computational Linguistics.
  • He et al. (2022) Hangfeng He, Hongming Zhang, and Dan Roth. 2022. Rethinking with retrieval: Faithful large language model inference. ArXiv, abs/2301.00303.
  • He et al. (2021a) Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. 2021a. Efficient nearest neighbor language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5703–5714, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • He et al. (2021b) Tianxing He, Jingzhao Zhang, Zhiming Zhou, and James Glass. 2021b. Exposure bias versus self-recovery: Are distortions really incremental for autoregressive text generation? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5087–5102, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Holtzman et al. (2020) Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.
  • Izacard and Grave (2021) Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics.
  • Izacard et al. (2022) Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot Learning with Retrieval Augmented Language Models.
  • Khandelwal et al. (2021) Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In International Conference on Learning Representations (ICLR).
  • Khandelwal et al. (2020) Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. In International Conference on Learning Representations (ICLR).
  • Krishna et al. (2023) Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, and Kyle Lo. 2023. Longeval: Guidelines for human evaluation of faithfulness in long-form summarization. In Conference of the European Chapter of the Association for Computational Linguistics.
  • Krishna et al. (2022) Kalpesh Krishna, Yapei Chang, John Wieting, and Mohit Iyyer. 2022. RankGen: Improving text generation with large ranking models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 199–232, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
  • Krishna et al. (2021) Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to progress in long-form question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4940–4957, Online. Association for Computational Linguistics.
  • Lan et al. (2023) Tian Lan, Deng Cai, Yan Wang, Heyan Huang, and Xian-Ling Mao. 2023. Copy is all you need. In The Eleventh International Conference on Learning Representations.
  • Lee et al. (2022) Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Factuality enhanced language models for open-ended text generation. In Advances in Neural Information Processing Systems.
  • Lewis et al. (2020) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc.
  • Mallen et al. (2023) Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2023. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. In ACL.
  • McCoy et al. (2021) R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. 2021. How much do language models copy from their training data? evaluating linguistic novelty in text generation using raven. ArXiv, abs/2111.09509.
  • Merity et al. (2016) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models.
  • Metzler et al. (2022) Don Metzler, Fernando Diaz, Hamed Zamani, Mike Bendersky, and Mostafa Dehghani. 2022. Retrieval enhanced machine learning. In SIGIR 2022: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Perspectives Track).
  • Mialon et al. (2023) Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey. ArXiv, abs/2302.07842.
  • Nan et al. (2021) Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021. Entity-level factual consistency of abstractive text summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2727–2733, Online. Association for Computational Linguistics.
  • Novikova et al. (2017) Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics.
  • Pillutla et al. (2021) Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaïd Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. In Neural Information Processing Systems.
  • Pimentel et al. (2023) Tiago Pimentel, Clara Isabel Meister, and Ryan Cotterell. 2023. On the usefulness of embeddings, clusters and strings for text generation evaluation. In The Eleventh International Conference on Learning Representations.
  • Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
  • Rae et al. (2021) Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John F. J. Mellor, Irina Higgins, Antonia Creswell, Nathan McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, L. Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, N. K. Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Tobias Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d’Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew G. Johnson, Blake A. Hechtman, Laura Weidinger, Iason Gabriel, William S. Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem W. Ayoub, Jeff Stanway, L. L. Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher. ArXiv, abs/2112.11446.
  • Ram et al. (2023) Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models.
  • Shannon (2001) Claude Elwood Shannon. 2001. A mathematical theory of communication. ACM SIGMOBILE mobile computing and communications review, 5(1):3–55.
  • Shi et al. (2023) Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Replug: Retrieval-augmented black-box language models. ArXiv, abs/2301.12652.
  • Thai et al. (2022) Katherine Thai, Marzena Karpinska, Kalpesh Krishna, Bill Ray, Moira Inghilleri, John Wieting, and Mohit Iyyer. 2022. Exploring document-level literary machine translation with parallel paragraphs from world literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9882–9902, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
  • Tkachenko et al. (2020-2022) Maxim Tkachenko, Mikhail Malyuk, Andrey Holmanyuk, and Nikolai Liubimov. 2020-2022. Label Studio: Data labeling software. Open source software available from https://github.com/heartexlabs/label-studio.
  • Trivedi et al. (2022) H. Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. ArXiv, abs/2212.10509.
  • Welleck et al. (2020) Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
  • Wu et al. (2022) Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. In International Conference on Learning Representations.
  • Xu et al. (2023) Frank F. Xu, Uri Alon, and Graham Neubig. 2023. Why do nearest neighbor language models work? ArXiv, abs/2301.02828.
  • Yogatama et al. (2021) Dani Yogatama, Cyprien de Masson d’Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362–373.
  • Zhong et al. (2022) Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training language models with memory augmentation. In Conference on Empirical Methods in Natural Language Processing.

Appendix A

Example 1
Context: The lyrics were inspired by a story ... To me, that's the way a great rock 'n' roll concert should be: a place where everyone comes together ... Maybe that's the dream of all art: to break down the barriers and the divisions between
Ground-truth token: "people" (base-LM probability: 0.26; kNN-LM probability: 0.23)
Most probable tokens, base-LM: "the" (0.20), "us" (0.09), "art" (0.03), "rock" (0.02)
Most probable tokens, kNN-LM: "the" (0.23), "us" (0.07), "good" (0.02), "art" (0.02)
Analysis: In this example, the base LM predicts the ground-truth noun token "people" with the highest probability of all tokens (0.26). However, after interpolating with the retrieval distribution, the kNN-LM decreases the probability of the ground-truth token.

Example 2
Context: Richmond finished the 1984 season 12th in points, with 11 ... In the Busch Series, he qualified at the pole position in the two races he entered, and won the Charlotte race. Richmond joined Hendrick Motorsports in 1986, where he teamed up with veteran crew chief Harry Hyde. It took the team until the middle of the season
Ground-truth token: "to" (base-LM probability: 0.78; kNN-LM probability: 0.64)
Most probable tokens, base-LM: "," (0.07), "for" (0.03), "when" (0.02), "that" (0.02)
Most probable tokens, kNN-LM: "," (0.10), "for" (0.06), "." (0.04), "and" (0.02)
Analysis: The ground-truth token to be predicted is the preposition "to", which the base LM correctly predicts with very high probability. However, the kNN-LM decreases the probability of the ground-truth token significantly compared to the base LM.

Table 3: Examples where the kNN-LM hurts next-token prediction (for tokens of different parts of speech, such as nouns and prepositions) by assigning the gold token a lower probability than the base LM.
Example 1
Context: [...] = = = Shootdown = = = A group of ZIPRA guerrillas, armed with a Strela-2 launcher, waited in the bush beneath Flight 825's flightpath, and
Continuation A (kNN-LM): two soldiers opened fire on the guerrillas. [...] The ZIPRA and ZIPRA units exchanged fire, and several ZIPRA members were shot. After the fire was over, the ZIPRA and ZIPRA guerrillas returned fire and began shooting back, and one of the ZIPRA unsuccessfully attacked a ZAPU base at the head of Flight 825, forcing the major to retreat. = = = = ZAPU retreat = = = = = = = = ZAPU retreat = = = = [...] The ZIPRA moved up the hill and captured several ZIPRA members, forcing them to withdraw. [...]
Evaluator comment: Both of these continuations have problems. I chose A because its problems seems to influence the meaning less than the problems in B. [...] Continuation A has some repeated words and in a few spots ZIPRA and ZAPU seem to get mixed up, leading to sentences like this: "The ZIPRA moved up the hill and captured several ZIPRA members, forcing them to withdraw." It was easier to recover from those confusions than from the ones in Continuation B.
Error: Repetitive content; contradiction

Example 2
Context: Hu was born in <unk> County, Anhui Province in 1584 or early 1585. [...] By 1619, Hu had moved to Nanjing where he lived with his wife Wu.
Continuation B (GPT-2): During this time he also became a leading scholar of the court, a position he held until 1624, [...]. His last year at the academy was 1715. In 1725, Hu was appointed secretary of the government [...]. In 1727, he was appointed as vice-president of the Academy, and he remained the academy's secretary until 1742, [...].
Evaluator comment: I selected that this was a difficult decision [...]. However, Continuation B is an impossible choice as it has Hu being born in 1584 and still being alive in 1725 which of course is impossible. I was left with process of elimination in my opinion.
Error: Illogical; wrong chronological details

Example 3
Context: The lanes of US 75, which are on the bottom level, are 20 feet (6 m) below ground level, giving the structure a total height of 140 feet from bottom to top. As part of the project, I-635 was widened to include four dedicated high-occupancy vehicle lanes (HOV lanes) that are barrier-separated and reversible. The roads on the five levels are: Level I: US 75, an eight lane highway, four lanes going each way; Level II: The junction of the two six-lane frontage roads, each having three through
Continuation A (kNN-LM): traffic lanes with one lane in each direction. Level III: In the northbound direction, US 75 curves to the northeast and US 75 curves to the northwest to follow a more southerly routing. The northbound lanes are two lanes in each direction, while the southbound lanes are two lanes. [...] = = Major intersections = = [...] = Battle of the Kalka River = The Battle of the Kalka River (Polish: Kalka, pronounced [ˈkaʊ] or [ˈkaʊˈmaʊ]) was a decisive battle of the 15th and 16th centuries of the 15th century in Austrian occupied Poland. The battle was fought in the middle of the Russo
Evaluator comment: Continuation A gives more information and is more specific than B. It has some lack of clarity – Level III has the road curving both northeast and northwest – but overall it's more informative than just "the intersection of two frontage roads." I would have ruled out A because of the Battle of the Kalka River section at the bottom. It doesn't really have a place in this text. But B also has the Battle of the Falkland Islands, so that's a wash. A it is.
Error: Abrupt topic change; illogical

Example 4
Context: (Same as Example 3)
Continuation B (GPT-2): traffic lanes. Level III: The intersection of the two six-lane frontage roads, each having four through traffic lanes. Level IV: The intersection of the two six-lane frontage roads, each having two through traffic lanes. Level V: The intersection of the two six-lane frontage roads, each having two through traffic lanes = = = Construction = = = The project was approved in July 1997 and completed on December 17, 1997, with the groundbreaking ceremony taking place at the State University of New York (SUNY) campus in New York City. [...] = = = Major intersections = = = [...] = Battle of the Falkland Islands =
Evaluator comment: (Same as Example 3)
Error: Abrupt topic change; repetitive content

Table 4: Selected representative examples with evaluators' comments. The error categorizations are ours. Contents are shortened for the sake of space.
Figure 5: The interface of the human evaluation. Each task consists of a context text, two continuations, two choices, and a free-form justification text box.