Towards a Robust Retrieval-Based Summarization System
Abstract
This paper describes an investigation of the robustness of large language models (LLMs) for retrieval augmented generation (RAG)-based summarization tasks. While LLMs provide summarization capabilities, their performance in complex, real-world scenarios remains under-explored. Our first contribution is LogicSumm, an innovative evaluation framework incorporating realistic scenarios to assess LLM robustness during RAG-based summarization. Based on limitations identified by LogicSumm, we then developed SummRAG, a comprehensive system to create training dialogues and fine-tune a model to enhance robustness within LogicSumm’s scenarios. SummRAG is an example of our goal of defining structured methods to test the capabilities of an LLM, rather than addressing issues in a one-off fashion. Experimental results confirm the power of SummRAG, showcasing improved logical coherence and summarization quality. Data, corresponding model weights, and Python code are available online.
1 Introduction
In the evolving landscape of automated text summarization, large language models (LLMs) have emerged as key players, demonstrating remarkable efficiency in distilling complex information into concise summaries. Pioneering works such as those by Goyal et al. (2022); Liu et al. (2022b, a) highlight the progress made in leveraging LLMs for this purpose. Related benchmarks underscore LLMs’ growing significance in this field Zhang et al. (2024). Despite these advances, LLMs encounter a critical bottleneck: their training datasets are static, making the integration of new information post-training a formidable challenge.
Retrieval Augmented Generation (RAG) was introduced to address this limitation. By integrating external knowledge sources, LLMs are empowered to dynamically incorporate up-to-date information in real time during generation tasks Lewis et al. (2020); Izacard et al. (2022); Guu et al. (2020). RAG promises to address the issue of a static knowledge base in an LLM, paving the way for more accurate and up-to-date summaries.
Although RAG integration with LLMs offers a promising avenue for more comprehensive and current summaries, research specifically focused on summarization using RAG and LLMs is under-explored. This gap manifests in two significant limitations: (1) Evaluation Pipeline. The absence of targeted evaluation pipelines for assessing this specific use case, and (2) Effective Methods. The scarcity of research directly discussing the application of RAG in conjunction with LLMs for summarization.
To address these gaps in summarization research using LLMs with RAG, we propose a novel evaluation pipeline, LogicSumm. This pipeline is designed to systematically understand and benchmark the summarization capabilities of LLMs augmented with RAG. Our approach addresses the most commonly encountered scenarios during summarization, split into seven distinct cases, and provides a comprehensive framework for evaluation. We conduct experiments using popular LLMs integrated with RAG across these cases.
Across our seven cases, we observed a significant performance decline in previous RAG-based summarization approaches for input that included documents that were irrelevant to the topic being summarized. This finding highlights a significant challenge: the difficulty in effectively identifying relevant documents for stable summarization. We develop a novel support system SummRAG that constructs data contextually to fine-tune a model and improve its robustness in all scenarios with minimal reliance on external datasets. This framework boosts the performance of public language models in summarization tasks, effectively narrowing the performance gap with more advanced but less accessible models like GPT-4. It also demonstrates one of our motivating objectives: developing structured, generalizable frameworks to address related classes of issues, rather than solving problems in a one-off fashion.
In summary, our paper provides the following novel contributions.
1. We investigate the important but under-explored domain of RAG-based summarization with LLMs. To the best of our knowledge, we propose the first evaluation pipeline, LogicSumm, tested using seven summarization scenarios that thoroughly assess the summarization capabilities of LLMs under a range of common use cases.
2. We present SummRAG, a comprehensive end-to-end framework that encompasses both dialogue generation and model fine-tuning to improve the robustness and overall performance of RAG-based summarization.
3. We publish a new dataset from SummRAG that is model-agnostic and capable of enhancing public LLMs in scenarios pertinent to RAG-based summarization tasks.
2 Related Work
2.1 Large Language Models
The evolution of LLMs began with the advent of transformers Vaswani et al. (2017). This development significantly enhanced language models’ versatility across various tasks, a breakthrough prominently showcased by BERT Devlin et al. (2018). Following these advances, the focus shifted towards the development of larger-scale models informed by the scaling law Kaplan et al. (2020). This led to the creation of groundbreaking models like GPT Brown et al. (2020), LLaMA Touvron et al. (2023), PaLM Chowdhery et al. (2023), Jurassic Lieber et al. (2021), Mistral Jiang et al. (2023), and Claude, characterized by their tens of billions of parameters. These models unlocked advanced in-context learning and zero-shot performance across various tasks.
2.2 Retrieval Augmented Generation
Retrieval-augmented generation was introduced as a pivotal enhancement for language models, providing access to a wealth of additional knowledge by retrieving information from external databases Lewis et al. (2020); Guu et al. (2020); Borgeaud et al. (2022). When combined with LLMs, RAG significantly enhances the ability for up-to-date and accurate generation tasks such as open-domain QA Izacard & Grave (2020); Karpukhin et al. (2020); Guu et al. (2020), dialogue Cai et al. (2018), and code generation Parvez et al. (2021). Certain challenges have also been noted, however. Studies highlight that noise in the retrieved text can adversely affect the performance of the language model, potentially leading to misinformation or errors Chen et al. (2023); Xu et al. (2024, 2023). There is also the potential for conflict between user-provided text and information retrieved by RAG Jin et al. (2024). These issues underscore the necessity to develop a more refined framework that enhances both the robustness and consistency of LLM-based RAG systems.
2.3 Text Summarization
Text summarization involves condensing the core content of a text document into a concise summary, extracting and synthesizing key points to accurately represent the original article Nenkova et al. (2011); Chen & Bansal (2018). Traditional approaches have utilized methods based on word frequency to determine salience Nenkova et al. (2006) and explored discourse semantics Steinberger et al. (2007). Fine-tuning techniques have also been applied Liu & Lapata (2019); Lewis et al. (2019); Zhang et al. (2020a); Liu et al. (2022b). More recently, LLMs have emerged as a central component in text summarization, significantly impacting the development and effectiveness of summarization techniques Zhang et al. (2024); Tang et al. (2023); Van Veen et al. (2023).
3 LogicSumm

LogicSumm builds a structured foundation for testing by defining seven common summarization scenarios divided into four higher-level aspects. We begin by presenting the overall pipeline for evaluating a summarization task. Our framework is depicted in Figure 1.
Problem Formulation. For a given user query q, the retriever R is tasked with fetching the top-k documents from a database D of document vectors via a semantic similarity search mechanism. The LLM’s generator produces a summary based on different source information. We formally define four aspects, where an aspect is a high-level query type that the system needs to answer, and a scenario is a particular type of sub-aspect with unique scenario properties.
Aspect 1: y = LLM(q ⊕ d), where d = R(q, D)
Aspect 2: y = LLM(t)
Aspect 3: y = LLM(t ⊕ d), where d = R(t, D)
Aspect 4: y = LLM(q ⊕ d_1 ⊕ … ⊕ d_k), where {d_1, …, d_k} = R(q, D)
where y is the generated summary and ⊕ is the string concatenation operator. Summarization quality is heavily influenced by the accuracy of the retriever and the quality of the document vector store from which information is sourced.
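As a concrete illustration of the retrieval component R, the following minimal sketch fetches the top-k documents by semantic similarity. It is not the implementation used in our system; it assumes an in-memory document list and the sentence-transformers package, and the embedding model name is illustrative.

```python
# Minimal sketch of the retriever R: embed the query and all documents,
# then return the top-k documents by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer


def retrieve_top_k(query: str, documents: list[str], k: int = 1) -> list[str]:
    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
    doc_vecs = model.encode(documents, normalize_embeddings=True)
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = np.dot(doc_vecs, query_vec)   # cosine similarity (vectors are normalized)
    top_idx = np.argsort(-scores)[:k]      # indices of the k most similar documents
    return [documents[i] for i in top_idx]


# Example: top-1 retrieval for a summarization query (Aspect 1).
docs = ["Amazon stock projections for next year ...", "A recipe for Peking Duck ..."]
print(retrieve_top_k("I'd like a summary about the American stock market", docs, k=1))
```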
Aspect Scenarios. In each constructed scenario, we expect that LLMs will not only undertake actions with logical precision but also exhibit high-quality summarization capabilities.
• Aspect 1: Scenarios 1 and 2. The LLMs should discern the relevance of the retrieved document d to the query q.
• Aspect 2: Scenario 3. The LLMs must summarize the user’s provided text directly.
• Aspect 3: Scenario 4. The LLMs are expected to indicate the lack of relevance of the retrieved document d to the user’s text and suggest summarizing solely based on the user’s text.
• Aspect 3: Scenario 5. The LLMs should recognize both the relevance and the absence of conflict between the user’s text and d, then summarize both sources.
• Aspect 3: Scenario 6. The LLMs must identify relevance coupled with an informational conflict between the user’s text and d.
• Aspect 4: Scenario 7. The LLMs are expected to recognize the relevance of the retrieved documents to the user’s query and exclude any irrelevant documents from the summarization.
Motivating Observations: With the introduction of LogicSumm we are equipped to assess the real-world proficiency of LLMs leveraging RAGs to perform summarization tasks. We deploy the Mistral-7B Instruct model Jiang et al. (2023) for this evaluation.
3.1 Implementation and Evaluation Metrics
Our evaluation establishes baselines using a collection of autoregressive LLMs: GPT-3.5, Claude 2, Jurassic, and LLaMA2-13B, conducted in a zero-shot manner where instructions were given to complete tasks within the LogicSumm framework. We also applied advanced prompting techniques for the Mistral-7B Instruct model in both zero-shot and one-shot “Chain of Thought” contexts Wei et al. (2022). Within each aspect, the same prompt is used for all scenarios it includes. The prompt details are described in Appendix A.3.
Our assessment criteria included not only LogicSumm’s logical accuracy but also the quality of the summaries, evaluated using BertScore Zhang et al. (2020b) and Rouge 1/2/L Lin (2004). We employ GPT-4 Turbo to assess whether a model’s output maintains logical correctness. It is important to note that summary quality was assessed only in Scenarios 2 and 3, as the retrieved text in the other scenarios was deemed irrelevant to the user’s text, rendering summary quality evaluation unnecessary. Evaluating logical accuracy in Scenario 3 is also unnecessary because it is a direct summarization of the user’s text. We follow a procedure to generate test data similar to the method employed to create training data. The gold summaries are derived from the outputs produced by GPT-4 Turbo. For the seven scenarios we generated 57, 48, 50, 36, 50, 43, and 98 samples, respectively.
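For reference, the sketch below shows how the summary-quality metrics can be computed for a candidate summary against a GPT-4 Turbo gold summary. It is a minimal illustration, not our exact evaluation harness, and it assumes the bert-score and rouge-score Python packages.

```python
# Minimal sketch: score a candidate summary against a gold (GPT-4 Turbo) summary
# with BertScore and Rouge 1/2/L.
from bert_score import score as bert_score
from rouge_score import rouge_scorer


def score_summary(candidate: str, gold: str) -> dict:
    # BertScore returns precision/recall/F1 tensors; one element per candidate.
    p, r, f1 = bert_score([candidate], [gold], lang="en")
    rouge = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    rouge_scores = rouge.score(gold, candidate)  # signature: score(target, prediction)
    return {
        "bertscore": (p.item(), r.item(), f1.item()),
        "rouge1": rouge_scores["rouge1"].fmeasure,
        "rouge2": rouge_scores["rouge2"].fmeasure,
        "rougeL": rouge_scores["rougeL"].fmeasure,
    }
```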

To examine performance with multiple top-ranked documents (k > 1), we evaluated summary quality for k = 5, 8, and 10. For k = 5, all five documents were relevant; for k = 8 and k = 10 we introduced three and five irrelevant documents, respectively, to test our model’s resilience in handling irrelevant content within the top-ranked documents. We benchmark our method against other general RAG-based summarization frameworks including Stuff Summarization, Map-Reduce Summarization, and Refine Summarization utilizing Mistral-7B Instruct. Additionally, we provide explicit instructions to disregard irrelevant documents. The specifics of the prompts can be found in Appendix A.4.
Our observations from LogicSumm suggest that current models exhibit limitations when attempting to recognize the relevance between a user’s text and the retrieved text while following a sequence of instructions: first assessing the relevance of the retrieved text, then determining whether to proceed with summarization. Testing prompts can be found in Appendix A.1. Detailed explanations are included in the Experiment Results section below. We defer this discussion until introducing SummRAG, since SummRAG performance is compared to existing, state-of-the-art approaches.
4 SummRAG
Initial findings from LogicSumm suggest that general-purpose LLMs may not be sufficiently robust for RAG-based summarization. This led to a complete system, SummRAG, that uses GPT-4 Turbo to create training dialogues and then fine-tunes a model to produce more reliable LLMs for each situation tested with LogicSumm. We begin by creating special tokens embedded in the generated dialogue to ensure it has a proper format. We then focus on the top-1 document case (Aspects 1, 2, and 3) before moving to situations involving the top-k documents (Aspect 4). We conclude by fine-tuning the model using the dialogues we produce.
Type | Definition | Aspect
---|---|---
[Retrieval], [No Retrieval] | Retrieval needed | 1–4
[Relevant], [Irrelevant] | Retrieval text is relevant / irrelevant to the user’s text | 1–4
[Continue to use User’s Text] | Retrieval text is not relevant to the user’s text | 3
[Information Conflict] | Retrieval text is relevant to the user’s text but there is an information conflict between them | 3
[Augmenting User’s Text] | Retrieval text is relevant to the user’s text with no information conflict | 3
[Context], [/Context] | An intermediate summarization | 4
<Count>, </Count> | Count of documents left to summarize | 4
[Topic] | Memorize the user’s topic | 4
Throughout this section we use D to represent the collection of document vectors, including datasets from CNN Daily Mail and XSum available in the HuggingFace repository. R is the retriever used in our framework, and t is the user’s text. Additionally, d_1, …, d_k denotes the collection of retrieved documents ranked based on their semantic similarity, and d_r is a random document selected from D.
4.1 Logical Special Tokens
We insert logical special tokens whose meanings are introduced to GPT-4 during dialogue creation, providing clarity and compactness in the conversation while ensuring proper formatting. In the later stages of model fine-tuning we substitute these tokens with natural language text to avoid the extensive instruction data required to extend the LLMs to automatically manage the special tokens.
In addition to the tokens outlined in Table 1, we incorporate function-calling tokens within the generated dialogue Qin et al. (2023). This allows the LLMs to interface with our custom text mining APIs (https://go.ncsu.edu/social-media-viz) that are capable of performing tasks such as analyzing the sentiment of summaries or accessing online news sources to generate insightful sentiment visualizations.
4.2 Dialogue Generation: Top-1 Document
The top-1 document scenario encompasses Aspects 1, 2, and 3 in LogicSumm. We apply GPT-4 Turbo to generate the dialogue by introducing the meaning of the special tokens and providing a one-shot demonstration in the prompts.


The utility of the special tokens defined in Table 1 is illustrated in Figure 3. To improve diversity in conversations, we instruct GPT-4 Turbo to incorporate variations for certain sentences, such as “I’d like to have a summary xxx”, in the user instruction component. For Aspect 1, Scenario 1 is based on a random document d_r and its topic, extracted using GPT-3.5 Turbo, to establish relevancy. Scenario 2 involves a randomly chosen unrelated topic to create irrelevancy. For Aspect 2, Scenario 3 uses d_r to represent the user’s text. For Aspect 3, Scenario 4 utilizes two random, unrelated documents to introduce irrelevancy. In Scenario 5, GPT-4 Turbo is prompted to generate topics it can output as factual stories, then to create two documents on the same subtopic to ensure relevancy. Scenario 6 instructs GPT-4 Turbo to introduce information conflicts into one of the two documents, such as changes in numbers, factual reversals, and date alterations. This pair of documents, showcasing information conflict, represents the user’s input and the retrieved text, respectively.
The meanings of the special tokens, the one-shot demonstration, and the documents for each aspect described above are provided as prompts to GPT-4 Turbo to generate the intended dialogue. Details of these prompts are available in Appendix A.2.
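The sketch below illustrates how such a dialogue-generation call can be issued. It is a simplified, hypothetical version of our prompting setup, assuming the openai Python client; the two placeholder constants stand in for the full token definitions and one-shot demonstration in Appendix A.2, and the model identifier is an assumption.

```python
# Minimal sketch of generating one training dialogue with GPT-4 Turbo.
from openai import OpenAI

TOKEN_INSTRUCTION = "You are instructed to construct the conversation ..."  # abbreviated; see Appendix A.2
ONE_SHOT_EXAMPLE = '["role": "user", "content": "..." , ...]'                # abbreviated; see Appendix A.2

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_dialogue(topic: str, retrieval_text: str) -> str:
    prompt = (
        f"{TOKEN_INSTRUCTION}\n\nHere is one example:\n{ONE_SHOT_EXAMPLE}\n\n"
        f"Here is the topic: {topic}\nHere is the retrieval text: {retrieval_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",                                  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,                                      # some diversity in the generated dialogues
    )
    return response.choices[0].message.content
```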
4.3 Dialogue Generation: Top-k Documents
To transition from the top-1 document to the top-k documents (Aspect 4) we introduce the notion of a context. The context is a text segment that stores the intermediate state of multi-document summarization. It allows LLMs to adopt a Markov-like thought process for summarizing documents, where the summarization at each step relies solely on the context and the document retrieved at that particular step (see Figures 4 and 5). This frees LLMs from storing all the documents in the input prompt.
The special tokens <Count> 0 documents left to summarize </Count> serve as a stopping criterion indicating there are no more documents to summarize. At this point the LLM returns the final summarization to the user. We generate conversations for the top-k scenario with k = 5. However, our experiments demonstrate that utilizing chat-based models with general instruction capabilities such as Mistral-7B Instruct does not limit the multi-document summarization to only five documents. This is achieved by strategically using the <Count> token to allow flexibility in the number of summarized documents. The step-by-step prompts are available in Appendix A.2.
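A minimal sketch of this iterative, context-carrying inference loop is shown below. It assumes a generic `llm` callable rather than our fine-tuned model, and the token strings follow Table 1; the prompt layout is simplified for illustration.

```python
# Minimal sketch of Aspect 4 inference: summarize the top-k documents one by one,
# carrying only the intermediate context between steps.
from typing import Callable, List


def summarize_documents(llm: Callable[[str], str], topic: str, documents: List[str]) -> str:
    context = "No context till now"
    for i, doc in enumerate(documents):
        remaining = len(documents) - 1 - i
        prompt = (
            f"[Retrieval] <paragraph> {doc} </paragraph> "
            f"[Context] {context} [/Context] "
            f"<Count> {remaining} documents left to summarize </Count> "
            f"[Topic] {topic}"
        )
        # The model either keeps the context unchanged (irrelevant document) or folds
        # the new document into an updated intermediate summary (relevant document).
        context = llm(prompt)
    return context  # once <Count> reaches 0, the context is the final summary
```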
4.4 Model Fine-Tuning

We begin by collecting a custom dialogue dataset to use to fine-tune the Mistral-7B Instruct model checkpoints. However, we encountered difficulties in teaching the model to understand the special token definitions. To address this, we convert the tokens into text using a transformation table (Appendix A.5). We also insert aspect-specific system prefixes to further guide the model’s learning process. The function-calling tokens (the API token and [Argument]) within the generated dialogue are changed to the text “Here is the API: ” and “The argument of the API: ”.
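The token-to-text conversion itself is a simple string substitution; the sketch below is a minimal version whose mapping abbreviates the transformation table in Appendix A.5.

```python
# Minimal sketch: convert logical special tokens in a generated dialogue turn into
# natural-language text before fine-tuning. The mapping abbreviates Appendix A.5.
TOKEN_TO_TEXT = {
    "[Retrieval]": "Here is the retrieval text to be used for summarization",
    "[No Retrieval]": "There is no need to retrieve since user provides its own text",
    "[Relevant]": "The retrieval text is relevant with the user's text",
    "[Irrelevant]": "The retrieval text is irrelevant with the user's text",
    "[Argument]": "The argument of the API: ",
}


def detokenize(turn: str) -> str:
    # Replace each special token with its natural-language alternative.
    for token, text in TOKEN_TO_TEXT.items():
        turn = turn.replace(token, text)
    return turn
```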
Next, we collect training data to fine-tune the Mistral-7B Instruct model. For Aspects 1, 2, and 3 we apply the model’s chat template directly. For Aspect 4, rather than training on the entire dialogue, we focus on adjacent pairs of steps. Here, the previous step serves as the instruction and the subsequent step as the response. The retrieval text is masked to ensure it conforms to the correct format. Given a dialogue (x, y) ∼ 𝒟 from the custom dataset, where 𝒟 is the data distribution implicitly defined in the dialogue generation process, we train our model, parameterized by θ, using the standard next-token objective:
max_θ 𝔼_{(x, y) ∼ 𝒟} [ log p_θ(y | x) ]    (1)
where x is the instruction and retrieval text within the dialogue and y is the response. We use LoRA Hu et al. (2021) to perform parameter-efficient tuning and store adapter weights.
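A minimal sketch of this parameter-efficient setup is shown below, assuming the HuggingFace transformers and peft packages; the checkpoint name, target modules, and LoRA hyperparameters are illustrative rather than the exact values we used.

```python
# Minimal sketch of LoRA-based parameter-efficient fine-tuning of Mistral-7B Instruct.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Training then maximizes the next-token log-likelihood of the response y given the
# instruction and retrieval text x (Eq. 1); only the LoRA adapter weights are stored.
```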
4.5 Connection to Prior Work
SummRAG modifies and expands on Self-RAG Asai et al. (2023) to address the specific needs of summarization in the context of RAG, specifically:
1. Shift the granularity of the critical thinking process from individual sentences to the full retrieval text.
2. Utilize special tokens during dialogue generation with GPT-4 Turbo, then replace tokens with natural language expressions during fine-tuning of the model.
Rather than exploring question-answering knowledge conflicts Jin et al. (2024), our research concentrates on summarization, employing a comprehensive evaluation pipeline. We also create a curated dataset to address the challenges outlined in our evaluation framework.
5 Experiments
To conduct our evaluation, we establish baselines using the autoregressive LLMs GPT-3.5, Claude 2, Jurassic, and LLaMA2-13B in a zero-shot manner where instructions were given to complete tasks within the LogicSumm framework. We then applied advanced prompting techniques with the Mistral-7B Instruct model in both zero-shot and one-shot “Chain of Thought” contexts. Within each aspect, the same prompt was used for all scenarios (Appendix A.3).
5.1 Implementation and Evaluation Metrics
LLM | Scenario 1 | Scenario 2 | Scenario 4 | Scenario 5 | Scenario 6 | Scenario 7
---|---|---|---|---|---|---
Claude 2 | 0.96 | 1.0 | 0.88 | 1.0 | 0.60 | –
Jurassic | 0.98 | 0.58 | 0.84 | 1.0 | 0.26 | –
Llama2 13B Chat | 0.88 | 1.0 | 0.84 | 0.86 | 0.56 | –
GPT-3.5 Turbo | 0.96 | 1.0 | 1.0 | 1.0 | 0.52 | –
Mistral 7B Chat (explicit logical instructions) | 0.29 | 1.0 | 1.0 | 0.14 | 0.58 | –
Mistral 7B Chat (zero-shot Chain of Thought) | 0.88 | 1.0 | 0.97 | 0.80 | – | –
Mistral 7B Chat (one-shot Chain of Thought) | 1.0 | 0.19 | 0.91 | 0.88 | – | –
SummRAG | 1.0 | 1.0 | 0.97 | 1.0 | 0.79 | 0.86
LLM | Scenario 2: BertScore (precision, recall, F1) | Scenario 2: Rouge (1, 2, L) | Scenario 3: BertScore (precision, recall, F1) | Scenario 3: Rouge (1, 2, L)
---|---|---|---|---
GPT-3.5 Turbo | | | |
Llama 13B Chat | | | |
Mistral-7B Instruct | | | |
SummRAG | | | |
Our assessment criteria include not only logical accuracy within the LogicSumm context but also the quality of the summaries, evaluated using BertScore and Rouge 1, 2, L. GPT-4 Turbo assesses the model’s logical correctness. Note that summary quality was assessed only in Scenarios 2 and 3, as the retrieved text in the other scenarios was deemed irrelevant to the user’s text, rendering summary quality evaluation unnecessary. Evaluating logical accuracy in Scenario 3 is also unnecessary because it is a direct summarization of the user’s text. We follow the same procedure to generate test data that was used to create training data. The gold standard summaries are derived from the outputs produced by GPT-4 Turbo.
To examine our model’s performance with multiple top-ranked documents (k > 1), we evaluated summary quality for k = 5, 8, and 10. For k = 5 we generated five relevant documents. For k = 8 and k = 10 we introduced three and five irrelevant documents, respectively. We compared our method against other RAG-based summarization frameworks including Stuff Summarization, Map-Reduce Summarization, and Refine Summarization, utilizing Mistral-7B Instruct as the LLM engine within these frameworks. Additionally, we provided explicit instructions to disregard irrelevant documents. The prompts are in Appendix A.4.
5.2 Results
Results from the LogicSumm scenarios led to two key findings. First, the logical accuracy of Mistral-7B Chat (Table 2) varies significantly based on the selected prompts. When explicit logical instructions guide Mistral-7B Chat, it demonstrates lower accuracy in Scenarios 1 and 5 and higher accuracy in Scenario 2, whereas with one-shot Chain of Thought prompting it returns lower accuracy in Scenario 2 and higher accuracy in Scenarios 1 and 5. In Scenario 6 the Chain of Thought prompting strategy struggles to identify information conflicts. This indicates that devising a prompting strategy that consistently maintains robust performance across different scenarios can be time-consuming and challenging.
Second, after fine-tuning Mistral-7B Chat on our curated training dataset, the logical accuracy across all aspects remains consistently high compared to other models. This underscores the effectiveness of SummRAG. SummRAG creates data without adding new knowledge to the model. This implies that the model possesses sufficient understanding of the logic required but benefits from instruction-tuning to guide its application of this knowledge.
SummRAG enhances robustness while maintaining the quality of summarization, as shown in Table 3. It produces results comparable to GPT-3.5 Turbo and slightly outperforms Llama 13B Chat and Mistral-7B Chat. Given that GPT-4 Turbo’s outputs serve as the gold standard for summarization, this indicates that during the instruction tuning phase SummRAG enables Mistral-7B Chat to match GPT-4 Turbo’s summarization capabilities.
In the multi-document setting, Table 4 demonstrates that as the count of irrelevant documents increases the performance metrics tend to decrease across the other summarization frameworks. This indicates that simply using prompts to disregard irrelevant documents may not be a robust approach. In contrast, internalizing the concept of context demonstrates resilience against the presence of irrelevant documents (e.g., Scenario 7). This is achieved without significantly increasing inference costs, as the model at each inference step depends solely on the context and the text retrieved at that particular step.
Format | 5 Documents: BertScore (P, R, F1) | 5 Documents: Rouge (1, 2, L) | 8 Documents: BertScore (P, R, F1) | 8 Documents: Rouge (1, 2, L) | 10 Documents: BertScore (P, R, F1) | 10 Documents: Rouge (1, 2, L)
---|---|---|---|---|---|---
Stuff | | | | | |
Map-Reduce | | | | | |
Refine | | | | | |
SummRAG | | | | | |
5.3 Supporting Analysis
To better understand the logical reasoning capabilities of an LLM trained with SummRAG, we examine distribution shift from the original to the fine-tuned model. One case involves a scenario with a user query: “I’d like a summary about the American stock market” and the text retrieved relates to Amazon’s stock information. The original model generates a summary based solely on Amazon stock details. In contrast, after being fine-tuned with SummRAG, our model responds by indicating, “The retrieved text does not offer insights into the overall performance of the American stock market but instead concentrates on future projections for Amazon’s stock price.” If the user’s query were changed to: “I’d like a summary about different companies’ performances in the American stock market” our fine-tuned model would recognize the relevance of the Amazon stock information to this broader query and proceed to summarize the text accordingly. Our fine-tuned model possesses a more nuanced understanding of relevance, assessing the content with greater depth and specificity.
In practical applications when segmenting text it is common to use overlapping chunks to prevent discontinuities in information flow. Notably, our model remains robust even when there is unrelated material in the retrieval text. Our model does not mistakenly consider an entire segment irrelevant due to a small amount of unrelated content.
6 Conclusion, Limitation, and Future Work
In this paper, we propose a new evaluation framework, LogicSumm, designed to assess the robustness of LLMs within the context of RAG-based summarization. Based on limitations identified with LogicSumm, we developed SummRAG, a comprehensive system that spans generating training dialogues to fine-tuning LLMs. SummRAG is designed to enhance robustness in LogicSumm’s scenarios. Experiments focusing on logical accuracy and summarization quality confirm the effectiveness of SummRAG. Further improvements must still be considered, however. SummRAG’s performance is linked to the scenarios in LogicSumm, which may not encompass all possible real-life situations. This suggests a need for a more inclusive evaluation framework. Furthermore, the efficacy of our approach is influenced by the quality of the prompts used during dialogue generation, highlighting the potential advantages of developing a more automated strategy for prompt selection in future work.
References
- Asai et al. (2023) Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. arXiv preprint arXiv:2310.11511, 2023.
- Borgeaud et al. (2022) Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pp. 2206–2240. PMLR, 2022.
- Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
- Cai et al. (2018) Deng Cai, Yan Wang, Victoria Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, and Shuming Shi. Skeleton-to-response: Dialogue generation guided by retrieval memory. arXiv preprint arXiv:1809.05296, 2018.
- Chen et al. (2023) Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. Benchmarking large language models in retrieval-augmented generation. arXiv preprint arXiv:2309.01431, 2023.
- Chen & Bansal (2018) Yen-Chun Chen and Mohit Bansal. Fast abstractive summarization with reinforce-selected sentence rewriting. arXiv preprint arXiv:1805.11080, 2018.
- Chowdhery et al. (2023) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.
- Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
- Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
- Goyal et al. (2022) Tanya Goyal, Junyi Jessy Li, and Greg Durrett. News summarization and evaluation in the era of GPT-3. arXiv preprint arXiv:2209.12356, 2022.
- Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pp. 3929–3938. PMLR, 2020.
- Hu et al. (2021) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
- Izacard & Grave (2020) Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282, 2020.
- Izacard et al. (2022) Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. Atlas: Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299, 2022.
- Jiang et al. (2023) Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.
- Jin et al. (2024) Zhuoran Jin, Pengfei Cao, Yubo Chen, Kang Liu, Xiaojian Jiang, Jiexin Xu, Qiuxia Li, and Jun Zhao. Tug-of-war between knowledge: Exploring and resolving knowledge conflicts in retrieval-augmented language models. arXiv preprint arXiv:2402.14409, 2024.
- Kaplan et al. (2020) Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
- Karpukhin et al. (2020) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906, 2020.
- Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
- Lewis et al. (2020) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
- Lieber et al. (2021) Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. Jurassic-1: Technical details and evaluation. White Paper, AI21 Labs, 1:9, 2021.
- Lin (2004) Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-1013.
- Liu & Lapata (2019) Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345, 2019.
- Liu et al. (2022a) Yixin Liu, Alexander R Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, et al. Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. arXiv preprint arXiv:2212.07981, 2022a.
- Liu et al. (2022b) Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. BRIO: Bringing order to abstractive summarization. arXiv preprint arXiv:2203.16804, 2022b.
- Nenkova et al. (2006) Ani Nenkova, Lucy Vanderwende, and Kathleen McKeown. A compositional context sensitive multi-document summarizer: Exploring the factors that influence summarization. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 573–580, 2006.
- Nenkova et al. (2011) Ani Nenkova, Kathleen McKeown, et al. Automatic summarization. Foundations and Trends in Information Retrieval, 5(2–3):103–233, 2011.
- Parvez et al. (2021) Md Rizwan Parvez, Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Retrieval augmented code generation and summarization. arXiv preprint arXiv:2108.11601, 2021.
- Qin et al. (2023) Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv preprint arXiv:2307.16789, 2023.
- Steinberger et al. (2007) Josef Steinberger, Massimo Poesio, Mijail A Kabadjov, and Karel Ježek. Two uses of anaphora resolution in summarization. Information Processing & Management, 43(6):1663–1680, 2007.
- Tang et al. (2023) Liyan Tang, Zhaoyi Sun, Betina Idnay, Jordan G Nestor, Ali Soroush, Pierre A Elias, Ziyang Xu, Ying Ding, Greg Durrett, Justin F Rousseau, et al. Evaluating large language models on medical evidence summarization. NPJ Digital Medicine, 6(1):158, 2023.
- Touvron et al. (2023) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
- Van Veen et al. (2023) Dave Van Veen, Cara Van Uden, Louis Blankemeier, Jean-Benoit Delbrouck, Asad Aali, Christian Bluethgen, Anuj Pareek, Malgorzata Polacin, Eduardo Pontes Reis, Anna Seehofnerova, et al. Clinical text summarization: Adapting large language models can outperform human experts. Research Square, 2023.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
- Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-Thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35(4):24824–24837, 2022.
- Xu et al. (2023) Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng, and Tat-seng Chua. Search-in-the-chain: Towards the accurate, credible and traceable content generation for complex knowledge-intensive tasks. arXiv preprint arXiv:2304.14732, 2023.
- Xu et al. (2024) Shicheng Xu, Liang Pang, Jun Xu, Huawei Shen, and Xueqi Cheng. List-aware reranking-truncation joint model for search and retrieval-augmented generation. arXiv preprint arXiv:2402.02764, 2024.
- Zhang et al. (2020a) Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pp. 11328–11339. PMLR, 2020a.
- Zhang et al. (2020b) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675, 2020b.
- Zhang et al. (2024) Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B Hashimoto. Benchmarking large language models for news summarization. Transactions of the Association for Computational Linguistics, 12:39–57, 2024.
Appendix A Appendix
A.1 Motivating Examples and Testing Prompts
We test FlanT5-xxl Chung et al. (2022) and Llama2 13B Chat with the following prompt:
————————————-
User:
I’d like a summary regarding Peking Duck. Here is the retrieval text: [ChatGPT is similar to the automated chat services found on customer service websites, as people can ask it questions or request clarification to ChatGPT’s replies. The GPT stands for "Generative Pre-trained Transformer," which refers to how ChatGPT processes requests and formulates responses. ]Please give me the summary based on the retrieval text.
FlanT5-xxl:
ChatGPT is similar to the automated chat services found on customer service websites, as people can ask it questions or request clarification to ChatGPT’s replies. The GPT stands for ”Generative Pre-trained Transformer,” which refers to how ChatGPT processes requests and formulates responses.
————————————-
User:
I’d like a summary regarding ChatGPT application in Finance. Here is the retrieval text: [It’s easy to understand why educators feel threatened. ChatGPT is a freakishly capable tool that landed in their midst with no warning, and it performs reasonably well across a wide variety of tasks and academic subjects]Please give me the summary based on the retrieval text.
Llama2 13B Chat:
Certainly! Based on the retrieval text, here is a summary about the potential applications of ChatGPT in finance: ….
————————————-
The examples provided demonstrate that these two LLMs exhibit a lack of robustness in the scenarios we have presented.
A.2 Prompts to Generate Dialogue
Below, we show how we generate the dialogue for Scenarios 1, 4, and 7 (prompts for the other scenarios can be found in our code repository).
Scenario 1:
Here is the one-shot example to show GPT-4 Turbo:
["role": "user", "content": "I need a summary about the incident involving Chen Xing.", "role": "assistant", "content": "[Retrieval]paragraph OpenAI’s ChatGPT is an AI-powered chatbot with natural language processing capabilities, designed to engage in human-like dialogues. As a generative AI model, it employs unsupervised learning algorithms to produce textual, code, and digital image outputs. Despite not being the sole AI chatbot available, ChatGPT’s rapid content generation abilities have contributed significantly to its popularity. Utilizing appropriate prompts with ChatGPT facilitates the acquisition of requisite information. paragraph[Irrelevant] Unfortunately, the retrieved information does not pertain to the user’s inquiry.", "role": "user", "content": "Ok. If there is no related document in the database, could you collect the online comments and show me what people think now? Thanks!", "role": "assistant", "content": "No problem, I will initiate the API request for you. APICollect_OnlineAPI[Argument] Chen Xing[/Argument]"]
Here is the instruction to tell GPT-4 Turbo:
You are instructed to construct the conversation between the assistant and the user where the user requires the assistant to do summarization on some topic and some post-summarization analysis. Some special tokens need to be added to the conversation. You are required to follow the format of the provided example, including the position of special tokens. Here are special tokens:
[1] [No Retrieval] and [Retrieval] are used to decide whether you need to retrieve the text for summarization.
[2] [Irrelevant] and [Relevant] are used to check whether the retrieval text is related to the user query.
The retrieval text will be inside of <paragraph> and </paragraph> and you need to remove " in the retrieval text. The summary will be appended after the [Relevant] or [Irrelevant]. Besides, there is one API called CollectOnline. It is used when the user wants to collect online news and know people’s attitude. The argument of this API is the topic.
Here is one example:
one-shot example
Now, I will only provide you with the topic and the retrieval text. You decide the rest of them and construct the conversation following STRICTLY the same format as the provided example. Note that the user may collect the online news. User will not say the name of the API in the query since they don’t know the existence of API. You need to always use [Retrieval] in this case.
Here is the topic: user topic
Here is the retrieval text: retrieval text
Scenario 4:
Here is the one-shot example to show GPT-4 Turbo:
["role": "user", "content": "Hi, could you summarize the following text for me? Besides, could you also retrieve some related text and see if it can improve the summarization and also check the information conflict", "role": "assistant", "content": "Sure, could you provide the text? ", "role": "user", "content": "user’s own text", "role": "assistant", "content": "[Retrieval] Irrelevant retrieval text[Irrelevant][Continuing to use User’s text]The summarization of user’s text"]
Here is the instruction to tell GPT-4 Turbo:
You are instructed to construct the conversation between the assistant and the user where the user requires the assistant to do summarization on some topic and some post-summarization analysis. Some special tokens need to be added to the conversation. You are required to follow the format of the provided example, including the position of special tokens. Here are special tokens:
[1] [No Retrieval] and [Retrieval] are used to decide whether you need to retrieve the text for summarization.
[2] [Irrelevant] and [Relevant] are used to check whether the retrieval text is related to the user query.
[3] [Continuing to use User’s text] is used to when the retrieval text is not related to user query and appended after [Irrelevant].
The retrieval text will be inside of <paragraph> and </paragraph> and you need to remove " in the retrieval text. The summary will be appended after [Continuing to use User’s text]. Besides, there are a total of three APIs: SentimentAnalysis, CollectOnline, KnowledgeConstruction.
[1] SentimentAnalysis is used when the user wants to know the sentiment of the summary. The argument is the summary.
[2] CollectOnline is used when user wants to collect online news and know people’s attitude. The argument is the topic
[3] KnowledgeConstruction is used when user wants to construct the knowledge graph based on the summary. The argument is the summary.
Here is one example:
one-shot example
Now, I will only provide you the user’s text and the retrieval text and you need to remove " in the user text. You decide the rest of them and construct a DIVERSE conversation following the same format as the provided example. When using [Continuing to use User’s text], MAKE SURE to IGNORE the RETRIEVAL TEXT when doing summarization. You need to always use [Retrieval] in this case. You can use one, two or three APIs.
Here is the user’s text: user’s own text
Here is the retrieval text: retrieval text
Scenario 7:
It should be noted that the conversation for Scenario 7 is structured as a step-by-step construction because GPT-4 Turbo is unable to complete the conversation following a sequential instruction within a single interaction. Depending on the specific step of summarization, we employ one of three designated prompts:
Scenario 7—start prompt:
Here is one example to show GPT-4 Turbo to initiate a conversation when the first document is irrelevant:
["role": "user", "content": "Could you give a summarization regarding ChatGPT application in Finance by summarizing the documents one-by-one? There are total 5 documents to summarize", "role": "assistant", "content": "In the process of summarizing documents one by one. [Retrieval] paragraph First Document paragraph [Irrelevant][Context] No context till now [/Context] The retrieval text is not relevant with the user’s topic 4 documents left to summarize [Topic] user’s topic"]
Here is one example to show GPT-4 Turbo to initiate a conversation when the first document is relevant:
["role": "user", "content": "Could you give a summarization regarding ChatGPT application in Finance by summarizing the documents one-by-one? There are total 5 documents to summarize", "role": "assistant", "content": "In the process of summarizing documents one by one. [Retrieval] paragraph First Document paragraph [Relevant][Context] No context till now [/Context] Summarization of First Document 4 documents left to summarize [Topic] user’s topic"]
Here is the instruction to tell GPT-4 Turbo to initiate a conversation:
You are instructed to construct the start of the conversation between the assistant and the user, where the user requires the assistant to do summarization document by document on some topic. Some special tokens need to be added to the conversation. You are required to follow the format of the provided example, including the position of special tokens. Here are the special tokens:
[1] The information inside [Context] and [/Context] should be No context till now (You could diversify this sentence)
[2] [Relevant] and [Irrelevant] are used to check whether the retrieval text inside <paragraph> and </paragraph> is relevant with the user query.
[3] Content inside <Count> and </Count> is to check how many documents are left to summarize.
[4] [Topic] is used to keep the topic of the user query.
The summarization should be appended after [/Context]. The retrieval text at each step should be inside of <paragraph> and </paragraph>.
Here is a relevant example: Relevant example shown above
Here is a not relevant example: Irrelevant example shown above
Now, you are instructed to follow the above examples to create the start of the conversation. There are total 5 documents, the topic is xx, and the first document is the following:
Content of First Document
The response must only be a list of four dictionaries without saying any other things.
Scenario 7—mid prompt:
Here is one example to show GPT-4 Turbo to create the middle part of the conversation:
["role": "assistant", "content": "In the process of summarizing documents one by one. [Retrieval] paragraph First Document paragraph [Relevant][Context] No context till now [/Context] Summarization of First Document 4 documents left to summarize [Topic] user’s topic", "role": "assistant", "content": "In the process of summarizing documents one by one. [Retrieval] paragraph Second Document paragraph [Irelevant][Context] Summarization of First Document [/Context] Summarization of First Document 3 documents left to summarize [Topic] user’s topic"]
Here is the instruction to tell GPT-4 Turbo to create the middle part of the conversation
You are instructed to construct the conversation between the assistant itself, and its goal is to do summarization document by document on some topic. Some special tokens need to be added to the conversation. You are required to follow the format of the provided example, including the position of special tokens. Here are the special tokens:
[1] The information inside [Context] and [/Context] is the context you need to rely on when you do the summarization by combining with the retrieval text.
[2] [Relevant] and [Irrelevant] are used to check whether the retrieval text inside <paragraph> and </paragraph> is relevant with the user query.
[3] Content inside <Count> and </Count> is to check how many documents are left to summarize.
[4] [Topic] is used to keep the topic of the user query. Here is one example:
One-shot example shown above
Now, I will provide you with the first piece of the conversation. You need to keep it UNCHANGED. Here is the first piece of the conversation:
First Piece of the Conversation
and here is the new retrieval text:
New Retrieval text to be processed
Construct the new piece of the conversation: the Context should be kept unchanged if [Irrelevant] appears in the first piece of the conversation, and needs to be changed to the summarization in the first piece if [Relevant] appears in the first piece of the conversation. If the new retrieval text is still irrelevant to the user query, the summarization should be the same as the context; if it is relevant, then the summarization should consider both the content of the context and the retrieval text (DO NOT LOSE ANY INFORMATION IN THE CONTEXT)
The summarization should be appended after [/Context] !!!!(DO NOT LOSE ANY INFORMATION IN THE CONTEXT, EVEN IF IT EXTENDS THE LENGTH OF THE SUMMARIZATION. IT IS VERY IMPORTANT)!!!! You MUST RETURN ME A LIST OF TWO DICTIONARIES WITHOUT SAYING ANY OTHER THINGS
Scenario 7—end prompt:
Here is one example to show GPT-4 Turbo to end a conversation:
["role": "assistant", "content": "In the process of summarizing documents one by one. [Retrieval] paragraph Second Document paragraph [Irelevant][Context] Summarization so far [/Context] Final Summarization 0 documents left to summarize [Topic] user’s topic", "role": "assistant", "content": "Here is the final summarization: Final Summarization"]
Here is the instruction to tell GPT-4 Turbo to end the conversation
You are instructed to construct the final step of the conversation between the assistant itself, and its goal is to do summarization document by document on some topic. Some special tokens need to be added to the conversation. You are required to follow the format of the provided example, including the position of special tokens. Here are the special tokens:
[1] The information after [Context] should be the information you should not forget when you do the summarization.
[2] [Relevant] and [Irrelevant] are used to check whether the retrieval text inside <paragraph> and </paragraph> is relevant with the user query.
[3] Content inside <Count> and </Count> is to check how many documents are left to summarize.
[4] [Topic] is used to keep the topic of the user query.
Here is one example:
One-shot example shown above
Now, I provide you with the first piece of the conversation. You need to keep it UNCHANGED. Here is the first piece of the conversation:
First piece of the conversation
All you need to do is to generate the next piece of the conversation. If [Irrelevant] appears in the first piece, the final summarization is the context; if [Relevant] appears, the final summarization is the summarization from the last part, which is the content after [/Context]
You need to return me both the first piece and your generated conversation. You MUST RETURN ME A LIST OF TWO DICTIONARIES WITHOUT SAYING ANY OTHER THINGS.
A.3 Evaluation Prompts for Benchmarking Other Large Language Models in Experiments
Zero-shot Prompts for Aspect 1:
[INST] You are given following task based on the given text. Here is the text: text. You first check if the text is relevant with the user_topic. Using following answer format:
(1) if it is relevant, return me using following format "The retrieval text is relevant with user’s topic. Start summarizing on it: content of the summarization"
(2) if it is not relevant, return me using following format "The retrieval text is not relevant with user’s topic"
Here is your reply: [/INST]
Chain of Thought Prompts for Aspect 1:
[INST] You are given following task based on the given text. Here is the text: text. You first check if the text is relevant with the user_topic. You are required to finish the task step by step:
The first step is to determine the relevancy of the retrieval text to the user topic.
Then the second step is based on the result of the relevancy:
(1) if it is relevant, return me using following format "The retrieval text is relevant with user’s topic. Start summarizing on it: content of the summarization"
(2) if it is not relevant, return me using following format "The retrieval text is not relevant with user’s topic"
Here is your reply: [/INST]
One-shot Chain of Thought Prompts for Aspect 1:
[INST] You are given following task based on the given text. Here is the text: text. You first check if the text is relevant with the user_topic. You are required to finish the task step by step:
The first step is to determine the relevancy of the retrieval text to the user topic.
Then the second step is based on the result of the relevancy:
(1) if it is relevant, return me using following format "The retrieval text is relevant with user’s topic. Start summarizing on it: content of the summarization"
(2) if it is not relevant, return me using following format "The retrieval text is not relevant with user’s topic"
Here is an example:
The user topic would like to know the summary about ChatGPT application in Finance. The retrieval text is ChatGPT is a chatbot developed by OpenAI and launched on November 30, 2022. Based on a large language model, it enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language . Then, you need to output it is not relevant since the user asks the specific finance application of ChatGPT but the retrieval text reflects the ChatGPT introduction.
Here is your reply: [/INST]
Zero-shot prompts for Aspect 2:
[INST] You are a summarization assistant to summarize following text and return ONLY the summary to me. Here is the text user_text [/INST]
Zero-shot Prompts for Aspect 3:
[INST] You are given two tasks based on the given two texts. Here is the user text: user_text. Here is the retrieval text: retrieval_text. You first check if the retrieval text is relevant with the user text, and if it is relevant, check if there is any information conflict between the retrieval text and the user text. Using following format:
(1) if they are not relevant, you should return to me: the user text is not relevant with the retrieval text. Start summarizing only on user text: content of the summarization
(2) if they are relevant but the retrieval text has information conflict with the user text, you only need to return "There is information conflict between the user text and the retrieval text"
(3) if they are relevant and there is no information conflict between them, you should return to me: the user text is relevant with the retrieval text and there is no information conflict. Start summarizing on both retrieval text and user text: content of the summarization. [/INST]
Chain of Thought Prompts for Aspect 3:
[INST] You are instructed to finish following text step by step. Here is the user text: user_text. Here is the retrieval text: retrieval_text. The first step is to check if the retrieval text is relevant with the user text. Based on the check result, you are ready to implement the following step. (1) if they are not relevant, you should return to me: the user text is not relevant with the retrieval text. Start summarizing only on user text: content of the summarization
(2) if they are relevant but the retrieval text has information conflict with the user text, you only need to return "There is information conflict between the user text and the retrieval text"
(3) if they are relevant and there is no information conflict between them, you should return to me: the user text is relevant with the retrieval text and there is no information conflict. Start summarizing on both retrieval text and user text: content of the summarization. [/INST]
One-shot Chain of Thought Prompts for Aspect 3:
[INST] You are instructed to finish following text step by step. Here is the user text: user_text. Here is the retrieval text: retrieval_text. The first step is to check if the retrieval text is relevant with the user text. Based on the check result, you are ready to implement the following step. (1) if they are not relevant, you should return to me: the user text is not relevant with the retrieval text. Start summarizing only on user text: content of the summarization
(2) if they are relevant but the retrieval text has information conflict with the user text, you only need to return "There is information conflict between the user text and the retrieval text"
(3) if they are relevant and there is no information conflict between them, you should return to me: the user text is relevant with the retrieval text and there is no information conflict. Start summarizing on both retrieval text and user text: content of the summarization.
Here is one example: The user text is The Ragdoll is a breed of cat with a distinct colorpoint coat and blue eyes. Its morphology is large and weighty, and it has a semi-long and silky soft coat. American breeder Ann Baker developed Ragdolls in the 1960s. They are best known for their docile, placid temperament and affectionate nature.
The retrieval text is A domestic short-haired cat is a cat possessing a coat of short fur, not belonging to any particular recognised cat breed. In the United Kingdom, they are colloquially called moggies. Then, in this example, your reply should be: The user text is not relevant with the retrieval text. Start summarizing only on user text: Ragdolls are large, gentle cats with colorpoint coats and blue eyes.[/INST]
A.4 Evaluation Prompts for Benchmarking Other Summarization Frameworks
—————————————
Prompts for Stuff summarization
Write a summary of the following text regarding topic topic and skip irrelevant text with respect to this topic.
Here is the text: text
—————————————
Prompts for Map-Reduce summarization
Map prompt:
Write a summary of this chunk of text regarding topic topic that includes the main points and any important details (skip irrelevant text with respect to this topic.)
Here is the text: text
Reduce prompt:
Write a concise summary of the following text delimited by triple backquotes. ```text```
Here is your summary:
—————————————
Prompts for Refine summarization
Question prompt:
Provide a summary of the following text with respect to topic topic (skip irrelevant text with respect to the topic):
TEXT: text
SUMMARY:
Refine prompt:
Write a concise summary of the following text delimited by triple backquotes.
```text```
SUMMARY:
A.5 Transformation Text for Special Tokens and System Prefixes
Special Token | Natural Text Alternative
---|---
[Retrieval] | Here is the retrieval text to be used for summarization
[No Retrieval] | There is no need to retrieve since user provides its own text
[Relevant] | The retrieval text is relevant with the user’s text
[Irrelevant] | The retrieval text is irrelevant with the user’s text
[Continue to Use User’s Text] | The retrieval text is not relevant with the user’s text. Ignore it and use the user’s text to form the summarization as follows:
[Information Conflict] | Although the retrieval text is relevant with user’s text, there is an information conflict between user’s text and the retrieved text.
[Augmenting User’s Text] | The retrieval text is relevant with user’s text. MUST COMBINE user’s text and retrieved text to write the final summarization.
[Context] | Context to be used for the summarization
[/Context] | If the retrieval text is not relevant with the user’s topic, keep the summarization at this step same as the context;
[/Context] | If the retrieval text is relevant with the user’s topic, combine retrieval text with context information. Here is the summarization at this step:
<Count>, </Count> | Start counting how many documents left to summarize. Current summarization step you are at:
[Topic] | Here is the topic to be kept to check if retrieval text is relevant with the user’s query:
Aspect | System Prefix
---|---
1 | You are a summarization assistant to retrieve the text based on user’s topic and then do the summarization. |
2 | You are a summarization assistant to do the summarization based on user’s text. |
3 | You are a summarization assistant to decide if combining the retrieval text with user’s text to do the summarization based on its relevancy: |
4 | You are a summarization assistant to summarize the documents one by one. |