
1 The Hong Kong University of Science and Technology, Hong Kong, China
[email protected], {wwangbw, yqsong}@cse.ust.hk
2 The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
[email protected], [email protected]

Chain-of-Choice Hierarchical Policy Learning for Conversational Recommendation

Wei Fan¹, Weijia Zhang², Weiqi Wang¹, Yangqiu Song¹, Hao Liu(✉)²
Abstract

Conversational Recommender Systems (CRS) elicit user preferences via multi-round interactive dialogues, ultimately navigating towards precise and satisfactory recommendations. However, contemporary CRS are limited to asking binary or multiple-choice questions based on a single attribute type (e.g., color) per round, which causes excessive rounds of interaction and diminishes the user's experience. To address this, we propose a more realistic and efficient conversational recommendation problem setting, called Multi-Type-Attribute Multi-round Conversational Recommendation (MTAMCR), which enables CRS to ask multiple-choice questions covering multiple types of attributes in each round, thereby improving interactive efficiency. Moreover, by formulating MTAMCR as a hierarchical reinforcement learning task, we propose a Chain-of-Choice Hierarchical Policy Learning (CoCHPL) framework to enhance both questioning efficiency and recommendation effectiveness in MTAMCR. Specifically, a long-term policy over options (i.e., ask or recommend) determines the action type, while two short-term intra-option policies sequentially generate the chain of attributes or items through multi-step reasoning and selection, optimizing the diversity and interdependence of the questioned attributes. Finally, extensive experiments on four benchmarks demonstrate the superior performance of CoCHPL over prevailing state-of-the-art methods.

Keywords:
Conversational Recommendation · Hierarchical Reinforcement Learning · Graph Representation Learning

Figure 1: Example of different conversational recommendation settings: (a) binary choices, (b) single-type attributes, (c) multi-type attributes.

1 Introduction

Compared to traditional recommender systems, a conversational recommender system (CRS) [9] acts as an online salesperson that elicits user preferences [14, 8] through engaging question-and-answer dialogues [6, 19] to provide tailored recommendations. This conversational approach allows for explicit preference reasoning by actively asking questions about attributes and making desirable recommendations based on the user's real-time feedback.

Recent research has explored different settings to create more realistic conversational recommendation scenarios. Existing studies allow CRS to ask either binary (yes/no) questions for each selected attribute [9, 6, 7, 4] or multiple choice questions falling into one selected attribute type (e.g., brand, color) in each round [18]. However, existing CRS using binary choices [4] or single-type attribute questions [18] can be repetitive and time-consuming, causing frustration for the user and decreasing the overall recommendation success rate. Consider an example illustrated in Figure 1, where the user wants shoes with the attributes “Basketball,” “Nike,” and “Black.” In Figure 1(a) and Figure 1(b), the question choices are restricted to one attribute type per round, necessitating at least three rounds to extract all the preferred attributes. Our work studies a more realistic scenario, namely Multi-Type-Attribute Multi-round Conversational Recommendation (MTAMCR) as depicted in Figure 1(c), which enables CRS to generate questions encompassing diverse yet dependent attribute types. By optimizing the combination of attribute types, we can enhance the efficiency of reasoning and questioning, ultimately increasing the success rate of recommendations.

Previous works [9, 6, 7, 4, 18, 20] have formulated conversational recommendation as a Markov Decision Process (MDP) and introduced reinforcement learning (RL) to choose the asked attributes and recommended items. However, these approaches encounter two main limitations when applied to the new MTAMCR scenario: 1) Diversity of attributes: previous works select attribute instances within a single attribute type, exploring only one type of the user's attribute preferences (e.g., color) per round. Thus, these methods need more rounds to discover and capture all the attributes of the user's target item. 2) Dependency between attributes: existing RL-based approaches directly select one or multiple attribute instances based on the top-K Q-values without considering the influence of previous attribute choices on subsequent selections. As a result, such approaches usually neglect the dependency between selections, leading to suboptimal combinations of attributes.

To address these two limitations, we propose a chain-of-choice generation approach, as illustrated in Figure 2. This approach infers the next choice by conditioning on the previous choices and predicting the user's feedback to them. For example, if the user wants basketball shoes and the first choice in a given turn is “Nike,” selecting the color “Black” as the next choice is more effective than presenting a brand selection between “Nike” and “Adidas.” Since we predict that the user will accept the attribute “Nike” (many individuals highly prefer Nike basketball shoes), it is strategically advantageous to explore color preferences rather than spend more time deliberating over brand preferences. Building on the chain of choices, we propose the Chain-of-Choice Hierarchical Policy Learning (CoCHPL) framework, which formulates MTAMCR as a hierarchical reinforcement learning task. In each turn, CoCHPL employs a long-term policy to select the option (i.e., ask or recommend) and generates the chain of choices (i.e., attributes or items) step by step via different short-term intra-option policies. To model the transitions of the choice chain and determine the optimal moment for terminating the current choice generation, we utilize learnable functions that predict feedback for the ongoing choice chain and infer the next state for the next choice selection.

Our major contributions can be summarized as follows:

1) We introduce the MTAMCR scenario, which allows the CRS agent to ask questions spanning the same or different attribute types in each round, enabling more realistic and effective user interaction.

2) We propose the CoCHPL framework for the MTAMCR scenario by formulating MTAMCR as a hierarchical RL task, which comprises an option selection task and chain-of-choice generation tasks for different options (action types).

3) We conduct extensive experiments on four benchmark datasets, and the results demonstrate substantial improvements in both performance and generative capacity over all comparison methods.


Figure 2: During each turn, the agent selects a choice and then predicts the user's feedback to infer the subsequent state. With the predicted state, the agent selects the next choice, repeating this process until the round terminates.

2 Related Work

2.1 Multi-round Conversational Recommendation

Multi-round conversational recommendation (MCR) is a prevalent setting in conversational recommendation, characterized by iterative questioning and recommendations until user satisfaction or a dialogue round limit is reached [4, 7, 15]. Early models like CRM [9] utilized reinforcement learning in single-round settings, later extended to MCR by EAR [6]. SCPR [7] considers MCR as interactive path reasoning on a graph, while UNICORN [4] proposes a unified framework based on a dynamic weighted graph. DAHCR [20] introduces a hierarchical approach for action type and attribute/item selection. MCMIPL [18] proposes Multiple-Choice MCR to enable multiple-choice questions within an attribute type per turn. Instead of single-type attribute queries per turn, our work introduces a more realistic setting, Multi-Type-Attribute Multi-round Conversational Recommendation.

2.2 The Options Framework

The options framework by Sutton et al. [11] extends the conventional reinforcement learning paradigm [16] by incorporating temporally extended actions, or options. Each option $\omega \in \Omega$ comprises an intra-option policy $\pi_{\omega}$ for action selection, a state transition function $\mathcal{T}_{\omega}$, and a termination function $\beta_{\omega}$ that decides when to terminate the current option. This hierarchical structure aids in efficient exploration and task decomposition. The option-critic architecture, introduced by [1], integrates policy gradient methods with the options framework [10], enabling concurrent learning of intra-option policies and termination conditions, and facilitating subgoal identification without additional reward schemes.

3 Preliminary

3.0.1 Multi-Type-Attribute Multi-round Conversational Recommendation.

Generally [6, 7, 18], a CRS engages in a dialogue with users, asking questions about attributes or providing recommendations in each round. Users are assumed to have definite preferences toward specific attributes and items, and they express these preferences by accepting or rejecting the attributes or items mentioned by the CRS. The conversation continues until a successful recommendation is made or a pre-set round limit is reached. Our focus is a more realistic CRS paradigm, Multi-Type-Attribute Multi-round Conversational Recommendation, in which the CRS can ask the user about multiple types of attributes per turn.

In MTAMCR, we denote the user set by $\mathcal{U}$, the item set by $\mathcal{V}$, and the attribute set by $\mathcal{P}$. Each item $v \in \mathcal{V}$ is associated with attributes $\mathcal{P}_{v}$, and each attribute $p \in \mathcal{P}_{v}$ belongs to a specific type. At the start of each session, a user $u \in \mathcal{U}$ targets an acceptable item $v$ and indicates a preferred attribute $p_{0} \in \mathcal{P}_{v}$ to the CRS. Then, in each turn $T$, the CRS is free to query the user for attribute preferences or recommend items from the candidates through multiple-choice questions:

$C_{T} = \{a_{1}, a_{2}, \dots, a_{n}\}, \qquad (1)$

where $a$ represents a choice (i.e., an attribute or item), and $C_{T}$ may comprise multiple attributes $\{p_{1}, p_{2}, \dots, p_{n}\}$ or items $\{v_{1}, v_{2}, \dots, v_{n}\}$. The user $u$ responds to each choice in $C_{T}$ presented by the CRS, either accepting or rejecting it.
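For concreteness, the following minimal sketch (not part of the released implementation; the user simulator, the "attr:"/"item:" identifiers, and the example values are illustrative assumptions) shows how a simulated user might respond to one multi-type-attribute choice set $C_T$:

```python
# A minimal sketch of the MTAMCR interaction protocol described above.
# The user simulator and all identifiers are illustrative assumptions.

def simulate_turn(choice_set, target_attributes, target_item):
    """Return per-choice feedback (True = accept) for one turn."""
    feedback = {}
    for choice in choice_set:
        if choice.startswith("item:"):
            feedback[choice] = (choice == target_item)
        else:  # attribute choice, possibly of a different type than the others
            feedback[choice] = (choice in target_attributes)
    return feedback

# Example: one multi-type-attribute question C_T mixing brand and color.
target_attributes = {"attr:Basketball", "attr:Nike", "attr:Black"}
C_T = ["attr:Nike", "attr:Black", "attr:Adidas"]
print(simulate_turn(C_T, target_attributes, target_item="item:42"))
# {'attr:Nike': True, 'attr:Black': True, 'attr:Adidas': False}
```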


Figure 3: Overview of Chain-of-Choice Hierarchical Policy Learning.

4 Chain-of-Choice Hierarchical Policy Learning

In this section, we describe our method, CoCHPL, as illustrated in Figure 3. CoCHPL consists of four key components: an Option-based MDP Environment that provides the dynamic graph-structured state together with the candidate options and choices for the agent; a Dynamic-Graph State Representation module that encodes the graph-based state into a latent space to capture key information and relationships; a Long Policy over Options that determines the option (i.e., ask or recommend) per turn; and a Short Intra-Option Policy that generates the chain of choices via multi-step reasoning and selection.

4.1 Option-based MDP Environment

By formulating MTAMCR as a hierarchical reinforcement learning task, we model not only the low-level actions (choices) but also the high-level actions (options) within each turn, which helps the CRS learn different dimensions of temporal abstraction. The original environment consists of a state space $\mathcal{S}$, an action space $\mathcal{A}$, a transition function $\mathcal{T}: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$, and a reward function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$. The option-based MDP environment extends this MDP to the tuple $(\mathcal{S}, \Omega, \mathcal{A}, \mathcal{T}, \mathcal{R})$ by introducing options $\Omega = \{\omega_{ask}, \omega_{rec}\}$, which are the alternative action types the agent can choose from at each turn.

To model the state transitions between the multiple choices within the same turn, we assume that the user answers the question choices one by one and introduce the timestep $t$ to describe the intrinsic state of turn $T$ as follows:

$s_{T} = \{s_{t}, s_{t+1}, \dots, s_{t+n}\}, \quad s_{T+1} = \{s_{t+n}, s_{t+n+1}, \dots, s_{t+n+m}\}, \qquad (2)$

where $n$ and $m$ are the numbers of choices in turns $T$ and $T+1$, respectively. If the user responds to the first choice $a_{t}$ at turn $T$, the state transitions from $s_{t}$ to $s_{t+1}$.

4.1.1 State-Option Space.

Before each turn $T$, the CRS selects an option $\omega_{T}$ from $\Omega$ to decide the action type and extend the state space. The state-option pair $(s_{t}, \omega_{T})$ at timestep $t$ in turn $T$ contains $\{u, \mathcal{V}^{(t)}, \mathcal{P}^{(t)}, \omega_{T}\}$, where $\mathcal{V}^{(t)} = \mathcal{V}^{(t)}_{rej} \cup \mathcal{V}^{(t)}_{cand}$ and $\mathcal{P}^{(t)} = \mathcal{P}^{(t)}_{acc} \cup \mathcal{P}^{(t)}_{rej} \cup \mathcal{P}^{(t)}_{cand}$. $\mathcal{P}^{(t)}_{acc}$ denotes all the accepted attributes, $\mathcal{V}^{(t)}_{rej}$ and $\mathcal{P}^{(t)}_{rej}$ denote all the rejected items and attributes, and $\mathcal{V}^{(t)}_{cand}$ and $\mathcal{P}^{(t)}_{cand}$ denote the candidate items and attributes.

4.1.2 Action Space.

For $(s_{t}, \omega_{rec})$ at timestep $t$, the candidate action space $\mathcal{A}_{t}$ is $\mathcal{V}^{(t)}_{cand}$, where $\mathcal{V}_{cand}^{(t)} = \mathcal{V}_{\mathcal{P}_{acc}^{(t)}} \backslash \mathcal{V}_{rej}^{(t)}$ and $\mathcal{V}_{\mathcal{P}_{acc}^{(t)}}$ denotes all the items that contain all the accepted attributes. For the pair $(s_{t}, \omega_{ask})$, the candidate attribute space is $\mathcal{P}_{cand}^{(t)} = \mathcal{P}_{\mathcal{V}_{cand}^{(t)}} \backslash (\mathcal{P}_{acc}^{(t)} \cup \mathcal{P}_{rej}^{(t)})$, where $\mathcal{P}_{\mathcal{V}_{cand}^{(t)}}$ denotes the attributes of all the candidate items.
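A hedged sketch of these candidate-space definitions using plain Python set operations is given below; the item-to-attribute mapping and all identifiers are illustrative assumptions rather than the paper's actual data structures.

```python
# Candidate item/attribute spaces as set operations, mirroring the definitions above.
# item_attrs maps each (illustrative) item id to its attribute set.

def candidate_items(item_attrs, acc_attrs, rej_items):
    """V_cand = items containing all accepted attributes, minus rejected items."""
    return {v for v, attrs in item_attrs.items()
            if acc_attrs <= attrs and v not in rej_items}

def candidate_attrs(item_attrs, cand_items, acc_attrs, rej_attrs):
    """P_cand = attributes of candidate items, minus accepted/rejected attributes."""
    attrs_of_cand = set().union(*(item_attrs[v] for v in cand_items)) if cand_items else set()
    return attrs_of_cand - acc_attrs - rej_attrs

item_attrs = {1: {"Nike", "Black", "Basketball"}, 2: {"Adidas", "White", "Running"}}
V_cand = candidate_items(item_attrs, acc_attrs={"Nike"}, rej_items=set())
P_cand = candidate_attrs(item_attrs, V_cand, acc_attrs={"Nike"}, rej_attrs=set())
print(V_cand, P_cand)  # {1} {'Black', 'Basketball'}
```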

4.1.3 Transition.

For the state-option pair $(s_{t}, \omega_{ask})$, after the selection of $a_{t}$ at timestep $t$, if the CRS predicts that the user accepts $a_{t}$, then $s_{t}$ transitions to $s_{t+1}$ with $\mathcal{P}_{acc}^{(t+1)} = \mathcal{P}_{acc}^{(t)} \cup \{a_{t}\}$. If rejected, $\mathcal{P}_{rej}^{(t+1)}$ is updated to $\mathcal{P}_{rej}^{(t)} \cup \{a_{t}\}$. The next candidate action space is updated to $\mathcal{P}_{cand}^{(t+1)} = \mathcal{P}_{cand}^{(t)} \backslash \{a_{t}\}$. The transition is analogous for the pair $(s_{t}, \omega_{rec})$.

4.1.4 Reward.

To promote a universal and uniform distribution of rewards $\mathcal{R}$, we adopt a simplified reward structure: (1) $r_{ask\_suc}$: a slight reward when the user accepts the asked attribute; (2) $r_{ask\_fail}$: a slight penalty when the user rejects the asked attribute; (3) $r_{rec\_fail}$: a slight penalty when the user rejects the recommended item; (4) $r_{rec\_suc}$: a strong reward when the user accepts the recommended item.
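The following sketch illustrates the ask-option transition and reward bookkeeping described above, assuming a dictionary-based state; the concrete reward constants mirror the slight/strong scheme and are configuration choices (the values used in our experiments appear in Section 5.1.4).

```python
# A minimal sketch of the option-based transition and reward for the "ask" option.
# The state layout and reward constants are illustrative assumptions.

R_ASK_SUC, R_ASK_FAIL, R_REC_FAIL, R_REC_SUC = 0.01, -1e-4, -1e-4, 1.0

def step_ask(state, attr, accepted):
    """Advance the intrinsic timestep after the user answers one asked attribute."""
    if accepted:
        state["P_acc"].add(attr)
        reward = R_ASK_SUC
    else:
        state["P_rej"].add(attr)
        reward = R_ASK_FAIL
    state["P_cand"].discard(attr)   # the asked attribute leaves the candidate space
    return state, reward

state = {"P_acc": {"Nike"}, "P_rej": set(), "P_cand": {"Black", "Adidas"}}
state, r = step_ask(state, "Black", accepted=True)
print(state["P_acc"], r)  # {'Nike', 'Black'} 0.01
```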

4.2 Dynamic-Graph State Representation Learning

We explicitly employ a dynamic-graph state representation in the option-based MDP environment to incorporate user-preferred attributes and candidate items. This representation captures both the current conversation and its historical context, facilitating the modeling of conversational recommendation as an interactive path reasoning problem on a graph.

4.2.1 Graph Construction.

We denote the dynamic undirected graph as $G_{u} = (\mathbf{V}, \mathbf{A})$. The node set $\mathbf{V}$ consists of the related user, items, and attributes, and $\mathbf{A}$ is an $n \times n$ adjacency matrix whose entries represent the edge weights between nodes in $\mathbf{V}$. At each timestep in the conversation, $G_{u}$ dynamically updates its nodes and adjacency matrix as follows:

$\mathbf{V}^{(t)} = \{u\} \cup \mathcal{P}_{acc}^{(t)} \cup \mathcal{P}_{cand}^{(t)} \cup \mathcal{P}_{rej}^{(t)} \cup \mathcal{V}_{cand}^{(t)} \cup \mathcal{V}_{rej}^{(t)}, \qquad (3)$

$\mathbf{A}_{i,j}^{(t)} = \begin{cases} w_{v}^{(t)}, & \text{if } n_{i} = u,\ n_{j} \in \mathcal{V}_{cand}^{(t)}, \\ 1, & \text{if } n_{i} \in \mathcal{V}_{cand}^{(t)},\ n_{j} \in \mathcal{P}, \\ 0, & \text{otherwise}, \end{cases} \qquad (4)$

where $w_{v}^{(t)}$ denotes the user's preference score for the item $v$, calculated as:

$w_{v}^{(t)} = \sigma\big(e_{u}^{\top} e_{v} + \sum\nolimits_{p \in \mathcal{P}_{acc}^{(t)}} e_{v}^{\top} e_{p} - \sum\nolimits_{p \in \mathcal{P}_{rej}^{(t)}} e_{v}^{\top} e_{p}\big), \qquad (5)$

where $\sigma(\cdot)$ is the sigmoid function and $e_{u}, e_{v}, e_{p}$ are pre-trained embeddings initialized as in [2]. We can also calculate the preference score for candidate attributes at the current state: $w_{p}^{(t)} = \sigma\big(e_{u}^{\top} e_{p} + \sum\nolimits_{p' \in \mathcal{P}_{acc}^{(t)}} e_{p'}^{\top} e_{p} - \sum\nolimits_{p' \in \mathcal{P}_{rej}^{(t)}} e_{p'}^{\top} e_{p}\big)$. With the calculated weights, we can reduce the action space size for different options by selecting the top-$K$ candidates during action selection (Sections 4.3 and 4.4).
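The preference score in Eq. (5) can be sketched as follows, assuming the pre-trained TransE-style embeddings are available as NumPy vectors; the vector dimensions and random initialization are purely illustrative.

```python
# A sketch of the preference score w_v in Eq. (5); the attribute score w_p is analogous,
# replacing the item embedding e_v with the candidate attribute embedding.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def item_score(e_u, e_v, acc_embs, rej_embs):
    """w_v = sigma(e_u . e_v + sum_acc e_v . e_p - sum_rej e_v . e_p)."""
    s = e_u @ e_v
    s += sum(e_v @ e_p for e_p in acc_embs)
    s -= sum(e_v @ e_p for e_p in rej_embs)
    return sigmoid(s)

rng = np.random.default_rng(0)
e_u, e_v = rng.normal(size=64), rng.normal(size=64)   # illustrative 64-d embeddings
acc = [rng.normal(size=64)]                           # one accepted attribute
print(item_score(e_u, e_v, acc, rej_embs=[]))
```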

4.2.2 State Representation.

By incorporating both the current conversation and historical conversation data, we construct a comprehensive global knowledge graph denoted as $G$, which captures the evolving preferences of users over time. To effectively leverage the correlation information among users, items, and attributes represented in the graph, we implement a two-layer graph convolutional network (GCN) [5, 17] as follows:

$e_{s}^{(l+1)} = \operatorname{ReLU}\big(\sum\nolimits_{i \in \mathcal{N}_{s}} D^{-\frac{1}{2}} \bm{A} D^{-\frac{1}{2}} W^{(l)} e_{i}^{(l)} + B^{(l)} e_{s}^{(l)}\big), \qquad (6)$

where $l$ is the layer number, $\mathcal{N}_{s}$ denotes the neighbors of $n_{s}$, and $D$ denotes the degree matrix with $D_{ii} = \sum\nolimits_{j} \bm{A}_{i,j}$. $B^{(l)}$ and $W^{(l)}$ are both trainable network parameters. To further extract the sequential information of the current conversation, we use a single-layer multi-head attention network [12] and calculate the final state representation by average pooling:

$\mathcal{P}_{acc}^{*} = \operatorname{LayerNorm}\big(\operatorname{FNN}(\operatorname{MultiHead}(\mathcal{P}_{acc}^{(t)}) + \mathcal{P}_{acc}^{(t)})\big), \qquad (7)$

$f_{\theta_{S}}(s_{t}) = \operatorname{AvgPool}(\mathcal{P}_{acc}^{*}). \qquad (8)$

After the sequential learning, the final state representation is generated, where $\theta_{S}$ denotes all parameters of the dynamic-graph state representation module.
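A compact PyTorch sketch of this state encoder, assuming a dense normalized adjacency and illustrative dimensions (it is not the exact released architecture), is shown below:

```python
# Two GCN layers over the normalized adjacency, one multi-head self-attention layer
# over the accepted-attribute sequence, and average pooling (Eqs. 6-8).
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.gcn = nn.ModuleList([nn.Linear(dim, dim) for _ in range(2)])        # W^(l)
        self.self_loop = nn.ModuleList([nn.Linear(dim, dim) for _ in range(2)])  # B^(l)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, node_emb, adj, acc_idx):
        # symmetric normalization D^{-1/2} A D^{-1/2}
        deg = adj.sum(-1).clamp(min=1e-6)
        norm_adj = adj / torch.sqrt(deg.unsqueeze(0) * deg.unsqueeze(1))
        e = node_emb
        for w, b in zip(self.gcn, self.self_loop):
            e = torch.relu(norm_adj @ w(e) + b(e))          # Eq. (6)
        p_acc = e[acc_idx].unsqueeze(0)                     # accepted-attribute sequence
        h, _ = self.attn(p_acc, p_acc, p_acc)
        p_star = self.norm(self.ffn(h + p_acc))             # Eq. (7)
        return p_star.mean(dim=1).squeeze(0)                # Eq. (8): f_{theta_S}(s_t)

enc = StateEncoder()
n = 10
adj = (torch.rand(n, n) > 0.5).float()
state = enc(torch.randn(n, 64), adj, acc_idx=torch.tensor([1, 3]))
print(state.shape)  # torch.Size([64])
```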

Parameter initialization: $\theta_{Q}$, $\theta_{\beta}$, $\theta_{\mathcal{T}}$, $\theta_{S}$
for $episode = 1$ to $K$ do
       Initialization: user $u \in \mathcal{U}$, target item $v \in \mathcal{V}_{u}$, initial attribute $p_{0} \in \mathcal{P}_{v}$
       Initialization: timestep $t \leftarrow 0$, turn number $T \leftarrow 0$
       Initialization: choice number $n \leftarrow 0$, initial state $s_{t+n} \leftarrow s_{0}$
       Choose an option $\omega_{T}$ via the $\epsilon$-soft long policy $\pi_{\Omega}(s_{t+n})$
       repeat
              1. Chain-of-Choice Generation:
              Initialization: choice chain $C_{T} \leftarrow \{\}$, candidate action space $\mathcal{A}_{t}$
              repeat
                     Choose a choice $a_{t+n}$ via the short intra-option policy $\pi_{\omega_{T}}(s_{t+n})$
                     if at training stage then
                            Take choice $a_{t+n}$, receive the true reward $r_{t+n}$
                            Observe the true $s_{t+n+1}$, get the candidate action space $\mathcal{A}_{t+n+1}$
                            Store $(s_{t+n}, a_{t+n}, r_{t+n}, s_{t+n+1}, \mathcal{A}_{t+n+1}, \omega_{T})$
                     else
                            Predict the user feedback and reward via $\mathcal{T}_{\omega}(s_{t+n}, a_{t+n})$
                            Update the next state $s_{t+n+1}$ based on the predicted feedback
                     end if
                     Update $C_{T} \leftarrow C_{T} + a_{t+n}$, $n \leftarrow n + 1$
              until $\beta_{\omega_{T}}(s_{t+n})$ terminates or $C_{T}$ reaches the maximum length
              2. User Interaction:
              Take the chain of choices $C_{T}$, observe $s_{t+n}$
              Choose a new option $\omega_{T+1}$ via the $\epsilon$-soft $\pi_{\Omega}(s_{t+n})$
              Update $t \leftarrow t + n$, $n \leftarrow 0$, $T \leftarrow T + 1$
              3. Policy Optimization:
              Sample a mini-batch of $(s_{t+n}, a_{t+n}, r_{t+n}, s_{t+n+1}, \mathcal{A}_{t+n+1}, \omega_{T})$
              Update $\theta_{Q}, \theta_{S}$ w.r.t. Eq. (14)
              Update $\theta_{\beta}$ w.r.t. Eq. (15)
              Update $\theta_{\mathcal{T}}$ w.r.t. Eq. (16)
       until recommended successfully or $T > T_{max}$
end for
Algorithm 1: CoCHPL in the MTAMCR scenario

4.3 Long Policy Learning Over Options

After encoding the state with the dynamic graph, we propose a long-term policy denoted as $\pi_{\Omega}$ to select the option $\omega$ at each turn. Our objective is to optimize the discounted return at turn $T$ directly. To achieve this, we express the expected Q-value of each option as follows:

$Q_{\Omega}(s_{t}, \omega_{T}) = \sum\nolimits_{a_{t} \in \mathcal{A}_{t}} \pi_{\omega_{T}}(a_{t} \mid s_{t})\, Q_{U}(s_{t}, \omega_{T}, a_{t}), \qquad (9)$

where $\pi_{\omega_{T}}$ is the intra-option policy and $Q_{U}: \mathcal{S} \times \Omega \times \mathcal{A} \rightarrow \mathbb{R}$ is the estimated Q-value of executing an action in the context of a state-option pair, which is introduced in Section 4.4.
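A minimal sketch of Eq. (9), assuming the intra-option policy is the softmax over $Q_U$ values defined later in Eq. (12):

```python
# Option value Q_Omega as the expectation of Q_U under the intra-option policy.
import torch

def option_value(q_u_values):
    """q_u_values: tensor of Q_U(s_t, omega_T, a) over the candidate action space."""
    pi = torch.softmax(q_u_values, dim=-1)   # intra-option policy pi_{omega_T}(a | s_t)
    return (pi * q_u_values).sum()           # Q_Omega(s_t, omega_T), Eq. (9)

print(option_value(torch.tensor([0.2, 1.5, -0.3])))
```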

4.4 Short Intra-Option Policy Learning

With the state-option pair, we need a short-term policy to generate the choice chain at each turn. We propose that each option $\omega$ consists of a triple $(\pi_{\omega}, \beta_{\omega}, \mathcal{T}_{\omega})$, in which $\pi_{\omega} \in \{\pi_{\omega_{ask}}, \pi_{\omega_{rec}}\}$ is a short-term intra-option policy that selects the attribute or item at each timestep, $\mathcal{T}_{\omega}: \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]$ is a feedback prediction function that predicts the user's response for inferring the next-timestep state, and $\beta_{\omega}: \mathcal{S} \rightarrow [0, 1]$ is a termination function that determines when to exit the generation process of the current option.

As shown in Algorithm 1, at each timestep $t$ in turn $T$, $\pi_{\omega}$ selects an item or attribute as the choice $a_{t}$ and then predicts the feedback (i.e., accept/reject) via $\mathcal{T}_{\omega}(s_{t}, a_{t}) = \mathds{1}(a_{t} \text{ is accepted})$ to infer the next state $s_{t+1}$. If $\beta_{\omega_{T}}(s_{t+1})$ determines to terminate the choice chain generation, the CRS presents the chain-of-choice question to the user; otherwise, the generation continues.

To optimize the intra-option policy $\pi_{\omega}$, we estimate the Q-value of selecting a choice in the context of a state-option pair $(s, \omega)$, which can be calculated by:

$Q_{U}(s_{t}, \omega_{T}, a_{t}) = r(s_{t}, a_{t}) + \gamma\, U(\omega_{T}, s_{t+1}), \qquad (10)$

where $r: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function and $U: \Omega \times \mathcal{S} \rightarrow \mathbb{R}$ denotes the value function of executing option $\omega_{T}$ upon entering the next state $s_{t+1}$:

$U(\omega_{T}, s_{t+1}) = (1 - \beta_{\omega_{T}}(s_{t+1}))\, Q_{\Omega}(s_{t+1}, \omega_{T}) + \beta_{\omega_{T}}(s_{t+1}) \max\limits_{\overline{\omega}} Q_{\Omega}(s_{t+1}, \overline{\omega}), \qquad (11)$

where the next state value is estimated by a weighted sum of the value of continuing the current option and the maximum value in the next state, with the weights determined by the termination value.
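A minimal sketch of the bootstrapped value in Eqs. (10)–(11), with illustrative inputs:

```python
# The value of the next state mixes "continue the current option" and "switch to the
# best option", weighted by the termination probability beta.
import torch

def u_value(beta_next, q_omega_next, current_option):
    """q_omega_next: tensor [Q_Omega(s', omega) for each omega]; beta_next in [0, 1]."""
    cont = q_omega_next[current_option]
    switch = q_omega_next.max()
    return (1 - beta_next) * cont + beta_next * switch                 # Eq. (11)

def q_u_target(reward, gamma, beta_next, q_omega_next, current_option):
    return reward + gamma * u_value(beta_next, q_omega_next, current_option)  # Eq. (10)

print(q_u_target(0.01, 0.999, beta_next=0.3,
                 q_omega_next=torch.tensor([0.8, 1.2]), current_option=0))
```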

4.5 Model Training

As illustrated in Algorithm 1, in MTAMCR the CRS needs to generate a complete chain of choices before interacting with the user and obtaining feedback. However, during the training stage, we can receive the real feedback for each choice $a_{t}$ and the reward $r_{t}$ immediately from users at each timestep $t$, and accurately infer the next state $s_{t+1}$, which speeds up the convergence of training. Each option $\omega$ has its own experience replay buffer, denoted by $\mathcal{D}_{\omega}$, and the experience $d_{t} = (s_{t}, a_{t}, r_{t}, s_{t+1}, \mathcal{A}_{t+1}, \omega_{T})$ is stored in the replay buffer $\mathcal{D}_{\omega_{T}}$. We sample a mini-batch of experiences from each $\mathcal{D}_{\omega_{T}}$ after every turn and optimize the parameters of the whole network.

In this work, we implement the long policy over options $\pi_{\Omega}$ and the short intra-option policies $\pi_{\omega}$ by deriving the probability distribution over the discrete action space from the Q-value function as follows:

$\pi_{\Omega}(s_{t}) = \mathop{\arg\max}_{\omega} Q_{\Omega}(s_{t}, \omega), \quad \pi_{\omega_{T}}(a_{t} \mid s_{t}) = \frac{\exp(Q_{U}(s_{t}, \omega_{T}, a_{t}))}{\sum_{a \in \mathcal{A}_{t}} \exp(Q_{U}(s_{t}, \omega_{T}, a))}, \qquad (12)$

where both $\pi_{\Omega}$ and $\pi_{\omega}$ introduce no extra trainable parameters, so only a learnable $Q_{U}$ function is needed to represent $\pi_{\Omega}$, $\pi_{\omega}$, and $Q_{\Omega}$, which stabilizes training. We devise a dueling deep Q-network [13] to estimate the expected value of the triple $(s_{t}, \omega_{T}, a_{t})$ at timestep $t$, and the Q-value of the tuple can be estimated by:

$Q_{U}(s_{t}, \omega_{T}, a_{t}; \theta_{Q}, \theta_{S}) = f_{\theta_{V}}(f_{\theta_{S}}(s_{t})) + f_{\theta_{A}}(f_{\theta_{S}}(s_{t}), \omega_{T}, a_{t}), \qquad (13)$

where $\theta_{V}$ and $\theta_{A}$ represent the parameters of the value and advantage functions, respectively, abbreviated as $\theta_{Q} = \{\theta_{V}, \theta_{A}\}$. The target value $y_{t}$ of $Q_{U}$ during training is obtained as in Eq. (10), and we can optimize the parameters of the Q-network by minimizing the MSE loss as follows:

$\mathcal{L}(\theta_{Q}, \theta_{S}) = \mathbb{E}_{d_{t} \sim \mathcal{D}}\big[(y_{t} - Q_{U}(s_{t}, \omega_{T}, a_{t}; \theta_{Q}, \theta_{S}))^{2}\big]. \qquad (14)$
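A hedged PyTorch sketch of the dueling Q-network (Eq. (13)) and the TD loss (Eq. (14)) is given below; the state encoder $f_{\theta_S}$ is abstracted as a fixed-size vector, and the action/option embedding scheme and dimensions are assumptions for illustration.

```python
# Dueling Q-head: Q_U(s, omega, a) = V(f_S(s)) + A(f_S(s), omega, a), trained with MSE.
import torch
import torch.nn as nn

class DuelingQ(nn.Module):
    def __init__(self, state_dim=64, action_dim=64, n_options=2):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.adv = nn.Sequential(
            nn.Linear(state_dim + action_dim + n_options, 64), nn.ReLU(), nn.Linear(64, 1))
        self.n_options = n_options

    def forward(self, state, option, action_emb):
        opt = torch.nn.functional.one_hot(option, self.n_options).float()
        x = torch.cat([state, opt, action_emb], dim=-1)
        return self.value(state) + self.adv(x)           # Eq. (13)

q_net = DuelingQ()
state = torch.randn(8, 64)                    # f_{theta_S}(s_t) for a mini-batch
option = torch.zeros(8, dtype=torch.long)     # omega_ask
action_emb = torch.randn(8, 64)               # embedding of the chosen a_t
q_pred = q_net(state, option, action_emb).squeeze(-1)
target = torch.randn(8)                       # placeholder for the target y_t from Eq. (10)
loss = nn.functional.mse_loss(q_pred, target) # Eq. (14)
loss.backward()
print(loss.item())
```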

Based on the Termination Gradient Theorem proposed by [1], the parameters $\theta_{\beta}$ of the termination network $\beta_{\omega}$ can be optimized by minimizing the following loss:

$\mathcal{L}(\theta_{\beta}) = \mathbb{E}_{d_{t} \sim \mathcal{D}}\big[\beta_{\omega}(s_{t+1}; \theta_{\beta})\,(Q_{\Omega}(s_{t+1}, \omega_{T}) - V(s_{t+1}))\big], \qquad (15)$

where $V(s_{t+1}) = f_{\theta_{V}}(f_{\theta_{S}}(s_{t+1}))$ is the estimated value of the next state. Intuitively, when the value of continuing the current option exceeds the estimated state value, the probability of terminating the option is reduced.
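A sketch of the termination objective in Eq. (15), assuming the termination network $\beta_\omega$ is a small MLP with a sigmoid output (an illustrative choice):

```python
# The termination probability is pushed down when continuing the current option is
# worth more than the baseline value V(s'); the advantage term is treated as constant.
import torch
import torch.nn as nn

beta_head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

def termination_loss(next_state, q_omega_next, v_next):
    """q_omega_next = Q_Omega(s_{t+1}, omega_T), v_next = V(s_{t+1})."""
    beta = beta_head(next_state).squeeze(-1)
    advantage = (q_omega_next - v_next).detach()
    return (beta * advantage).mean()                      # Eq. (15)

loss = termination_loss(torch.randn(8, 64), torch.randn(8), torch.randn(8))
loss.backward()
print(loss.item())
```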

We implement the feedback prediction function with a simple multilayer perceptron (MLP). With the ground-truth state and reward, we optimize it by minimizing the squared difference between the predicted feedback for action $a_{t}$ and the ground-truth user feedback:

$\mathcal{L}(\theta_{\mathcal{T}}) = \mathbb{E}_{d_{t} \sim \mathcal{D}}\big[(\mathds{1}(a_{t} \text{ is accepted}) - \mathcal{T}_{\omega_{T}}(s_{t}, a_{t}; \theta_{\mathcal{T}}))^{2}\big], \qquad (16)$

where $\mathds{1}(\cdot)$ takes the value 1 when the user accepts $a_{t}$, and 0 otherwise.
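A sketch of the feedback prediction objective in Eq. (16), assuming $\mathcal{T}_\omega$ is an MLP over concatenated state and action embeddings (architecture and dimensions are illustrative):

```python
# Regress the 0/1 accept label from the (state, action) pair with a squared-error loss.
import torch
import torch.nn as nn

feedback_mlp = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

def feedback_loss(state, action_emb, accepted):
    pred = feedback_mlp(torch.cat([state, action_emb], dim=-1)).squeeze(-1)
    return ((accepted.float() - pred) ** 2).mean()   # squared error vs. 1(a_t accepted)

loss = feedback_loss(torch.randn(8, 64), torch.randn(8, 64),
                     accepted=torch.randint(0, 2, (8,)))
loss.backward()
print(loss.item())
```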

5 Experiments

To demonstrate the superiority of CoCHPL in the MTAMCR scenario, we conduct experiments aimed at validating the following three research questions (RQs):

  • RQ1: Compared with state-of-the-art methods, how much does our model CoCHPL improve overall performance?

  • RQ2: How do different components contribute to the performance?

  • RQ3: Does CoCHPL enhance attribute diversity and dependency within each conversational turn?

5.1 Experimental Settings

5.1.1 Datasets.

To fairly evaluate CoCHPL, we utilized four widely recognized benchmark datasets [18] that are commonly used for conversational recommendation tasks involving multiple-choice questions: LastFM, Yelp, Amazon-Book, and MovieLens. The statistics of the datasets are shown in Table 1.

  • LastFM is a popular online platform that focuses on music recommendations and social networking. The original attributes in LastFM were treated as attribute instances, and a clustering method was utilized to identify 34 attribute types.

  • Yelp is used for business recommendations. For Yelp, a two-layer taxonomy was created by [6], comprising 29 attribute types and 590 attribute instances. The attribute types in the taxonomy represent different categories or aspects of businesses, such as price range, ambiance, service quality, and more.

  • Amazon-Book is widely used for book recommendations. Users and items with a minimum of 10 interaction records are retained. Entities and relations in the knowledge graph are considered as attribute instances and attribute types, respectively.

  • MovieLens is a rating dataset that contains user ratings for movies. Interactions with ratings above three are retained, and the knowledge graph entities and relations are considered as attribute instances and attribute types.

Table 1: Statistics of four benchmark datasets.
LastFM Yelp Amazon-Book MovieLens
#Users 1,801 27,675 30,291 20,892
#Items 7,432 70,311 17,739 16,482
#Attributes 8,438 590 988 1,498
#Types 34 29 40 24
#Entities 17,671 98,576 49,018 38,872
#Relations 4 3 2 2
#Interactions 76,693 1,368,606 478,099 454,011
#Triples 228,217 2,533,827 565,068 380,016

5.1.2 Baselines.

To benchmark our model’s performance, we compared it against eight established baselines:

  • Abs Greedy [3] only recommends items in each turn until the user accepts the recommendation or the total turn number exceeds the maximum limit.

  • Max Entropy [6] asks or recommends the top-scored instances with the maximum entropy with a certain probability.

  • CRM [9] introduces a policy that selects the action and terminates the conversation when the recommendation is rejected.

  • EAR [6] proposes a three-stage pipeline called Estimation-Action-Reflection for selecting actions and optimizing the model.

  • SCPR [7] employs graph-based path reasoning to retrieve actions and graph embedding to represent user preferences dynamically.

  • UNICORN [4] offers a comprehensive approach for selecting optimal actions, integrating both the conversation and recommendation components.

  • MCMIPL [18] focuses on single-type attribute selection in multiple-choice question settings and uses a multi-interest extractor for preference capture.

  • DAHCR [20] employs a director to select the action type and an actor to choose the asked attribute or recommended item.

To ensure a fair comparison in the MTAMCR scenario, we made two kinds of modifications to UNICORN, MCMIPL, and DAHCR:

  • Single-Type attributes (-S): Also known as attribute type-based multiple choice [18]. In each round, the CRS selects one attribute type and then chooses multiple attribute instances within that type.

  • Multi-Type attributes (-M): In each round, the CRS selects the top-K attribute instances with the highest Q-values, which may come from different or the same attribute types.

Table 2: Overall performance evaluated by SR@15, AT, and hDCG across four datasets.
Model LastFM Yelp Amazon-Book MovieLens
SR@15 AT hDCG SR@15 AT hDCG SR@15 AT hDCG SR@15 AT hDCG
Abs Greedy[3] 0.635 8.66 0.267 0.189 13.43 0.089 0.193 13.90 0.087 0.724 5.24 0.465
Max Entropy[6] 0.669 9.33 0.269 0.398 13.42 0.121 0.382 12.43 0.149 0.655 6.87 0.412
CRM[9] 0.580 10.79 0.224 0.177 13.69 0.070 0.354 12.81 0.127 0.626 7.43 0.433
EAR[6] 0.595 10.51 0.230 0.182 13.63 0.079 0.411 11.62 0.153 0.749 6.28 0.459
SCPR[7] 0.709 8.43 0.317 0.489 12.62 0.159 0.432 11.03 0.148 0.823 4.94 0.514
UNICORN[4] 0.788 7.58 0.349 0.520 11.31 0.203 0.498 10.42 0.179 0.863 4.62 0.523
DAHCR[20] 0.925 6.31 0.431 0.626 11.02 0.192 0.522 10.14 0.188 0.875 4.34 0.535
UNICORN-S 0.884 6.07 0.403 0.593 10.57 0.212 0.538 10.01 0.253 0.883 4.05 0.585
MCMIPL-S 0.942 5.44 0.431 0.631 10.25 0.225 0.556 9.57 0.264 0.879 3.92 0.581
DAHCR-S 0.933 5.67 0.429 0.668 10.15 0.232 0.572 9.45 0.272 0.895 3.89 0.603
UNICORN-M 0.892 5.96 0.426 0.634 10.35 0.233 0.584 9.69 0.261 0.887 4.10 0.587
MCMIPL-M 0.973 4.59 0.466 0.675 10.11 0.256 0.667 8.98 0.293 0.891 3.68 0.609
DAHCR-M 0.965 4.61 0.451 0.681 10.23 0.241 0.689 8.45 0.301 0.874 3.98 0.593
CoCHPL 0.991 3.75 0.493 0.846 9.36 0.267 0.775 7.61 0.330 0.908 2.85 0.628

5.1.3 Evaluation Metrics.

We evaluate performance using three key metrics: (1) Success rate (SR@t) measures the cumulative ratio of successful recommendations within a maximum of $t$ turns. (2) Average turn (AT) measures the average number of turns at which conversations terminate over all sessions; lower values indicate a more efficient recommendation process. (3) hDCG@(T, K) [4] assesses the ranking performance of recommended items by considering the target item's rank in the recommendation list, extending the normalized discounted cumulative gain metric.
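A minimal sketch of how SR@t and AT can be computed from logged sessions is given below; the session encoding (the turn at which a recommendation succeeded, or None) is an assumption for illustration, and hDCG additionally requires the target item's rank as defined in [4].

```python
# SR@t and AT from logged sessions; failed sessions count as the maximum turn for AT.

def success_rate_at(sessions, t):
    return sum(s is not None and s <= t for s in sessions) / len(sessions)

def average_turn(sessions, t_max=15):
    return sum(s if s is not None else t_max for s in sessions) / len(sessions)

sessions = [3, 7, None, 12, 5]      # illustrative success turns for five sessions
print(success_rate_at(sessions, 15), average_turn(sessions))  # 0.8 8.4
```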

5.1.4 Training Details.

We set the maximum number of turns $T_{max}$ to 15, and the agent can recommend up to $k_{v} = 10$ items or ask $k_{p} = 3$ attribute-related questions at every turn. We pre-train the graph nodes with TransE [2] and train on each dataset for 10,000 episodes online. We use the Adam optimizer with a learning rate of $1e{-4}$, and the discount factor $\gamma$ is set to 0.999. For our method CoCHPL, the rewards are set as follows: $r_{ask\_acc} = 1e{-2}$, $r_{ask\_rej} = -1e{-4}$, $r_{rec\_rej} = -1e{-4}$, and $r_{rec\_suc} = 1$. We found that the effect of retrieving accepted attributes was minimal on MovieLens, so $r_{ask\_acc}$ was adjusted to $1e{-5}$ for this dataset. The rewards for all state-of-the-art baselines remain unchanged from [18].

5.2 Experimental Results

5.2.1 Overall Performance (RQ1).

As detailed in Table 2, CoCHPL surpasses all eight baselines on the SR@15, AT, and hDCG metrics. Baselines with binary yes/no question settings (Figure 1(a)) exhibit lower efficiency and require more turns for successful recommendations. In the single-type-attribute setting, UNICORN-S, MCMIPL-S, and DAHCR-S show modest improvements, while in the multi-type-attribute setting, UNICORN-M, MCMIPL-M, and DAHCR-M display higher success rates and fewer average turns, which indicates the effectiveness of the MTAMCR setting for preference capture. CoCHPL's chain-of-choice generation leads to marked improvements in success rate and efficiency across the benchmark datasets, notably reducing the number of turns needed for successful recommendations. On the challenging Yelp and Amazon-Book datasets, CoCHPL achieves relative improvements of 18.3%, 8.6%, and 6.9% in SR@15, AT, and hDCG over the best-performing baselines, respectively. For a clearer comparison, we benchmark against MCMIPL-M's SR and report the differences in SR among methods as a measure of relative success rate (Figure 4); the comparison highlights the superior performance of CoCHPL, especially on Yelp and Amazon-Book, where extensive attribute preference extraction is essential.

Figure 4: Comparisons at different conversation turns on four datasets.
Table 3: Ablation Study evaluated across Yelp and Amazon-Book datasets.
Yelp Amazon-Book
SR@15 AT hDCG SR@15 AT hDCG
CoCHPL 0.846 9.36 0.267 0.775 7.61 0.330
w/o long policy over options 0.693 10.21 0.225 0.649 9.12 0.278
w/o short intra-option policy (ask) 0.655 10.29 0.221 0.612 9.54 0.270
w/o short intra-option policy (rec) 0.756 10.05 0.245 0.685 8.99 0.285
w/o termination function 0.798 9.67 0.253 0.744 8.34 0.302
w/o feedback prediction function 0.782 9.95 0.248 0.732 8.12 0.311

5.2.2 Ablation Study (RQ2).

We performed an ablation study on the Yelp and Amazon-Book datasets to assess the impact of different components on system performance: (1) w/o long policy over options: the CRS randomly selects options (action types) in each turn. (2) w/o short intra-option policy (ask): the CRS chooses the top-K attribute instances with the highest Q-values at every turn. (3) w/o short intra-option policy (rec): the CRS selects the top-K items with the highest Q-values at each turn. (4) w/o termination function: the CRS asks or recommends the maximum number of instances in each turn without appropriate termination. (5) w/o feedback prediction function: the CRS randomly predicts whether the user accepts or rejects the previous choice. From Table 3, we observe that the short intra-option policy (ask) and the long policy over options play crucial roles in the performance improvement, which suggests that learning how to select options and effectively generate multiple-choice questions about attributes is essential in the MTAMCR scenario. Additionally, the significance of the termination and feedback prediction functions underscores the importance of learning appropriate termination conditions and the necessity of simulating user behavior during the chain-of-choice generation process.

Figure 5: Performance comparisons with different numbers of asked attribute instances on the Yelp and Amazon-Book datasets.

5.2.3 Effect of the Number of Attribute Instances and Case Study (RQ3).

Using the Yelp and Amazon-Book datasets, we assessed the reasoning capability of our model, CoCHPL, against three baselines: UNICORN-M, MCMIPL-M, and DAHCR-M. We varied the number of attribute instances asked per turn from 1 to 5, as shown in Figure 5. While the success rate of the baseline models improved with more attribute instances, the marginal gains diminished and approached a plateau. This trend suggests that these models tend to select numerous but less effective attribute instances, leading to suboptimal exploitation of user preferences. In contrast, CoCHPL demonstrated a nearly linear performance improvement on both datasets. A case study depicted in Figure 6 further reinforces CoCHPL's superiority. Unlike DAHCR, which selects the top-2 attribute instances based on Q-values and often produces homogeneous choices due to similar embeddings, CoCHPL dynamically predicts user feedback and recalculates Q-values after each selection, prioritizing the top-1 attribute for each subsequent choice. Both evaluations demonstrate that CoCHPL effectively diversifies and deepens the dependency among attributes, significantly enhancing the overall recommendation quality.

Figure 6: Conversations generated by (a) DAHCR-M and (b) CoCHPL, and the process of choosing choices in each turn. By utilizing prediction and inference stages, CoCHPL generates a more coherent and logical sequence of choices.

6 Conclusion

In this paper, we propose a more realistic CRS setting called Multi-Type-Attribute Multi-round Conversational Recommendation (MTAMCR). To optimize questioning efficiency, we develop a novel framework, Chain-of-Choice Hierarchical Policy Learning (CoCHPL), for the MTAMCR scenario. Specifically, by formulating MTAMCR as a hierarchical RL task, we employ a long-term policy over action options to decide when to ask or recommend and two short-term intra-option policies to sequentially generate the optimal multi-type attribute and item chains with the assistance of learnable termination and feedback prediction functions. Experimental results demonstrate that CoCHPL significantly outperforms state-of-the-art CRS methods in terms of both the reasoning ability over chains of choices and the recommendation success rate.

6.0.1 Acknowledgements

This research was supported in part by the National Natural Science Foundation of China under Grant No. 62102110, 92370204, the Guangzhou Basic and Applied Basic Research Program under Grant No. 2024A04J3279, and the Education Bureau of Guangzhou Municipality.

References

  • [1] Bacon, P.L., Harb, J., Precup, D.: The option-critic architecture. In: AAAI. pp. 1726–1734 (2017)
  • [2] Bordes, A., Usunier, N., Garcia-Durán, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: NeurIPS. pp. 2787–2795 (2013)
  • [3] Christakopoulou, K., Radlinski, F., Hofmann, K.: Towards conversational recommender systems. In: SIGKDD. pp. 815–824 (2016)
  • [4] Deng, Y., Li, Y., Sun, F., Ding, B., Lam, W.: Unified conversational recommendation policy learning via graph-based reinforcement learning. In: SIGIR. pp. 1431–1441 (2021)
  • [5] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: ICLR (2017)
  • [6] Lei, W., He, X., Miao, Y., Wu, Q., Hong, R., Kan, M.Y., Chua, T.S.: Estimation-action-reflection: Towards deep interaction between conversational and recommender systems. In: WSDM. pp. 304–312 (2020)
  • [7] Lei, W., Zhang, G., He, X., Miao, Y., Wang, X., Chen, L., Chua, T.S.: Interactive path reasoning on graph for conversational recommendation. In: SIGKDD. pp. 2073–2083 (2020)
  • [8] Lin, A., Zhu, Z., Wang, J., Caverlee, J.: Enhancing user personalization in conversational recommenders. In: WWW. pp. 770–778 (2023)
  • [9] Sun, Y., Zhang, Y.: Conversational recommender system. In: SIGIR. pp. 235–244 (2018)
  • [10] Sutton, R.S., McAllester, D., Singh, S., Mansour, Y.: Policy gradient methods for reinforcement learning with function approximation. In: NeurIPS (1999)
  • [11] Sutton, R.S., Precup, D., Singh, S.: Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence 112(1-2), 181–211 (1999)
  • [12] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: NeurIPS. pp. 6000–6010 (2017)
  • [13] Wang, Z., Schaul, T., Hessel, M., Hasselt, H., Lanctot, M., Freitas, N.: Dueling network architectures for deep reinforcement learning. In: ICML (2016)
  • [14] Xie, Z., Yu, T., Zhao, C., Li, S.: Comparison-based conversational recommender system with relative bandit feedback. In: SIGIR. pp. 1400–1409 (2021)
  • [15] Xu, K., Yang, J., Xu, J., Gao, S., Guo, J., Wen, J.R.: Adapting user preference to online feedback in multi-round conversational recommendation. In: WSDM. pp. 364–372 (2021)
  • [16] Zhang, W., Liu, H., Han, J., Ge, Y., Xiong, H.: Multi-agent graph convolutional reinforcement learning for dynamic electric vehicle charging pricing. In: SIGKDD. pp. 2471–2481 (2022)
  • [17] Zhang, W., Liu, H., Liu, Y., Zhou, J., Xiong, H.: Semi-supervised hierarchical recurrent graph neural network for city-wide parking availability prediction. In: AAAI. pp. 1186–1193 (2020)
  • [18] Zhang, Y., Wu, L., Shen, Q., Pang, Y., Wei, Z., Xu, F., Long, B., Pei, J.: Multiple choice questions based multi-interest policy learning for conversational recommendation. In: WWW. pp. 2153–2162 (2022)
  • [19] Zhang, Y., Chen, X., Ai, Q., Yang, L., Croft, W.B.: Towards conversational search and recommendation: System ask, user respond. In: CIKM. pp. 177–186 (2018)
  • [20] Zhao, S., Wei, W., Liu, Y., Wang, Z., Li, W., Mao, X.L., Zhu, S., Yang, M., Wen, Z.: Towards hierarchical policy learning for conversational recommendation with hypergraph-based reinforcement learning. In: IJCAI. pp. 2459–2467 (2023)