
Adaptive Querying for Reward Learning from Human Feedback

Yashwanthi Anand1 and Sandhya Saisubramanian1 1 All authors are with Oregon State University, Corvallis OR 97331, USA {anandy, sandhya.sai}@oregonstate.edu
Abstract

Learning from human feedback is a popular approach to train robots to adapt to user preferences and improve safety. Existing approaches typically consider a single querying (interaction) format when seeking human feedback and do not leverage multiple modes of user interaction with a robot. We examine how to learn a penalty function associated with unsafe behaviors, such as side effects, using multiple forms of human feedback, by optimizing the query state and feedback format. Our framework for adaptive feedback selection enables querying for feedback in critical states in the most informative format, while accounting for the cost and probability of receiving feedback in a certain format. We employ an iterative, two-phase approach which first selects critical states for querying, and then uses information gain to select a feedback format for querying across the sampled critical states. Our evaluation in simulation demonstrates the sample efficiency of our approach.

I INTRODUCTION

A key factor affecting an autonomous agent’s behavior is its reward function. Due to the complexity of real-world environments and the practical challenges in reward design, agents often operate with incomplete reward functions corresponding to underspecified objectives, which can lead to unintended and undesirable behaviors such as negative side effects (NSEs) [1, 2, 3]. For example, an indoor robot that optimizes distance to goal may break a vase as a side effect if its reward function does not model the undesirability of breaking a vase while navigating to the goal [4] (Figure 1).

Several prior works have examined learning from various forms of human feedback to improve robot performance, including avoiding side effects [5, 6, 7, 8, 9, 10, 11]. In many real-world settings, the human can provide feedback in many forms, ranging from binary signals indicating action approval to correcting robot actions, each varying in the granularity of information revealed to the robot and the human effort required to provide it. To efficiently balance the trade-off between seeking feedback in a format that accelerates robot learning and reducing human effort involved, it is beneficial to seek detailed feedback sparingly in certain states and complement it with feedback types that require less human effort in other states. Such an approach could also reduce the sampling biases associated with learning from any one format, thereby improving learning performance [12]. In fact, a recent study indicates that users are generally willing to engage with the robot in more than one feedback format [13]. Existing approaches utilize a single feedback format throughout the learning process and do not support gathering feedback in different formats in different regions of the state space [14, 15].

How can a robot identify when to query and in what format, while accounting for the cost and availability of different forms of feedback? We present a framework for adaptive feedback selection (AFS) that enables a robot to seek feedback in multiple formats in its learning phase, such that its information gain is maximized. In the interest of clarity, AFS is introduced in the context of NSEs but the framework is general and can be applied broadly.

Refer to caption
Figure 1: An illustration of adaptive feedback selection. The agent learns to navigate to the goal without breaking vases by querying the human in different formats across the state space. Red circles show critical states along with queries, and orange speech bubbles show user feedback.

The information gain of a feedback format is measured as the Kullback–Leibler (KL) divergence between the true NSE distribution, as revealed to the robot through the human feedback collected so far, and the robot's learned model of NSEs. In each querying cycle, the robot selects the feedback format that maximizes its information gain, given its current knowledge of NSEs.

When collecting feedback in every state is infeasible, the robot must prioritize querying in critical states—states where human feedback is crucial for learning an association of state features and NSEs, i.e., a predictive model of NSE severity. Querying in critical states maximizes information gain about NSEs, compared to other states. Prior works, however, query for feedback in states randomly sampled or along the shortest path to the goal, which may not result in a faithful NSE model [2, 11].

Refer to caption
Figure 2: Solution approach overview. The critical states \Omega for querying are selected by clustering the states. A feedback format f^{*} that maximizes information gain is selected for querying the user across \Omega. The NSE model is iteratively refined based on feedback. An updated policy is calculated using a penalty function \hat{R}_{N}, derived from the learned NSE model.

We use an iterative approach to gather NSE information under a limited query budget (Figure 2). The key steps are: (1) states are partitioned into clusters, with each cluster's weight proportional to the number of NSEs discovered in it; (2) a set of critical states is formed by sampling from each cluster based on its weight; (3) a feedback format that maximizes the information gain in the critical states is identified, while accounting for the cost and uncertainty of receiving feedback, using the human feedback preference model; and (4) cluster weights and information gain are updated, and a new set of critical states is sampled to learn about NSEs, until the querying budget expires. The learned NSE information is mapped to a penalty function and added to the robot's model to compute an NSE-minimizing policy for completing its task. Empirical evaluation on four domains in simulation demonstrates the effectiveness of our approach in learning to mitigate NSEs from explicit and implicit feedback types.

II BACKGROUND

Markov Decision Processes (MDPs) are a popular framework for modeling sequential decision-making problems. An MDP is defined by the tuple M = \langle S, A, T, R, \gamma\rangle, where S is the set of states, A is the set of actions, T(s,a,s^{\prime}) is the probability of reaching state s^{\prime}\in S after taking action a\in A in state s\in S, and R(s,a) is the reward for taking action a in state s. An optimal deterministic policy \pi^{*}: S\rightarrow A is one that maximizes the expected reward. When the objective or reward function is incomplete, even an optimal policy can produce unsafe behaviors such as side effects.

Negative Side Effects (NSEs) are immediate, undesired, unmodeled effects of an agent’s actions on the environment [16, 17, 3]. We focus on NSEs arising due to incomplete reward function [2], which we mitigate by learning a penalty function using human feedback.

Learning from Human Feedback is a widely used technique to train agents when reward functions are unavailable or incomplete [18, 9, 19], including to improve safety [20, 21, 22, 23, 11, 2]. Feedback can take various forms such as demonstrations [24, 25], corrections [26, 27, 28], critiques [5, 2], ranking trajectories [29], or may be implicit in the form of facial expressions and gestures [6, 30]. Existing approaches focus on learning from a single feedback type, limiting learning efficiency. Recent studies consider combinations such as demonstrations and preferences [31, 32], but assume a fixed order and do not scale to multiple formats. Another recent work examines feedback format selection by estimating the human’s ability to provide feedback in a certain format [33]. Unlike these approaches, we dynamically select the most informative feedback without any pre-processing.

The information gain associated with feedback quantifies how much the feedback improves the agent's understanding of the underlying reward function, and is often measured using the Kullback-Leibler (KL) divergence [33, 34], D_{KL}(P\|Q)=\sum_{x}P(x)\log\frac{P(x)}{Q(x)}, where P is the prior distribution and Q is the posterior distribution after observing evidence.
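For concreteness, the sketch below computes the KL divergence between two discrete distributions over NSE severity labels. It is an illustrative example only; the epsilon smoothing and the example probabilities are our own assumptions, not values from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) for two discrete distributions given as probability vectors.
    A small eps avoids log(0) and division by zero."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical belief over NSE severity labels (no NSE, mild, severe) for one state-action pair.
p_feedback = [0.0, 0.2, 0.8]   # distribution implied by the feedback gathered so far
q_learned  = [0.6, 0.2, 0.2]   # agent's current learned belief
print(kl_divergence(p_feedback, q_learned))
```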

III PROBLEM FORMULATION

Setting: Consider a robot operating in an environment modeled as a Markov Decision Process (MDP), using its acquired model M = \langle S, A, T, R_{T}\rangle. The robot optimizes the completion of its assigned task, which is its primary objective described by the reward R_{T}. A primary policy, \pi^{M}, is an optimal policy for the robot's primary objective.

Assumption 1. Similar to [2], we assume that the agent's model M has all the necessary information for the robot to successfully complete its assigned task but lacks other superfluous details that are unrelated to the task.

Since the model is incomplete in ways unrelated to the primary objective, executing the primary policy produces negative side effects (NSEs) that are difficult to identify at design time. Following [2], we define NSEs as immediate, undesired, unmodeled effects of a robot's actions on the environment. We focus on settings where the robot has no prior knowledge about the NSEs of its actions or the underlying true NSE penalty function R_{N}. It learns to avoid NSEs by learning, from human feedback, a penalty function \hat{R}_{N} that is consistent with R_{N}.

We target settings where the human can provide feedback in multiple ways and the robot can seek feedback in a specific format such as approval or corrections. This represents a significant shift from traditional active learning methods, which typically gather feedback only in a single format [23, 2, 10]. Using the learned \hat{R}_{N}, the robot computes an NSE-minimizing policy to complete its task by optimizing R(s,a)=\theta_{1}R_{T}(s,a)+\theta_{2}\hat{R}_{N}(s,a), where \theta_{1} and \theta_{2} are fixed, tunable weights denoting priority over objectives.
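As a minimal illustration of this objective (our own sketch, with the task reward and learned penalty passed in as hypothetical callables), the combined reward can be computed as:

```python
def combined_reward(s, a, R_T, R_N_hat, theta1=1.0, theta2=1.0):
    """R(s,a) = theta1 * R_T(s,a) + theta2 * R_N_hat(s,a), where R_T is the task reward
    and R_N_hat the learned NSE penalty (signs and weights depend on whether rewards
    or costs are being optimized)."""
    return theta1 * R_T(s, a) + theta2 * R_N_hat(s, a)
```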

Refer to caption
Figure 3: Visualization of the reward learned using different feedback types. (Row 1) Black arrows indicate queries, and feedback is in speech bubbles. (Row 2) Cell colors denote high, mild, and zero penalty. The outer box is the true reward, and the inner box shows the learned reward. Mismatches between the outer and inner box colors indicate an incorrectly learned model.

Human's Feedback Preference Model: The feedback format selection must account for the cost and human preferences in providing feedback in a certain format. The user's feedback preference model is denoted by D=\langle\mathcal{F},\psi,C\rangle, where:

  • \mathcal{F} is a predefined set of feedback formats the human can provide, such as demonstrations and corrections;

  • \psi:\mathcal{F}\rightarrow[0,1] is the probability of receiving feedback in a format f, denoted \psi(f); and

  • C:\mathcal{F}\rightarrow\mathbb{R} is a cost function that assigns a cost to each feedback format f, representing the human's time or cognitive effort required to provide that feedback.

This work assumes the robot has access to the user's feedback preference model D, either handcrafted by an expert or learned from user interactions prior to robot querying. Abstracting user feedback preferences into probabilities and costs enables generalizing the preferences across similar tasks. We take the pragmatic stance that \psi is independent of time and state, denoting the user's preference about a format, such as not preferring formats that require constant supervision of robot performance. While this can be relaxed and the approach can be extended to account for state-dependent preferences, getting an accurate state-dependent \psi could be challenging in practice.
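A minimal sketch of such a preference model is shown below; the specific formats, probabilities, and costs are hypothetical placeholders, not values from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class FeedbackPreferenceModel:
    """Human feedback preference model D = <F, psi, C>."""
    formats: Tuple[str, ...] = ("approval", "ann_approval", "corrections",
                                "ann_corrections", "rank", "dam", "gaze")
    # psi(f): probability the user provides feedback when queried in format f (illustrative values).
    psi: Dict[str, float] = field(default_factory=lambda: {
        "approval": 0.9, "ann_approval": 0.7, "corrections": 0.5,
        "ann_corrections": 0.4, "rank": 0.8, "dam": 0.6, "gaze": 0.95})
    # C(f): cost of providing feedback in format f, e.g., time or cognitive effort (illustrative values).
    cost: Dict[str, float] = field(default_factory=lambda: {
        "approval": 1.0, "ann_approval": 2.0, "corrections": 4.0,
        "ann_corrections": 5.0, "rank": 1.5, "dam": 3.0, "gaze": 1.0})
```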

Assumption 2. Human feedback is immediate and accurate, when available.

Below, we describe how a robot can learn a penalty function associated with NSEs (\hat{R}_{N}), given data from different types of feedback. In Section IV, we describe when and how to seek feedback, given the feedback preference model D.

III-A Learning \hat{R}_{N} from multiple forms of feedback

Since the agent has no prior knowledge about NSEs, it assumes none of its actions produce NSEs. We examine learning an NSE penalty function \hat{R}_{N} using the following popular feedback formats and their annotated (richer) versions. In our setting, an action in a state may cause mild, severe, or no NSEs. In practice, any number of NSE categories can be considered, provided the feedback formats align with them.

Approval (App):  The robot randomly selects N state-action pairs from all possible actions in critical states and queries the human for approval or disapproval. Approved actions are labeled as acceptable, while disapproved actions are labeled as unacceptable.

Annotated Approval (Ann. App):  An extension of Approval, where the human specifies the NSE severity (or category) for each disapproved action in the critical states.

Corrections (Corr):  The robot performs a trajectory of its primary policy in the critical states, under human supervision. If the robot’s action is unacceptable, then the human intervenes with an acceptable action in these states. If all actions in a state lead to NSE, the human specifies an action with the least NSE. When interrupted, the robot assumes all actions except the correction are unacceptable in that state.

Annotated Corrections (Ann. Corr):  An extension of Corrections, where the human specifies the severity of NSEs caused by the robot’s unacceptable action in critical states.

Rank:  The robot randomly selects N ranking queries of the form \langle state, action 1, action 2\rangle, by sampling two actions for each critical state. The human selects the safer of the two actions. If both are safe or both are unsafe, one of them is selected at random. The selected action is marked as acceptable and the other is treated as unacceptable.

Demo-Action Mismatch (DAM):  The human demonstrates a safe action in each critical state, which the robot compares with its policy. All mismatched robot actions are labeled as unacceptable, and matched actions are labeled as acceptable.

Gaze:  In this implicit feedback format, the robot requests to collect gaze data of the user and compares its action outcomes with the gaze positions of the user [10]. Actions with outcomes aligning with the average gaze direction are labeled as acceptable, and unacceptable otherwise.

Each of the above formats provides a different level of detail, thereby resulting in different learned reward models. Figure 3 shows the interaction format and learned reward values, using different feedback types in isolation, on the vase domain. Breaking a vase on a carpet is a mild NSE; breaking a vase on a hard surface is a severe NSE.

NSE Model Learning: We use l_{m}, l_{h}, and l_{a} to denote the labels corresponding to mild, severe, and no NSEs, respectively. An acceptable action in a state is mapped to label l_{a}, i.e., (s,a)\rightarrow l_{a}, while an unacceptable action is mapped to label l_{h}. If the NSE severity of unacceptable actions is known, then actions with mild NSEs are mapped to l_{m} and those with severe NSEs to l_{h}. Mapping feedback to these labels provides a consistent representation of NSE severity for learning under various feedback types. The NSE severity labels, derived from the gathered feedback, are generalized to unseen states by training a random forest (RF) classifier to predict the NSE severity of an action in a state. Any classifier can be used in practice. Hyperparameters are tuned by a randomized search over the RF parameter space with three-fold cross validation, selecting the parameters with the lowest mean squared error; the trained model is then used to determine NSE severity. The label for each state-action pair is then mapped to its corresponding penalty value, yielding \hat{R}_{N}(s,a). In our experiments, the penalties for l_{a}, l_{m}, and l_{h} are 0, +5, and +10, respectively.
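A minimal sketch of this learning step is shown below, assuming scikit-learn; the hyperparameter search space, scoring, and label encoding are illustrative simplifications rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Label encoding: 0 = l_a (no NSE), 1 = l_m (mild), 2 = l_h (severe); penalties as in the paper.
PENALTY = {0: 0.0, 1: 5.0, 2: 10.0}

def fit_nse_model(X, y):
    """Fit a random forest that predicts NSE severity labels from state-action
    features gathered via feedback, using a randomized hyperparameter search
    with three-fold cross validation (the search space below is illustrative)."""
    param_space = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
    search = RandomizedSearchCV(RandomForestClassifier(), param_space, n_iter=5, cv=3)
    search.fit(np.asarray(X), np.asarray(y))
    return search.best_estimator_

def learned_penalty(model, features):
    """Map the predicted severity label of a state-action pair to the penalty \\hat{R}_N(s,a)."""
    label = int(model.predict(np.atleast_2d(features))[0])
    return PENALTY[label]
```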

IV ADAPTIVE FEEDBACK SELECTION

Given an agent's decision-making model M and the human's feedback preference model D, adaptive feedback selection (AFS) enables the agent to query for feedback in critical states in a format that maximizes its information gain.

Let p^{*} be the true underlying NSE distribution, unknown to the agent but known to the human. p^{*} deterministically maps state-action pairs to an NSE severity level (i.e., no NSE, mild NSE, or severe NSE). Human feedback, when available, is sampled from this distribution. Let p\sim p^{*} denote the distribution of state-action pairs causing NSEs of varying severity, aggregated from all feedback received thus far. In other words, p represents the accumulated NSE information known to the agent, based on human feedback. Let q denote the agent's learned NSE distribution, based on all feedback received up to that point, i.e., from p.

Below, we describe an approach to select critical states, followed by an approach for feedback format selection, based on the KL divergence between p and q.

IV-A Critical States Selection

Intuitively, when the budget for querying a human is limited, it is useful to query in states with a high learning gap, i.e., the divergence between the agent's knowledge of the NSE distribution and the underlying NSE distribution, given the feedback data collected so far. States with a high learning gap are called critical states (\Omega), and querying in these states can reduce the learning gap. The learning gap at iteration t is measured as the KL divergence between the information gathered so far (p^{t}) and the agent's learned NSE distribution (q^{t-1}): D_{KL}(p^{t}\|q^{t-1}).

We compare p^{t} with q^{t-1} because we want to identify states where the agent's knowledge of NSEs was incorrect, thereby guiding the selection of the next batch of critical states. While D_{KL}(p^{t}\|q^{t}) may seem like a better choice to guide critical state selection, this measure only shows how well the agent learned using the feedback at t but does not reveal states where the agent was incorrect about NSEs. Algorithm 1 outlines our approach for selecting critical states at each learning iteration, with the following three key steps.

1. Clustering states: Since NSEs are typically correlated with specific state features and do not occur at random, we cluster the states S into \mathcal{K} clusters so as to group states with similar NSE severity [8]. In our experiments, we use the KMeans clustering algorithm with the Jaccard distance to measure the distance between states based on their features. In practice, any clustering algorithm can be used, including manual clustering. The goal is to create meaningful partitions of the state space to guide the selection of critical states for querying the user.
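The sketch below illustrates this step on binary state features. Because off-the-shelf KMeans implementations assume Euclidean distance, it substitutes agglomerative clustering over pairwise Jaccard distances (any clustering method can be used, as noted above), so it is an approximation rather than the paper's exact setup.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_states(state_features, n_clusters=3):
    """Group states by their binary feature vectors (e.g., <vase, carpet>) using
    pairwise Jaccard distance, returning one cluster id per state."""
    X = np.asarray(state_features, dtype=bool)
    d = pdist(X, metric="jaccard")            # condensed pairwise distance matrix
    Z = linkage(d, method="average")          # agglomerative clustering
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Example: vase-domain features <v, c> for five states.
labels = cluster_states([[0, 0], [1, 0], [1, 1], [0, 1], [1, 1]], n_clusters=3)
```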

2. Estimating information gain: We define the information gain of sampling from a cluster k\in K, based on the learning gap discussed earlier, as

IG(k)^{t} = \frac{1}{|\Omega_{k}^{t-1}|}\sum_{s\in\Omega_{k}^{t-1}} D_{KL}(p^{t}\|q^{t-1})   (1)
          = \frac{1}{|\Omega_{k}^{t-1}|}\sum_{s\in\Omega_{k}^{t-1}}\sum_{a\in A} p^{t}(a|s)\cdot\log\left(\frac{p^{t}(a|s)}{q^{t-1}(a|s)}\right)   (2)

where \Omega_{k}^{t-1} denotes the set of states sampled for querying from cluster k at iteration t-1, and p^{t}(a|s) and q^{t-1}(a|s) denote the NSE severity labels of action a in s, as provided in the feedback data and in the agent's learned model, respectively.

3. Sampling critical states: At each learning iteration t, the agent assigns a weight w_{k} to each cluster k\in K, proportional to the new information on NSEs revealed by the most informative feedback format identified at t-1, using Eqn. 2. Clusters are given equal weights when there is no prior feedback (Line 4). We sample critical states in batches, but they can also be sampled sequentially. When sampling in batches of N states, the number of states n_{k} to be sampled from each cluster is determined by its assigned weight. At least one state is sampled from each cluster to ensure sufficient information for calculating the information gain for every cluster (Line 5). The agent randomly samples n_{k} states from the corresponding cluster and adds them to the set of critical states \Omega (Lines 6, 7). If the total number of critical states sampled is less than N due to rounding, then the remaining N_{r} states are sampled from the cluster with the highest weight and added to \Omega (Lines 9-12).

Algorithm 1 Critical States Selection
0:  N: number of critical states; \mathcal{K}: number of clusters
1:  \Omega\leftarrow\emptyset
2:  Cluster states into \mathcal{K} clusters, K=\{k_{1},\ldots,k_{\mathcal{K}}\}
3:  for each cluster k\in K do
4:     W_{k}\leftarrow\begin{cases}\frac{1}{\mathcal{K}}, & \text{if no feedback received in any iteration}\\ \frac{IG(k)}{\sum_{k\in K}IG(k)}, & \text{if feedback received}\end{cases}
5:     n_{k}\leftarrow\max(1,\lfloor W_{k}\cdot N\rfloor)
6:     Sample n_{k} states at random, \Omega_{k}\leftarrow\text{Sample}(k,n_{k})
7:     \Omega\leftarrow\Omega\cup\Omega_{k}
8:  end for
9:  N_{r}\leftarrow N-|\Omega|
10:  if N_{r}>0 then
11:     k^{\prime}\leftarrow\arg\max_{k\in K}W_{k}
12:     \Omega\leftarrow\Omega\cup\text{Sample}(k^{\prime},N_{r})
13:  end if
14:  return  Set of selected critical states \Omega
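A compact Python sketch of Algorithm 1 is given below; the data structures (a dict of cluster id to states, a dict of per-cluster information gain) and the handling of small clusters are our own illustrative choices.

```python
import random

def select_critical_states(clusters, info_gain, n_states):
    """Sample a batch of critical states, weighting each cluster by its normalized
    information gain IG(k); clusters get equal weight when no feedback has been
    received yet. `clusters`: {cluster_id: [states]}, `info_gain`: {cluster_id: IG(k)}."""
    K = list(clusters.keys())
    total_ig = sum(info_gain.get(k, 0.0) for k in K)
    weights = {k: (info_gain.get(k, 0.0) / total_ig if total_ig > 0 else 1.0 / len(K))
               for k in K}

    critical = []
    for k in K:
        n_k = max(1, int(weights[k] * n_states))      # at least one state per cluster
        critical += random.sample(clusters[k], min(n_k, len(clusters[k])))

    remainder = n_states - len(critical)              # top up from the heaviest cluster
    if remainder > 0:
        k_best = max(K, key=lambda k: weights[k])
        pool = [s for s in clusters[k_best] if s not in critical]
        critical += random.sample(pool, min(remainder, len(pool)))
    return critical
```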

IV-B Feedback Format Selection

A feedback format f^{*} that maximizes the information gain is selected to query across \Omega. The information gain of a feedback format f at iteration t, for N=|\Omega| critical states, is calculated using the KL divergence between p^{t} and q^{t}:

\mathcal{V}_{f} = \frac{1}{N}\sum_{s\in\Omega}D_{KL}(p^{t}\|q^{t})\cdot\mathbb{I}[f=f_{H}^{t}]+\mathcal{V}_{f}\cdot(1-\mathbb{I}[f=f_{H}^{t}]),   (3)

where \mathbb{I}[f=f_{H}^{t}] is an indicator function that checks whether the format in which the human provided feedback, f_{H}^{t}, matches the requested format f. If no feedback is provided, the information gain of that format remains unchanged.

The following equation is used to select the feedback format f^{*}, accounting for feedback cost and user preferences:

f^{*}=\operatorname*{argmax}_{f\in\mathcal{F}}\underbrace{\frac{\psi(f)}{\mathcal{V}_{f}\cdot C(f)}+\sqrt{\frac{\log t}{n_{f}+\epsilon}}}_{\text{Feedback utility of }f}   (4)

where \psi(f) is the probability of receiving feedback in format f and C(f) is the feedback cost, both determined using the human preference model D. t denotes the current learning iteration, n_{f} is the number of times f was received, and \epsilon is a small value added for numerical stability. Note that f^{*} is the currently most informative feedback format, based on the formats previously used. This may change as the agent explores and incorporates feedback of other formats. Thus, our feedback selection approach effectively manages the trade-off between selecting feedback formats that were previously used to gather information and examining unexplored formats.
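A sketch of this selection rule is shown below; adding the small epsilon to \mathcal{V}_{f} (to guard against division by zero before any feedback is received) is our own assumption and not part of Eqn. 4.

```python
import math

def select_feedback_format(formats, psi, cost, V, n, t, eps=1e-6):
    """Pick the format f maximizing psi(f) / (V_f * C(f)) + sqrt(log t / (n_f + eps)),
    following Eqn. 4. `V` maps format -> current information-gain estimate (Eqn. 3),
    `n` maps format -> number of times feedback was received in that format."""
    def utility(f):
        exploit = psi[f] / ((V[f] + eps) * cost[f])   # eps added here as an assumption (V_f may be 0)
        explore = math.sqrt(math.log(max(t, 1)) / (n[f] + eps))
        return exploit + explore
    return max(formats, key=utility)
```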

Algorithm 2 outlines our feedback format selection approach. Since the agent has no prior knowledge of NSEs, all state-action pairs in p and q are initialized to the safe action label (Line 2). This mapping is refined based on the feedback received from the human as learning progresses. The agent samples N critical states using Algorithm 1 (Line 4). A feedback format f^{*} is selected using Eqn. 4, and the agent queries the human for feedback in f^{*} (Line 5). The human provides feedback in format f^{*} with probability \psi(f^{*}). Upon receiving the feedback, the agent updates the distribution p^{t} based on the new NSE information and trains an NSE prediction model \mathcal{P} (Lines 6-8). The agent's belief of the NSE distribution, q^{t}, is updated for all state-action pairs using \mathcal{P}. The information gain \mathcal{V}_{f^{*}} is updated using Eqn. 3, and n_{f^{*}} is incremented (Lines 9-11). This process repeats until the querying budget is exhausted, producing a predictive model of NSEs. Figure 4 illustrates the critical states and the most informative feedback formats selected at each iteration in the vase domain using our approach.

Refer to caption
Figure 4: Left: Feedback utility of different formats across iterations. Right: An instance of Vase domain. States marked with a circled number represent the iteration in which the state was identified as a critical state. The color of the circle denotes the feedback format chosen for querying during that iteration.
Algorithm 2 Feedback Selection for NSE Learning
0:  B: querying budget
0:  D: human preference model
1:  t\leftarrow 1; \mathcal{V}_{f}\leftarrow 0 and n_{f}\leftarrow 0,\ \forall f\in\mathcal{F}
2:  Initialize p and q assuming all actions are safe: \forall a\in A,\forall s\in S: p(s,a)\leftarrow l_{a}, q(s,a)\leftarrow l_{a}
3:  while B>0 do
4:     Sample critical states using Algorithm 1
5:     Query user with feedback format f^{*}, selected using Eqn. 4, across the sampled \Omega
6:     if feedback received in format f^{*} then
7:        p^{t}\leftarrow Update distribution based on the feedback received in format f^{*}
8:        \mathcal{P}\leftarrow\text{TrainClassifier}(p^{t})
9:        q^{t}\leftarrow\{\mathcal{P}(s,a),\forall a\in A,\forall s\in S\}
10:        Update \mathcal{V}_{f^{*}} using Eqn. 3
11:        n_{f^{*}}\leftarrow n_{f^{*}}+1
12:     end if
13:     B\leftarrow B-C(f^{*}); t\leftarrow t+1
14:  end while
15:  return  NSE classifier model \mathcal{P}
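To show how the pieces fit together, the sketch below composes the earlier sketches (FeedbackPreferenceModel, select_critical_states, select_feedback_format, fit_nse_model) into the querying loop of Algorithm 2. The caller-supplied helpers query_human, featurize, and info_gain_fn, as well as the coarse Eqn. 3 update shown, are illustrative placeholders rather than the paper's implementation.

```python
def afs_learn(budget, D, clusters, query_human, featurize, info_gain_fn, n_critical=10):
    """Adaptive feedback selection loop (Algorithm 2, sketched).
    D: preference model; clusters: {cluster_id: [states]};
    query_human(states, fmt): returns [(s, a, severity_label), ...] or None if no feedback;
    featurize(s, a): feature vector for the NSE classifier;
    info_gain_fn(p, model): per-cluster information gain IG(k), as in Eqn. 2."""
    V = {f: 0.0 for f in D.formats}              # information gain per format (Eqn. 3)
    n = {f: 0 for f in D.formats}                # times feedback was received per format
    p, model, t = {}, None, 1                    # accumulated labels, NSE classifier, iteration
    while budget > 0:
        omega = select_critical_states(clusters, info_gain_fn(p, model), n_critical)
        f_star = select_feedback_format(D.formats, D.psi, D.cost, V, n, t)
        feedback = query_human(omega, f_star)    # arrives with probability psi(f*)
        if feedback is not None:
            p.update({(s, a): label for s, a, label in feedback})
            X, y = zip(*[(featurize(s, a), lab) for (s, a), lab in p.items()])
            model = fit_nse_model(list(X), list(y))
            # Coarse stand-in for the Eqn. 3 update: average per-cluster gain after retraining.
            gains = info_gain_fn(p, model)
            V[f_star] = sum(gains.values()) / max(len(gains), 1)
            n[f_star] += 1
        budget -= D.cost[f_star]
        t += 1
    return model
```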
Refer to caption
Figure 5: Top row: average penalty incurred when querying with different feedback selection techniques. Bottom row: illustrations of the domains used for evaluation: (a) Navigation: unavoidable NSE, (b) Vase: unavoidable NSE, (c) Safety-gym Push, (d) ATARI Freeway. Red box denotes the agent and the goal location is in green.

V EXPERIMENTS

Baselines (i) Naive Agent: The agent naively executes its primary policy without learning about NSEs, providing an upper bound on the NSE penalty incurred. (ii) Oracle: The agent has complete knowledge of R_{T} and R_{N}, providing a lower bound on the NSE penalty incurred. (iii) Reward Inference with \beta Modeling (RI) [33]: The agent selects a feedback format that maximizes information gain according to the human's inferred rationality \beta. (iv) Cost-Sensitive Approach: The agent selects the feedback format with the least cost, according to the preference model D. (v) Most-Probable Feedback: The agent selects the feedback format that the human is most likely to provide, based on D. (vi) Random Critical States: The agent uses our AFS framework to learn about NSEs, except that the critical states are sampled randomly from the entire state space. We use \theta_{1}=1 and \theta_{2}=1 for all our experiments. AFS uses the learned \hat{R}_{N}. Code will be made public after paper acceptance.

Domains, Metrics and Feedback Formats We evaluate the performance of various techniques on four domains in simulation: outdoor navigation, vase, safety-gym's push, and Atari freeway. We simulate feedback for a state-action pair using softmax action selection [33, 35]: the probability of choosing an action a^{\prime} from the set of all safe actions A^{*} in state s is \Pr(a^{\prime}|s)=\frac{e^{Q(s,a^{\prime})}}{\sum_{a\in A^{*}}e^{Q(s,a)}}. We optimize costs (negations of rewards) and compare techniques using the average NSE penalty and the average cost to goal, averaged over 100 trials. For navigation, vase, and push, we simulate explicit human feedback formats. For Atari, we use both explicit (demonstration) and implicit (gaze) feedback from the Atari-HEAD dataset [36]. The penalties for l_{a}, l_{m}, and l_{h} are 0, +5, and +10, respectively.
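A minimal sketch of this simulated-feedback rule is shown below; the Q-function is assumed to be supplied by the underlying task model.

```python
import numpy as np

def simulated_safe_action(state, safe_actions, Q):
    """Sample a simulated human-preferred safe action via softmax action selection:
    Pr(a'|s) proportional to exp(Q(s, a')) over the set of safe actions A*."""
    q_vals = np.array([Q(state, a) for a in safe_actions], dtype=float)
    probs = np.exp(q_vals - q_vals.max())      # subtract max for numerical stability
    probs /= probs.sum()
    return safe_actions[np.random.choice(len(safe_actions), p=probs)]
```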

Navigation: In this ROS-based city environment, the robot optimizes the shortest path to the goal location. A state is represented as \langle x,y,f,p\rangle, where x and y are the robot's coordinates, f is the surface type (concrete or grass), and p indicates the presence of a puddle. The robot can move in all four directions and each move costs +1. Navigating over grass damages the grass and is a mild NSE; navigating over grass with puddles is a severe NSE. The features used for training are \langle f,p\rangle. Here, NSEs are unavoidable.

Vase: In this domain, the robot must quickly reach the goal while minimizing breaking a vase as a side effect [4]. A state is represented as \langle x,y,v,c\rangle, where x and y are the robot's coordinates, v indicates the presence of a vase, and c indicates if the floor is carpeted. The robot moves in all four directions and each move costs +1. Actions succeed with probability 0.8. Breaking a vase placed on a carpet is a mild NSE and breaking a vase on the hard surface is a severe NSE. \langle v,c\rangle are used for training. All instances have unavoidable NSEs.

Push: In this safety-gymnasium domain, the robot aims to push a box quickly to a goal state [37]. Pushing a box onto a hazard zone (blue circles) produces NSEs. We modify the domain so that, in addition to the existing actions, the agent can also wrap the box, which costs +1. The NSEs can be avoided by pushing a wrapped box. A state is represented as \langle x,y,b,w,h\rangle, where x,y are the robot's coordinates, b indicates carrying a box, w indicates if the box is wrapped, and h denotes if it is a hazard area. \langle b,w,h\rangle are used for training.

Atari Freeway: In this Atari game, the robot (a chicken) navigates past ten cars moving at varying speeds to reach the destination quickly while avoiding being hit. Being hit by a car moves the robot back to its previous position and is a severe NSE. A game state is defined by the coordinates (x_{1},y_{1}) and (x_{2},y_{2}), i.e., the top-left and bottom-right corners of the robot and cars, extracted from the Atari-HEAD dataset [36]. Similar to [10], only car coordinates within a specific range of the robot are considered. The robot can move up, down, or stay in place, with unit cost and deterministic transitions.

V-A Results and Discussion

Refer to caption
Figure 6: Average penalty incurred, along with standard error, when learning from a single feedback format (a-c) and using combinations of two formats (d-l). Panels (a, d, g, j): Navigation with unavoidable NSEs; (b, e, h, k): Vase with unavoidable NSEs; (c, f, i, l): Safety-gym Push.
TABLE I: Average cost and standard error at task completion.
Method | Navigation: unavoidable NSE | Vase: unavoidable NSE | Boxpushing: avoidable NSE | Freeway: avoidable NSE
Oracle | 51.37 ± 2.69 | 54.46 ± 6.70 | 44.62 ± 9.97 | 3759.8 ± 0.0
Naive | 36.11 ± 1.39 | 36.0 ± 2.89 | 39.82 ± 5.44 | 61661.0 ± 0.0
RI | 40.10 ± 0.69 | 37.42 ± 1.01 | 42.15 ± 2.44 | 71716.6 ± 0.0
Ours | 49.18 ± 13.67 | 63.0 ± 0.73 | 46.17 ± 0.86 | 1726.5 ± 0.0

Effect of learning using AFS We first examine the benefit of querying using AFS by comparing the resulting average NSE penalties and the cost of task completion across domains and query budgets. Figure 5 shows the average NSE penalties when operating based on an NSE model learned using different querying approaches. Clusters for critical state selection were generated using the KMeans clustering algorithm with K=3 for the navigation, vase, and safety-gym push domains (Figure 5 (a-c)) and K=5 for the Atari Freeway domain (Figure 5 (d)). The results show that our approach consistently performs similarly to or better than the baselines.

There is a trade-off between optimizing task completion and mitigating NSEs, especially when NSEs are unavoidable. While some techniques are better at mitigating NSEs, they significantly impact task performance. Table I shows the average cost of task completion. Lower values are better for both the NSE penalty and the task completion cost. While the Naive Agent has a lower cost of task completion, it incurs the highest NSE penalty as it has no knowledge of R_{N}. RI causes more NSEs, especially when they are unavoidable, as its reward function does not fully model the penalties for mild and severe NSEs. Overall, the results show that our approach consistently mitigates avoidable and unavoidable NSEs without substantially affecting task performance.

Refer to caption
Figure 7: Average penalty incurred using our approach (AFS) with the KMeans and KCenters clustering algorithms, evaluated across varying numbers of clusters (K), on (a) Navigation: unavoidable NSE, (b) Vase: unavoidable NSE, (c) Safety-gym Push, and (d) ATARI Freeway.

Effect of learning from multiple feedback formats To better understand the need for and benefits of AFS, we investigate the benefits of learning from more than one feedback type in general. Figure 6 compares the average NSE penalties of learning from a single feedback format and from multiple feedback formats, with varying querying budgets across domains. In the single-feedback case (Figure 6(a-c)), the Corrections format successfully mitigates NSEs with fewer queries across domains. However, its reliance on constant human guidance is a limitation. While Demo-Action Mismatch requires less human guidance, it is less effective in avoiding NSEs. The effectiveness of Demo-Action Mismatch improves significantly depending on its position within a sequence of feedback formats (Figure 6(d-f)). For instance, using Demo-Action Mismatch before Corrections in safety-gym's push domain results in a lower average penalty with a smaller budget. However, in the vase domain with unavoidable NSEs, the agent performs better when Demo-Action Mismatch follows Corrections. On the other hand, Approval and Annotated Approval have similar performance across domains and require more samples to learn the true distribution of NSEs. However, when combined with Corrections or Annotated Corrections, the performance improves considerably (Figure 6(g-i)). Learning the underlying NSE severities demands a significantly higher number of samples when using a combination of Approval and Annotated Approval formats. Likewise, while Ranking is less effective in mitigating NSEs, it is more effective when used in combination with Corrections or Annotated Corrections (Figure 6(j-l)).

These results show that learning from more than one feedback format is generally useful but the benefits depend on the formats considered together and the order in which they are combined. Identifying the right ordering of feedback formats a priori is often practically infeasible. Our approach enables the agent to identify when and how to query, without any additional pre-processing.

Clustering Figure 7 shows the average penalty incurred using our approach (AFS) with the KMeans and KCenters clustering algorithms for varying numbers of clusters (K=\{2,3\} in the navigation, vase, and push domains, and K=\{3,5\} in the Freeway domain). We restrict our evaluation to these K values since the maximum number of distinct clusters in each domain is determined by the number of unique combinations of state features. In the navigation domain, the features used for clustering states are \langle f,p\rangle. The valid unique combinations are \langle f=\text{concrete}, p=\text{no puddle}\rangle, \langle f=\text{grass}, p=\text{no puddle}\rangle, and \langle f=\text{grass}, p=\text{puddle}\rangle. Hence, K>3 will not produce unique clusters. Similarly, in the vase domain, the features used for clustering are \langle v,c\rangle, where the unique, valid combinations are \langle\text{no vase, no carpet}\rangle, \langle\text{vase, no carpet}\rangle, and \langle\text{vase, carpet}\rangle. For the push domain, the features used for clustering are \langle b,w,h\rangle, with valid unique combinations including \langle\text{no box, not wrapped, hazard}\rangle, \langle\text{box, not wrapped, hazard}\rangle, \langle\text{no box, not wrapped, no hazard}\rangle, and \langle\text{box, wrapped, no hazard}\rangle. In the Freeway domain, the coordinates of the ten cars are used to generate clusters.

The results in Figure 7 demonstrate that increasing K generally improves the performance of our approach with both clustering methods. A higher number of clusters allows for a more refined grouping of states based on distinct state features, enabling the agent to query the human for feedback across a more diverse set of states. This diversity enhances the agent's ability to accurately learn and mitigate NSEs.

VI Conclusion

The proposed Adaptive Feedback Selection (AFS) facilitates querying a human in different formats in different regions of the state space, to effectively learn a reward function. Our approach uses information gain to identify critical states for querying, and the most informative feedback format to query in these states, while accounting for the cost and uncertainty of receiving feedback in each format. Our empirical evaluations using four domains in simulation demonstrate the effectiveness and sample efficiency of our approach in mitigating avoidable and unavoidable negative side effects (NSEs), based on explicit and implicit feedback formats. In the future, we aim to validate our assumptions and results using user studies and extend our approach to continuous settings.

ACKNOWLEDGMENT

This work was supported in part by National Science Foundation grant number 2416459.

References

  • [1] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané, “Concrete problems in AI safety,” arXiv preprint arXiv:1606.06565, 2016.
  • [2] S. Saisubramanian, E. Kamar, and S. Zilberstein, “A multi-objective approach to mitigate negative side effects,” in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, (IJCAI), 2021.
  • [3] A. Srivastava, S. Saisubramanian, P. Paruchuri, A. Kumar, and S. Zilberstein, “Planning and learning for non-markovian negative side effects using finite state controllers,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2023.
  • [4] V. Krakovna, L. Orseau, R. Ngo, M. Martic, and S. Legg, “Avoiding side effects by considering future tasks,” in Advances in Neural Information Processing Systems (NeurIPS), vol. 33, 2020.
  • [5] Y. Cui and S. Niekum, “Active reward learning from critiques,” in 2018 IEEE international conference on robotics and automation (ICRA).   IEEE, 2018.
  • [6] Y. Cui, Q. Zhang, B. Knox, A. Allievi, P. Stone, and S. Niekum, “The empathic framework for task learning from implicit human feedback,” in Conference on Robot Learning (CoRL).   PMLR, 2021.
  • [7] D. Hadfield-Menell, S. J. Russell, P. Abbeel, and A. Dragan, “Cooperative inverse reinforcement learning,” Advances in Neural Information Processing Systems (NeurIPS), vol. 29, 2016.
  • [8] H. Lakkaraju, E. Kamar, R. Caruana, and E. Horvitz, “Identifying unknown unknowns in the open world: Representations and policies for guided exploration,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), vol. 31, no. 1, 2017.
  • [9] A. Y. Ng, S. Russell, et al., “Algorithms for inverse reinforcement learning.” in Proceedings of the Seventeenth International Conference on Machine Learning (ICML), 2000.
  • [10] A. Saran, R. Zhang, E. S. Short, and S. Niekum, “Efficiently guiding imitation learning agents with human gaze,” in International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2021.
  • [11] S. Zhang, E. Durfee, and S. Singh, “Querying to find a safe policy under uncertain safety constraints in markov decision processes,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020.
  • [12] S. Saisubramanian, E. Kamar, and S. Zilberstein, “Avoiding negative side effects of autonomous systems in the open world,” Journal of Artificial Intelligence Research (JAIR), vol. 74, 2022.
  • [13] S. Saisubramanian, S. C. Roberts, and S. Zilberstein, “Understanding user attitudes towards negative side effects of AI systems,” in Extended Abstracts of the 2021 (CHI) Conference on Human Factors in Computing Systems, 2021.
  • [14] Y. Cui, P. Koppol, H. Admoni, S. Niekum, R. Simmons, A. Steinfeld, and T. Fitzgerald, “Understanding the relationship between interactions and outcomes in human-in-the-loop machine learning,” in International Joint Conference on Artificial Intelligence (IJCAI), 2021.
  • [15] B. Settles, “Active learning literature survey,” Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2009.
  • [16] V. Krakovna, L. Orseau, M. Martic, and S. Legg, “Measuring and avoiding side effects using relative reachability,” arXiv preprint arXiv:1806.01186, 2018.
  • [17] S. Saisubramanian and S. Zilberstein, “Mitigating negative side effects via environment shaping,” in International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2021.
  • [18] P. Abbeel and A. Y. Ng, “Apprenticeship learning via inverse reinforcement learning,” in Machine Learning, Proceedings of the Twenty-first International Conference (ICML), 2004.
  • [19] S. Ross, G. Gordon, and D. Bagnell, “A reduction of imitation learning and structured prediction to no-regret online learning,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, (AISTATS), ser. JMLR Proceedings, 2011.
  • [20] D. Brown, S. Niekum, and M. Petrik, “Bayesian robust optimization for imitation learning,” Advances in Neural Information Processing Systems (NeurIPS), 2020.
  • [21] D. S. Brown, Y. Cui, and S. Niekum, “Risk-aware active inverse reinforcement learning,” in Proceedings of The 2nd Conference on Robot Learning (CoRL), vol. 87.   PMLR, 2018, pp. 362–372.
  • [22] D. Hadfield-Menell, S. Milli, P. Abbeel, S. J. Russell, and A. Dragan, “Inverse reward design,” Advances in Neural Information Processing Systems (NeurIPS), vol. 30, 2017.
  • [23] R. Ramakrishnan, E. Kamar, D. Dey, E. Horvitz, and J. Shah, “Blind spot detection for safe sim-to-real transfer,” Journal of Artificial Intelligence Research (JAIR), vol. 67, 2020.
  • [24] D. Ramachandran and E. Amir, “Bayesian inverse reinforcement learning,” in Proceedings of the 20th International Joint Conference on Artifical Intelligence (IJCAI), 2007.
  • [25] D. Brown and S. Niekum, “Efficient probabilistic performance bounds for inverse reinforcement learning,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018.
  • [26] D. P. Losey and M. K. O’Malley, “Including uncertainty when learning from human corrections,” in Conference on Robot Learning (CoRL).   PMLR, 2018.
  • [27] A. Bobu, M. Wiggert, C. Tomlin, and A. D. Dragan, “Feature expansive reward learning: Rethinking human input,” in Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2021.
  • [28] Y. Cui, S. Karamcheti, R. Palleti, N. Shivakumar, P. Liang, and D. Sadigh, “No, to the right – online language corrections for robotic manipulation via shared autonomy,” in Proceedings of the 2023 ACM/IEEE Conference on Human-Robot Interaction (HRI), 2023.
  • [29] D. Brown, R. Coleman, R. Srinivasan, and S. Niekum, “Safe imitation learning via fast bayesian reward inference from preferences,” in International Conference on Machine Learning (ICML).   PMLR, 2020.
  • [30] D. Xu, M. Agarwal, F. Fekri, and R. Sivakumar, “Playing games with implicit human feedback,” in Workshop on Reinforcement Learning in Games, (AAAI), vol. 6, 2020.
  • [31] E. Bıyık, D. P. Losey, M. Palan, N. C. Landolfi, G. Shevchuk, and D. Sadigh, “Learning reward functions from diverse sources of human feedback: Optimally integrating demonstrations and preferences,” The International Journal of Robotics Research (IJRR), 2022.
  • [32] B. Ibarz, J. Leike, T. Pohlen, G. Irving, S. Legg, and D. Amodei, “Reward learning from human preferences and demonstrations in atari,” Advances in Neural Information Processing Systems (NeurIPS), vol. 31, 2018.
  • [33] G. R. Ghosal, M. Zurek, D. S. Brown, and A. D. Dragan, “The effect of modeling human rationality level on learning rewards from multiple feedback types,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2023.
  • [34] J. Tien, J. Z.-Y. He, Z. Erickson, A. Dragan, and D. S. Brown, “Causal confusion and reward misidentification in preference-based reward learning,” in The Eleventh International Conference on Learning Representations (ICLR), 2023.
  • [35] H. J. Jeon, S. Milli, and A. Dragan, “Reward-rational (implicit) choice: A unifying formalism for reward learning,” Advances in Neural Information Processing Systems (NeurIPS), vol. 33, 2020.
  • [36] R. Zhang, C. Walshe, Z. Liu, L. Guan, K. Muller, J. Whritner, L. Zhang, M. Hayhoe, and D. Ballard, “Atari-head: Atari human eye-tracking and demonstration dataset,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), vol. 34, 2020.
  • [37] J. Ji, B. Zhang, J. Zhou, X. Pan, W. Huang, R. Sun, Y. Geng, Y. Zhong, J. Dai, and Y. Yang, “Safety gymnasium: A unified safe reinforcement learning benchmark,” in Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track (NeurIPS), 2023.