Epistemic Exploration for Generalizable Planning and Learning in Non-Stationary Settings
Abstract
This paper introduces a new approach for continual planning and model learning in relational, non-stationary stochastic environments. Such capabilities are essential for the deployment of sequential decision-making systems in the uncertain and constantly evolving real world. Working in such practical settings with unknown (and non-stationary) transition systems and changing tasks, the proposed framework models gaps in the agent’s current state of knowledge and uses them to conduct focused, investigative explorations. Data collected using these explorations is used for learning generalizable probabilistic models for solving the current task despite continual changes in the environment dynamics. Empirical evaluations on several non-stationary benchmark domains show that this approach significantly outperforms planning and RL baselines in terms of sample complexity. Theoretical results show that the system exhibits desirable convergence properties when stationarity holds.
1 Introduction
This paper addresses the problem of planning in non-stationary stochastic settings with unknown domain dynamics. In particular, we consider problems where a goal-oriented agent is not given a closed-form model of the probabilities of the states that may result from executing an action. Furthermore, these probabilities can change at potentially unknown time steps while the agent is executing in the environment. Such settings are commonly encountered by planning systems in the real world. For example, an autonomous warehouse robot would be expected to continue achieving goals through different paths when some corridors get blocked due to spills, or when the layouts of storage racks change to accommodate changing inventory profiles. Currently, such changes require renewed modeling by domain experts, thus limiting the scope and deployability of automated planning methods.
These settings are technically challenging due to the need to correctly model uncertainty about the agent's knowledge when a discrepancy is detected, and to conduct focused exploration that can improve the agent's knowledge for subsequent planning. Prior work on the problem investigates the role of randomized exploration for addressing non-stationarity. E.g., if the rate of novelty events inducing non-stationarity is sufficiently low compared to the timesteps available for learning in each epoch of stationary dynamics, Reinforcement Learning (RL) techniques such as Q-Learning with variations of ε-greedy exploration can be guaranteed to successively converge to optimal policies. However, these methods are likely to be sample-inefficient, as the collection of new data is not easily focused towards the parts of the environment that changed.
We present a new framework for continual learning and planning under non-stationarity for such settings (Sec. 3.1), develop solution algorithms for this paradigm (Sec. 3.3) and evaluate their performance across various forms of the problem, depending on whether the change in dynamics is known to the agent and whether the agent conducts comprehensive re-learning or need-based learning (Sec. 4).
Our approach addresses the challenges discussed above with autonomous processes for deliberative data gathering, planning, and model learning. It starts with the inputs available to a standard RL agent (a simulator, action names, and a reward generator), but instead of learning a policy, it interacts with the environment to first learn a relational probabilistic planning model geared towards solving the current goal, and then uses it to compute solution policies. When a discrepancy is detected, it flags aspects of the currently learned model that are no longer accurate, and conducts investigative exploration with auto-generated epistemically-guided policies to re-learn aspects that may have changed. The problem of computing useful investigative policies is non-trivial. This is reduced to a fully-observable non-deterministic (FOND) (Cimatti, Roveri, and Traverso 1998) planning problem and solved without interacting with the simulator. The computed investigative policies are then executed and the resulting data is used to learn more accurate models. Although these executions are not focused on policy learning for the current task, they are used to learn and maintain relational Probabilistic Planning Domain Description Language (PPDDL) style models. We show that (i) this significantly increases transferability and generalizability of learning, and (ii) the resulting paradigm vastly outperforms SOTA RL and existing model-based RL paradigms.
Our main contribution is the first known approach for using information about the epistemic uncertainty of a logic-based internal probabilistic model to create exploration strategies, learn better models, and then compute plans even as transition systems change. Additionally, this is also the first approach to interleave active learning with epistemic exploration to discover a stochastic symbolic model suited for task transfer in non-stationary environments. Empirical analysis on non-stationary versions of benchmark domains shows that in such settings our approach (i) significantly reduces the sample complexity compared to SOTA baselines; (ii) can quickly adapt to changes in environment dynamics; and (iii) performs very close to an oracle that has access to all the information about changes in the environment a priori.
2 Background
Relational Markov Decision Processes (RMDPs) We model tasks as RMDPs expressed in PPDDL (Younes et al. 2005). An RMDP environment or domain is a tuple consisting of a set of parameterized predicates and a set of parameterized actions, each taking a list of variables as parameters. We use the term lifted for predicates and actions whose arguments are variables, and we omit the parameterization when it is clear from context. A grounded RMDP task (or problem) additionally specifies a set of objects. A literal is a predicate whose parameters have been substituted with objects. Formally, predicates are grounded by computing a mapping from their parameters to the objects; the same mapping can also be used to lift grounded predicates and actions. We refer to the set of all possible grounded predicates derivable from the domain predicates and the task objects as the task's grounded predicates, and we distinguish notationally between lifted and grounded entities.
A state s is a complete valuation of all grounded predicates. Following the closed-world assumption, predicates whose values are false are omitted from the state representation. The set of all possible subsets of grounded predicates forms the state space S of the RMDP. Similarly, the action space A is formed by grounding each action with the task objects. The transition function T is implemented by a simulator: for a transition (s, a, s'), T(s, a, s') specifies the probability of reaching state s' upon executing action a in state s. Naturally, for any s ∈ S and a ∈ A, the probabilities T(s, a, s') sum to 1 over all s' ∈ S.
The simulator is a function that, given a state s and an action a, returns a next state s' sampled according to T(s, a, ·). Executing an action constitutes one step on the simulator; we track the total number of steps executed and impose a step budget after which the simulator cannot be used. Each task specifies an initial state s₀ and a goal g expressed as a conjunctive first-order logic formula over the domain predicates and the task objects. A goal state is a state that satisfies g. The reward function R(s, a) indicates the reward obtained for executing action a in state s; for all actions, we set the reward to one fixed value for goal states and to another, lower value otherwise. Finally, γ is the discount factor. Execution begins in the initial state and terminates when a goal state is reached or when a horizon H has been exceeded. An RMDP task is accomplished whenever execution terminates in a goal state.
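To make this interface concrete, the following minimal sketch shows one way the simulator described above could be exposed to an agent; the class name, the State alias, and the step-budget bookkeeping are illustrative assumptions rather than the authors' actual implementation.

```python
from typing import Callable, FrozenSet

State = FrozenSet[str]  # a state is the set of grounded predicates that are currently true

class Simulator:
    """Minimal sketch of the black-box simulator interface assumed above."""
    def __init__(self, sample_next: Callable[[State, str], State], budget: int):
        self.sample_next = sample_next  # draws s' ~ T(s, a, .); its internals are opaque
        self.budget = budget            # total number of simulator steps allowed
        self.steps = 0                  # steps executed so far

    def step(self, state: State, action: str) -> State:
        """Execute one action; every call consumes one unit of the step budget."""
        if self.steps >= self.budget:
            raise RuntimeError("simulator step budget exhausted")
        self.steps += 1
        return self.sample_next(state, action)
```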
Running Example Consider a robot that is deployed to assist in a warehouse. The robot is equipped with sensors and actuators (e.g., camera, wheels, grippers, etc.) that can help it perform a variety of tasks such as cleaning floors, restocking shelves, etc. Such tasks could be specified using a domain whose predicates describe, e.g., the locations of the robot and the boxes and whether the robot is holding a box, and whose actions include moving between locations, picking up boxes, and putting them down, with their transition function implemented by a simulator.
Example RMDP task Consider an environment with one robot, two locations, and one box. An RMDP task of moving the box to a designated location and parking the robot anywhere could be modeled using these objects together with an initial state and a conjunctive goal formula over the corresponding grounded predicates.
A solution to an RMDP is a deterministic policy π that maps states to actions. The value of a state s under a policy π is defined as the expected cumulative discounted reward obtained when executing π(s) in s and following π thereafter, i.e., V^π(s) = E[ Σ_{t≥0} γ^t R(s_t, π(s_t)) | s_0 = s ]. The objective of an RMDP is to compute an optimal policy π* that maximizes this expected reward (without loss of generality, we focus on policies that are optimal w.r.t. the initial state). Model-based RMDP algorithms compute π* by iteratively solving the Bellman Optimality Equation starting from an arbitrary initial value function (Sutton and Barto 1998):
V(s) = max_{a ∈ A} [ R(s, a) + γ Σ_{s' ∈ S} T(s, a, s') V(s') ]        (1)
The above equation requires closed-form knowledge of the transition function T. When such information is unavailable, RL-based RMDP algorithms use sample estimates of Q-values instead. Given a policy π, the Q-value of a state s when executing action a is defined as the expected reward obtained when executing a in s and following π thereafter, i.e., Q^π(s, a) = R(s, a) + γ Σ_{s'} T(s, a, s') V^π(s'). The Q-Learning update (Watkins 1989) can be written as:

Q(s, a) ← Q(s, a) + α [ R(s, a) + γ max_{a'} Q(s', a') − Q(s, a) ]

where α is the learning rate. Q-Learning employs an exploration strategy such as ε-greedy, wherein a random action is selected with probability ε and the greedy action is selected otherwise. Q-Learning has been shown to converge to the optimal policy (Sutton and Barto 1998).
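For concreteness, a minimal tabular Q-Learning loop with ε-greedy exploration is sketched below. It assumes the hypothetical Simulator interface sketched earlier; the reward and goal-test callables as well as the hyperparameter values are placeholders, not the settings used in the experiments.

```python
import random
from collections import defaultdict

def q_learning(sim, actions, s0, reward, is_goal, horizon, episodes,
               alpha=0.3, gamma=0.9, epsilon=0.3):
    """Tabular Q-Learning with epsilon-greedy exploration (illustrative sketch)."""
    Q = defaultdict(float)  # maps (state, action) pairs to Q-value estimates
    for _ in range(episodes):
        s = s0
        for _ in range(horizon):
            # epsilon-greedy: explore with probability epsilon, otherwise act greedily
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s_next = sim.step(s, a)  # sample s' ~ T(s, a, .)
            target = reward(s, a) + gamma * max(Q[(s_next, act)] for act in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])  # Watkins' Q-learning update
            s = s_next
            if is_goal(s):
                break
    return Q
```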
PPDDL transition models Our approach learns lifted PPDDL models that can be used for stochastic planning using Eqn. 1. We note that the simulator's implementation of the transition function could be arbitrary and does not need to be a PPDDL model. Given an RMDP, a PPDDL model for an action is a tuple ⟨Pre, Prob, Eff⟩; we omit the action subscript when it is clear from context. Pre represents the precondition and is expressed as a conjunctive formula over the predicates. Prob is a list of probabilities that sum to one. Eff is a list of effects, one per entry of Prob; each effect is a pair of sets of predicates (an add set and a delete set).
An action is applicable in a state iff the state satisfies its precondition. Applying an effect ⟨add, del⟩ to a state s results in the state (s \ del) ∪ add. Applying an applicable action to a state applies exactly one of its effects, chosen with the corresponding probability from Prob; if the action is not applicable, the state remains unchanged. A PPDDL transition model thus translates to a closed-form specification of the transition function T. A grounded PPDDL model can easily be obtained from a lifted one using the grounding mapping described above. As is the case with RMDP domains, several RMDP tasks from a single domain can share the same lifted PPDDL model.
Example The pick-up action from the running example could be modeled as a PPDDL model whose precondition indicates that the action is applicable only when the robot is not holding anything and is at the same location as the box. One effect, with probability 0.9, indicates that the robot successfully picked up the box and is now holding it. Similarly, another effect, with probability 0.1, could be used to model a slippery gripper that fails to pick up the box.
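A grounded PPDDL-style action model such as the slippery-gripper pick-up above can be represented and sampled as in the sketch below; the dataclass fields and the predicate strings (at, box-at, holding) are hypothetical names chosen for illustration.

```python
import random
from dataclasses import dataclass
from typing import FrozenSet, List, Set, Tuple

@dataclass
class ActionModel:
    """A grounded PPDDL-style action model: precondition, effects, and probabilities."""
    pos_pre: Set[str]                          # literals that must be true
    neg_pre: Set[str]                          # literals that must be false
    effects: List[Tuple[Set[str], Set[str]]]   # (add-set, delete-set) pairs
    probs: List[float]                         # one probability per effect, summing to 1

def is_applicable(m: ActionModel, state: FrozenSet[str]) -> bool:
    return m.pos_pre <= state and not (m.neg_pre & state)

def apply_action(m: ActionModel, state: FrozenSet[str]) -> FrozenSet[str]:
    """Sample one effect according to probs; inapplicable actions leave the state unchanged."""
    if not is_applicable(m, state):
        return state
    add, delete = random.choices(m.effects, weights=m.probs, k=1)[0]
    return frozenset((state - frozenset(delete)) | frozenset(add))

# Slippery-gripper pick-up for robot r1 and box b1 at location l1 (names are hypothetical).
pick_up = ActionModel(
    pos_pre={"at(r1,l1)", "box-at(b1,l1)"},
    neg_pre={"holding(r1,b1)"},
    effects=[({"holding(r1,b1)"}, {"box-at(b1,l1)"}),  # success: robot now holds the box
             (set(), set())],                          # slip: the state is unchanged
    probs=[0.9, 0.1],
)
```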
Definition 2.1 (Model-Consistent Transition).
Given a PPDDL model for an action a of an RMDP, a transition (s, a, s') is said to be consistent with the model iff either (i) a is not applicable in s according to the model and s' = s, or (ii) a is applicable in s and there exists an effect with non-zero probability whose application to s results in s'.
A lifted PPDDL model is implicitly converted to a grounded PPDDL model when checking its consistency with a transition.
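Reusing the hypothetical ActionModel sketch above, checking whether an observed transition is consistent with a (grounded) model amounts to testing whether the model can generate the observed successor state:

```python
from typing import FrozenSet

def is_consistent(m: ActionModel, s: FrozenSet[str], s_next: FrozenSet[str]) -> bool:
    """True iff the transition (s, a, s') can be generated by the grounded model of a."""
    if not is_applicable(m, s):
        return s_next == s  # an inapplicable action must leave the state unchanged
    return any(
        s_next == frozenset((s - frozenset(delete)) | frozenset(add))
        for (add, delete), p in zip(m.effects, m.probs) if p > 0
    )
```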
PPDDL Model-Learning Given a dataset composed of transitions obtained from an RMDP task, the PPDDL model-learning problem is to compute a model that is consistent with every transition in the dataset. The two major classes of techniques for model learning are active and passive learning. Active learners interactively explore the state space to generate the dataset used for learning the model, whereas passive learners require the dataset to be provided as input. We use active learning, as it has been shown to work well in deterministic, non-stationary settings (Nayyar, Verma, and Srivastava 2022).
Definition 2.2 (Policy Trace).
Given an RMDP and a simulator, a policy trace of a policy π is a sequence s₀, a₀, s₁, a₁, … of states and actions such that aᵢ = π(sᵢ) and sᵢ₊₁ is the state returned by the simulator upon executing aᵢ in sᵢ.
Definition 2.3 (Distinguishing Policies).
Given an RMDP, a predicate p, policies π₁ and π₂, and a simulator, π₁ and π₂ are distinguishing policies w.r.t. p iff there exist policy traces τ₁ of π₁ and τ₂ of π₂ and a step at which the states reached in the two traces differ in the value of p.
Active Query-based Model Learning (AQML) AQML is an epistemic method that seeks to prune the space of models under consideration by guiding exploration towards states that can help update the model. The key observation is that, for any given action, a predicate can appear as a positive precondition, a negative precondition, or not appear at all in the precondition. Similarly, it could appear in any of these modes in any of the action's effect lists. This induces an exponentially large number of models over which a model learner must search. We can prune this search space by selecting a single predicate and generating candidate models
in which that predicate appears in a positive, negative, or absent mode in the precondition or effects. Ignoring probabilities, AQML takes a pair of these candidate models and reduces query synthesis to a Fully Observable Non-Deterministic (FOND) planning problem. The central idea behind this reduction is that the two models correspond to two separate copies of each predicate in the FOND problem, and a solution is found when a state is reached in which the two copies of the predicates do not match.
This problem can be passed to off-the-shelf solvers, and the solutions to these FOND problems are policies that AQML uses as queries to the planning agent. Because any pair of these candidate models differs only in a single predicate, their solution policies are guaranteed to be distinguishing w.r.t. that predicate, or the problem is unsolvable. AQML then checks which candidate model of the predicate is consistent with the simulator and updates the learned model appropriately (either in the precondition or in one of the effects). The process repeats for the next predicate, with the difference that the information learned so far can now be considered by the FOND planner in the subsequent learning process.
Example Upon identifying that holding the box is an effect of the pick-up action, AQML can generate distinguishing queries by using a FOND planner to resolve other uncertainties, such as whether holding the box is a precondition of the put-down action. AQML does this by generating two abstract models, one with the predicate in the precondition of put-down and another where it is absent. The policy generated by the FOND planner would ensure that the predicate is true in the state before executing the put-down action (possibly by executing a pick-up action).
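The sketch below illustrates the mode enumeration that drives AQML's query synthesis, again using the hypothetical ActionModel representation; the actual FOND compilation and planner invocation are omitted.

```python
from copy import deepcopy

def precondition_variants(base: ActionModel, predicate: str) -> dict:
    """Candidate models that place `predicate` in positive, negative, or absent mode
    in the precondition (effect lists are handled analogously)."""
    variants = {}
    for mode in ("positive", "negative", "absent"):
        m = deepcopy(base)
        m.pos_pre.discard(predicate)
        m.neg_pre.discard(predicate)
        if mode == "positive":
            m.pos_pre.add(predicate)
        elif mode == "negative":
            m.neg_pre.add(predicate)
        variants[mode] = m
    return variants

# Query synthesis would pick a pair of variants, e.g. (variants["positive"],
# variants["absent"]), compile them into one FOND problem with two copies of every
# predicate, and plan for a state in which the two copies disagree; executing that
# policy on the simulator reveals which variant matches the true dynamics.
```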
The key insight is that, unlike other methods, this learning methodology does not wait for random exploration to produce distinguishing policies, but rather actively directs exploration using information about the parts of the model that are inaccurate. We discuss how such components are annotated in Sec. 3.1. This leads to improved sample efficiency in converging to a model that translates to a closed-form specification of the transition function. Once a distinguishing policy is identified, effect probabilities can be estimated using Maximum Likelihood Estimation (MLE) by executing the policy a fixed number of times, where the sampling count is a configurable hyperparameter.
There are two difficulties with vanilla AQML. Firstly, complete models are learned in a single pass in order to guarantee correctness. Secondly, this framework assumes stationarity of the simulator and the query synthesis process is not resilient to changing environment dynamics during the model-learning loop. As a result, AQML cannot efficiently use learned information to update the model when only small parts of the transition system change.
3 Our Approach: Adaptive Model Learning
We use an active learning approach as it can cope with non-stationarity. Existing approaches using active learning are sample inefficient since they are comprehensive learners that relearn from scratch. Building upon the Active Query-based Model Learning framework (AQML) (Verma, Karia, and Srivastava 2023), we develop a paradigm that can work in the presence of non-stationarity. We now begin by describing the problem that we address, followed by a detailed overview of our approach.
Definition 3.1 (RMDP equivalence).
Given a domain and two RMDP tasks derived using it, we define the tasks to be equivalent iff their sets of objects are the same, their initial states and goals are equal, and their transition systems are equivalent.
Definition 3.2 (Continual Planning under Non-Stationarity).
Given a stream of RMDP tasks derived from the same domain, a simulator with a fixed step budget per task, and a simulator transition system that can change at arbitrary intervals, the objective is to maximize the total number of tasks accomplished within the overall budget.
The above problem setting captures many real-world scenarios where environment dynamics often change in situ, i.e., while the agent is actively solving a stream of tasks and without informing the agent. E.g., events like liquid spills on the gripper affecting its friction or navigation pathways being blocked are outside the robot's control and can occur at any moment. Implicitly, this translates to the agent indirectly optimizing a new RMDP task with the same goal but a different transition system. The overall objective is to solve all tasks in a sample-efficient fashion, making it essential to learn and transfer knowledge. An agent that learns a fixed model of the environment, or one that is incapable of detecting such changes, can thus perform quite poorly or even dangerously.
We consider the following taxonomy of methods for continual planning under non-stationarity: (a) Adaptive vs. non-adaptive learners, where adaptive learners can automatically adapt to unknown changes in the transition system, whereas non-adaptive learners must be informed that a change has occurred and, in some cases, of exactly what changed; (b) Comprehensive vs. need-based learners, where the former learn a complete new model from scratch whereas the latter only perform updates to fix the parts of the model that are inconsistent with observed transitions.
We integrate planning and learning by continually learning and updating a PPDDL model of the environment and using it to accomplish tasks. We develop an active, need-based learner that automatically detects and adapts to changes in the transition system. Our approach actively monitors simulator execution and performs sample efficient active learning via directed exploration when transitions are inconsistent w.r.t. the current model. We now describe the components that facilitate continual learning for planning.
3.1 Non-stationarity Aware Model Learning
We significantly alter the AQML framework so that it can work even if the transition system changes during the model-learning process (as policy traces are being generated using the simulator), and enable it to selectively and correctly learn information that is not consistent with the learned model. We accomplish this by always monitoring executions of the simulator. If a transition is not consistent with the learned model, we immediately update the model-learning process, since a new query now needs to be synthesized to resolve the inconsistency. To do so, we identify the predicates in the preconditions (or effects) of the executed action that were inconsistent with the model and mark them in the action's precondition (or effect) to be relearned. This also applies to inconsistencies identified while policy traces are being generated as part of the model-learning process. The new FOND problem does not commit to any mode of the marked predicate in the action's precondition (or effect), and thus the planner needs to compute an alternate solution for the current query.
Example Suppose the learned model lists a predicate as a positive precondition of an action, but the action executes successfully on the simulator in a state where that predicate is false. This means that the precondition is incorrectly represented in the currently learned model and must be relearned. We then add the corresponding candidate models for that predicate (e.g., with the predicate negative or absent in the precondition) to the list of models that need to be considered again by the query-synthesis process.
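A minimal sketch of this flagging step, under the assumptions of the ActionModel representation above, could look as follows; the handling of effect inconsistencies is analogous and omitted here.

```python
def flag_precondition_inconsistencies(model: ActionModel, s, s_next):
    """Return precondition literals contradicted by an observed execution of the action."""
    flagged = set()
    if s_next != s:  # the action visibly executed even though the model may say otherwise
        flagged |= {p for p in model.pos_pre if p not in s}  # required-true literal was false
        flagged |= {p for p in model.neg_pre if p in s}      # required-false literal was true
    for p in flagged:
        # the mode of p is now unknown: remove it and hand its candidate models
        # (positive / negative / absent) back to the query-synthesis process
        model.pos_pre.discard(p)
        model.neg_pre.discard(p)
    return flagged
```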
3.2 Goodness of Fit Tests
Another key difficulty in non-stationary environments arises when the transitions themselves are consistent with the learned preconditions and effects but are drawn from a significantly different distribution. For example, two models of an action with identical preconditions and effects that differ only in the probabilities of those effects can affect an agent's ability to solve a task.
Example In our running example of a slippery gripper, as the probability of slippage increases, the optimal policy might switch to navigating to a human operator and asking them to pick up the object.
Such changes cannot be quickly reflected if only MLE estimates are used to compute probabilities, since these estimates can be slow to adapt to the new distribution. We mitigate this by including goodness-of-fit tests in the planning and learning loop that actively monitor whether the distributions have shifted and, if so, promptly restart the MLE estimation process.
We use Pearson's chi-square test (Pearson 1992) to detect out-of-distribution effects as follows. Once a model for an action has been learned (or a new task is specified), we initialize a frequency-count entry for each of the action's effects. Whenever a new transition consistent with the model is obtained from the simulator, we identify the effect that produced it, increment its count, and perform a goodness-of-fit test using Pearson's chi-square statistic with the standard degrees of freedom (one fewer than the number of effects):

χ² = Σ_i (O_i − E_i)² / E_i

where O_i is the total observed frequency for effect i and E_i is its expected frequency under the currently estimated probabilities. If the confidence computed using χ² is less than some threshold (a fixed value in our experiments), the goodness-of-fit test is deemed to have failed and we reset the recorded frequencies and probabilities for all effects of the action. To ensure that we have enough samples, we only perform this test once a minimum number of observations has been collected. We then update the probabilities using the recorded frequencies via MLE, i.e., the probability of effect i is estimated as O_i divided by the total number of observations for the action.
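The following sketch shows one way to implement this check with SciPy; the significance level and minimum sample count are illustrative placeholders, not the thresholds used in the paper, and the effect probabilities are assumed to be non-zero.

```python
import numpy as np
from scipy import stats

def refresh_effect_probs(observed_counts, model_probs, alpha=0.05, min_samples=30):
    """Chi-square goodness-of-fit check of observed effect frequencies against the model.

    Returns (updated_probs, updated_counts); counts are reset when a shift is detected."""
    counts = np.asarray(observed_counts, dtype=float)
    n = counts.sum()
    if n < min_samples:                               # not enough data to test yet
        return np.asarray(model_probs), counts
    expected = n * np.asarray(model_probs)            # expected frequencies under the model
    _, p_value = stats.chisquare(f_obs=counts, f_exp=expected)  # k-1 degrees of freedom
    if p_value < alpha:                               # distribution shift detected
        return np.asarray(model_probs), np.zeros_like(counts)   # restart MLE estimation
    return counts / n, counts                         # MLE update: O_i / sum_j O_j
```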
3.3 Continual Learning and Planning (CLaP)
Our approach of continually learning PPDDL models has two key advantages. First, since we learn models, Eqn. 1 can be used to compute policies for the task without needing to collect additional experience from the simulator. Second, lifted PPDDL models are generalizable in that they can be zero-shot transferred to tasks with differing object names, quantities, and/or goals. For example, the same pick-up action described earlier can be reused by different RMDP tasks with differing numbers of robots, locations, and/or boxes. This methodology allows our approach to solve tasks efficiently.
Alg. 1 describes our overall process for continual learning and planning. The algorithm takes as input an RMDP task, a simulator, a simulator budget, a learned model, and hyperparameters representing the horizon, sampling count, failure threshold, and confidence threshold respectively. Note that, in the context of Alg. 1, the task only specifies the initial state and the goal. The transition system represented by the simulator can arbitrarily change at any time, but the agent still perceives it as the same task. Alg. 1 first attempts to compute a policy for the task using the learned model and an off-the-shelf RMDP solver such as LAO* (Hansen and Zilberstein 2001) (line 2).
If the transition graph derived using the learned model has no path to the goal, or if the goal has not been reached for a number of episodes exceeding the failure threshold (lines 4-5), the agent explores the state space using the simulator in order to find a transition that is not consistent with the learned model. Initially, when the learned model is empty (empty preconditions and effects for all actions), this step allows the agent to quickly discover transitions for which useful learning can be performed. We used bounded-length random walks to conduct this exploration step in our experiments. If an inconsistent transition is discovered during exploration, the corresponding candidate models are added to the model learner using the approach in Sec. 3.1. This causes model learning to be invoked to resolve the inconsistency and updates the learned model (line 7). As noted in Sec. 3.1, if new inconsistencies are identified during model learning, they are resolved as well. Since the model has been updated, a new policy is computed (line 8).
Once any learning steps are complete and a policy has been computed, we execute the policy's action on the simulator (lines 9-10). If the observed transition is consistent with the learned model, a goodness-of-fit test is performed to improve the probability estimates as noted in Sec. 3.2 (line 13). An inconsistent transition always adds new candidate models for the inconsistencies that need to be resolved by the model learner (line 15). If the goal is reached or the horizon is exceeded, the simulator is reset to the initial state and the failure count is updated accordingly (lines 16-17). Finally, once the budget is exhausted (line 3), the learned model is returned (line 20) so that it can be used for solving future tasks.
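The control flow of Alg. 1 can be summarized by the following structural sketch. All of the callables (planner, explorer, learner, consistency check, probability update, and flagging routine) and the task object's attributes are supplied by the caller and are hypothetical stand-ins for the components described above; budget accounting for the exploration step and other edge cases are omitted.

```python
import random

def clap(task, sim, budget, model, horizon, failure_threshold,
         solve, explore, relearn, is_consistent, update_probs, flag_inconsistency):
    """Structural sketch of Alg. 1 (CLaP): plan, monitor, and relearn on inconsistency."""
    policy = solve(model, task)                        # line 2: plan with the learned model
    s, t, steps, failures = task.initial_state, 0, 0, 0
    while steps < budget:                              # line 3: simulator budget
        if policy is None or failures > failure_threshold:      # lines 4-5
            delta = explore(sim, s)                    # random walk for an inconsistent transition
            if delta is not None:
                model = relearn(model, delta)          # line 7: resolve flagged inconsistencies
            policy = solve(model, task)                # line 8: replan with the updated model
        a = policy(s) if policy else random.choice(task.actions)  # lines 9-10
        s_next = sim.step(s, a)
        steps, t = steps + 1, t + 1
        if is_consistent(model, s, a, s_next):
            update_probs(model, s, a, s_next)          # line 13: goodness-of-fit test + MLE
        else:
            flag_inconsistency(model, s, a, s_next)    # line 15: add candidate models to relearn
        s = s_next
        if task.is_goal(s) or t >= horizon:            # lines 16-17: reset the episode
            failures += 0 if task.is_goal(s) else 1
            s, t = task.initial_state, 0
    return model                                       # line 20: reuse for future tasks
```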
3.4 Theoretical Results
Definition 3.3 (Variational Distance (VD)).
Given an RMDP, let D be a set of transitions and let M₁ and M₂ be two models. The Variational Distance (VD) between these two models is then defined as the fraction of transitions in D that are consistent with exactly one of M₁ and M₂.
Definition 3.4 (Locally Convergent Model Learning).
Given an RMDP, let M be the current model and M* be the accurate (unknown) model, with M ≠ M*. Consider ε > 0 to be an error bound on the variational distance between two models. Model learning is locally convergent iff, for every such ε up to the variational distance between M and M*, there exists a set of distinct transitions sampled from the accurate model such that the model learned using any dataset containing this set reduces the variational distance to M* by at least ε.
Theorem 1.
Let M be an RMDP whose transition system, implemented using a simulator, undergoes a series of changes at arbitrary timesteps. Then, during each stationary epoch between consecutive changes, Alg. 1 performs locally convergent model learning.
Proof (Sketch).
Let Mₜ be the learned model at timestep t. By the correctness property of AQML (Thm. 2 in Verma, Karia, and Srivastava (2023)), the set of transitions that Mₜ can generate must be a subset of those that the simulator's true model can generate. Consider the variational distance between Mₜ and the true model over a set of transitions, and pick any ε no larger than this distance. Let D be a set of transitions that are consistent with the true model but cannot be generated by Mₜ, chosen so that it accounts for a fraction ε of the transitions over which the variational distance is computed. The model that AQML learns from D can generate these transitions in addition to all the transitions that Mₜ could generate. This reduces the variational distance by at least ε, and we obtain the desired result with D as the set required for local convergence. By the properties of AQML (Thm. 1 in Verma, Karia, and Srivastava (2023)), any superset of transitions valid under the true model that contains D will also reduce the variational distance by at least ε. ∎

4 Experiments
We implemented our approach (Alg. 1) in Python 3 and performed an empirical evaluation on four benchmark domains using a single core on a Xeon E5-2680 v4 CPU running at 2.4 GHz with a memory limit of 8 GiB. We found that our approach leads to significantly better transfer performance as compared to the baselines. We describe the empirical setup that we used for conducting the experiments followed by a discussion of the obtained results (Sec. 4.1).
Domains We used four benchmark domains from various International Probabilistic Planning Competitions (IPPCs; https://www.icaps-conference.org/competitions/) for our experiments. We used these benchmark domains since ground-truth models for them are available, and we synthesized simulators using these domains.
We briefly describe the domains that we used below; we annotate each domain with the total number of predicates and actions it contains.
Tireworld is a popular domain that has been used in several IPPCs. The objective of this IPPC benchmark is to drive from the initial position to the goal position (accounting for flat tires that can stochastically occur).
FirstResponders is a domain inspired from emergency services. The objective is to put out all fires and treat all victims. To do so, a planning agent needs to be able to plan to reach locations under fire and put them out (refilling water as needed) and also treat victims either at the fire site or ferry them to a hospital if the injuries are too severe.
Elevators is a stochastic extension of the deterministic Miconic (Long and Fox 2003) domain wherein there are several new objectives such as coins to be collected and elements such as shafts and gates that constrain navigation.
Blocksworld is an environment where the goal is to arrange blocks in specific configurations. The IPPC variant is ExplodingBlocks wherein the table can be destroyed whilst stacking blocks. We tried to generate problems for ExplodingBlocks but were unsuccessful and as a result used the ergodic version instead where stacking blocks has a chance to drop them on the table. Nevertheless, the non-stationarity we introduce (described below) can often introduce dead-end states (i.e., states from which the goal cannot be reached).
Task Generation All tasks in the benchmark suite share a single transition system and, to the best of our knowledge, there are no official problem generators that can introduce non-stationarity and generate tasks for it. Thus, we introduced non-stationarity by generating new domain files obtained by changing a randomly selected action from the domain file of the previously generated task. We performed between 0 and 3 changes to both the preconditions and effects of the selected action, by adding, deleting, or modifying a predicate in the action's model, and ensured that at least one change was made. This method of introducing non-stationarity resulted in the transition system of the final task being significantly different from the benchmark task, with several actions changed.
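A sketch of this perturbation procedure, applied to the hypothetical ActionModel representation from Sec. 2, is shown below; the specific mutation choices are illustrative rather than the exact generator used in our experiments.

```python
import random

def perturb_action(model: ActionModel, predicates: list, rng=random):
    """Inject non-stationarity: 0-3 random predicate changes each to the precondition and
    to one effect of the selected action, with at least one change overall."""
    def mutate(literals: set) -> int:
        n_changes = rng.randint(0, 3)
        for _ in range(n_changes):
            if literals and rng.random() < 0.5:
                literals.remove(rng.choice(sorted(literals)))  # delete an existing predicate
            else:
                literals.add(rng.choice(predicates))           # add (or effectively modify) one
        return n_changes
    add_set, _del_set = rng.choice(model.effects)
    total = mutate(model.pos_pre) + mutate(add_set)
    if total == 0:                                             # guarantee at least one change
        model.pos_pre.add(rng.choice(predicates))
    return model
```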
Task Setup We generated five different tasks with different initial states and goals. The first task was the benchmark task, and the others were generated using Breadth First Search. We used the same simulator budget and horizon for all tasks.
Baselines We used Q-Learning as our non-transfer RL baseline. We also used an Oracle that has complete access to the closed-form model of the simulator and uses LAO∗ to compute policies. The Oracle baseline provides an upper bound on the performance achievable by any algorithm.
We utilized QACE (Verma, Karia, and Srivastava 2023), a SOTA stochastic model learner, as the AQML-based model-learning algorithm for developing our second baseline. We modified QACE to detect changes in the transition system thus creating a SOTA adaptive baseline called Adaptive QACE. When an inconsistency is detected, Adaptive QACE invokes QACE to relearn the model from scratch. The extended version (Karia et al. 2024) includes an ablation called Non-adaptive QACE wherein QACE is informed when a change in the transition system occurs.
We also considered ILM (Ng and Petrick 2019), since it can learn noisy deictic rules, but we could not use it despite significant effort (including contacting the authors).
We compare the baselines against our system (CLaP): an active, adaptive, need-based learner implementing Alg. 1.
Hyperparameters We used fixed hyperparameter values for Q-Learning and for the AQML-based methods and CLaP. Additionally, we used fixed values of the failure threshold and confidence threshold for CLaP.
4.1 Analysis of Results
As mentioned in Sec. 2, we consider a task accomplished when a goal state is reached. We used a fixed simulator budget for each task. The transition system is kept stationary for a fixed number of steps, after which the simulator is loaded with a new task and a new transition system.
Fig. 1 shows the obtained results from our experiments with 10 different random seeds used by the algorithms. We analyze the results to answer the following questions.
(a) Is CLaP sample efficient?
(b) Are CLaP solutions performant?
(c) Are CLaP solutions generalizable?
Evaluation Metrics We use the following evaluation metrics to answer the questions above. We answer (a) by plotting learning curves that show how many tasks were accomplished during the learning process. We answer (b) by comparing policy quality: at fixed intervals of simulator steps, we freeze the computed policy and generate 10 policy traces, each starting from the initial state of the task with a maximum horizon of 40; these simulations do not count towards the simulator's budget, and we report the average reward obtained. We answer (c) by computing the adaptive delay (Balloch et al. 2022), which measures how many steps are necessary in the environment before steady-state performance converges to that of the Oracle. We defined steady-state performance as the total steps needed in an environment after which performance in an episode was always within a fixed margin of the Oracle's.
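As an illustration of the adaptive-delay metric, the sketch below computes it from per-episode reward logs of the agent and the Oracle; the relative margin and the per-episode step count are hypothetical parameters, not the values used in our evaluation.

```python
def adaptive_delay(agent_rewards, oracle_rewards, steps_per_episode, margin=0.1):
    """Environment steps until every subsequent episode stays within `margin` of the Oracle."""
    n = min(len(agent_rewards), len(oracle_rewards))
    within = [abs(agent_rewards[i] - oracle_rewards[i]) <= margin * abs(oracle_rewards[i])
              for i in range(n)]
    for i in range(n):               # earliest episode after which performance stays close
        if all(within[i:]):
            return i * steps_per_episode
    return n * steps_per_episode     # never converged within the logged episodes
```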
It is clear from Fig. 1 that our approach of continual learning and planning (CLaP) outperforms both non-transfer (Q-Learning) and model-based relearning (Adaptive QACE).
(a) Sample Efficiency Our results in Fig. 1(a) show that CLaP has a much better sample complexity compared to the baselines. The learning curves from FirstResponders, Elevators and Blocksworld show that our approach can accomplish significantly more tasks than the baselines. Q-Learning does not learn and transfer any knowledge and thus needs to collect large amounts of experience to solve tasks.
Adaptive QACE cannot efficiently correct the model when transition systems change, since it needs to relearn all actions to converge. This drawback of comprehensive learners is highlighted in the results on the Elevators domain, where even Q-Learning outperformed Adaptive QACE. For the Elevators domain, the transition system change rendered some task-irrelevant actions executable from states that were reachable only over very long horizons. Adaptive QACE exhausted the simulator's budget trying to relearn these task-irrelevant actions and thus was not able to solve the task. CLaP, on the other hand, lazily evaluates whether or not to learn each part of the model, and was able to quickly fix the learned model and compute a policy that could solve the task. These trends can also be seen in FirstResponders, where Adaptive QACE must relearn 10 actions from scratch every time an inconsistency is observed.
(b) Better Task Performance Fig. 1(b) shows that avg. rewards of CLaP policies are very close to the Oracle’s. This suggests that our learned models are often good approximations of the transition system. CLaP’s policies converge to those of the Oracle’s across all tasks in our evaluation.
(c) Better Generalizability Our approach has a significantly lower adaptive delay (Fig. 1(c)), i.e., CLaP is able to utilize and transfer learned knowledge across problems efficiently, compared to the baselines, which take a significant number of samples to converge to the Oracle's performance. For example, CLaP zero-shot transferred between two of the Blocksworld tasks (the adaptive delay was 0), requiring no learning to solve the later task while also matching the Oracle's performance. In cases where adaptation was needed (e.g., for the remaining Blocksworld tasks), CLaP few-shot learns the required knowledge to accomplish the task with policy qualities similar to that of the Oracle. In general, CLaP's adaptive delay was the best among all evaluated methods.
We also conducted a directed experiment to evaluate the adaptability of our method to changing distributions. To do this, we generated two tasks from a two-armed bandit domain. Pulling either of the levers stochastically takes the agent to the goal, so the optimal policy is to repeatedly pull the lever with the highest probability of reaching the goal. In the first task, the first (second) lever succeeds with probability 0.8 (0.2). In the second task, the probabilities are 0.1 (0.9) respectively, with preconditions and effects unchanged. CLaP utilizes goodness-of-fit tests and thus was able to adapt to this distribution shift, choosing lever 1 (2) for task one (two). Adaptive QACE cannot adapt to such changes and continued to use lever 1 for task two. This resulted in its policies being 9x worse than CLaP's, with only 950 goals achieved overall compared to CLaP's 1550. Plots are available in the extended version (Karia et al. 2024).
Limitations and Future Work Currently, CLaP does not consider the task goal in the model learning process (line 7 of Alg. 1). Making optimistic estimates about the model w.r.t. the goal might allow the model learner to expend fewer samples for learning a model that can accomplish the task.
We do not take into account transition system changes or goals that could be provided in advance. CLaP could utilize that information to develop a curriculum so that useful, unlikely-to-change actions are prioritized to be learned early even if they do not contribute towards the current task’s goal.
PPDDL models have expressiveness limitations such as difficulty in modeling exogenous effects. Future work could use techniques like inductive logic programming to learn models that are more expressive than PPDDL models.
When is it better to learn from scratch There were not many performance gains compared to Adaptive QACE in the Tireworld domain. This is because Tireworld is a small domain with only 2 actions and 4 predicates that need to be learned. Devising heuristics that can evaluate whether learning from scratch would be easier than correcting the model is an interesting problem that we leave to future work.
5 Related Work
There has been plenty of work on transfer in RL (Mnih et al. 2013; Schulman et al. 2017) and on non-stationarity (commonly referred to as novelty in the RL literature). We focus on approaches that transfer across RMDP tasks. Tadepalli, Givan, and Driessens (2004) provide an extensive overview of relational RL approaches.
Model-Based Reinforcement Learning The Dyna framework (Sutton 1990) forms the basis of several model-based reinforcement learning (MBRL) approaches. Ng and Petrick (2019) use conjunctive first-order features to learn models and generalizable policies that transfer to related classes of RMDPs; their approach does not perform guided exploration to resolve ambiguities. REX (Lang, Toussaint, and Kersting 2012) enables MBRL agents to learn tasks autonomously. One challenge with this approach is learning accurate models, since exploration can be sparse when using REX. V-MIN (Martínez et al. 2017) integrates model learning and planning with RL by requesting demonstrations from a teacher if it cannot find a policy whose expected value is greater than a certain threshold; the requirement of an available teacher limits the transfer capabilities of this approach. Taskable RL (TRL) (Illanes et al. 2020) and RePReL (Kokel et al. 2023) show how Hierarchical Reinforcement Learning (HRL) using the options framework can be used for TRL. They use symbolic plans to guide the RL process, but these approaches require models to be provided as input rather than learned. In contrast, our approach generates its own data for learning models using an active learning process.
Learning Models for Non-Stationary Settings GRL (Karia and Srivastava 2022) trains a neural network to learn reactive policies that can transfer to problems from the same domain but with different state spaces. This approach is limited to changes in the state space and cannot adapt to changes in the transition dynamics. Nayyar, Verma, and Srivastava (2022) and Musliner et al. (2021) learn PDDL models, whereas approaches like Sridharan and Meadows (2018) and Sridharan, Meadows, and Gómez (2017) use Answer Set Prolog to represent domain knowledge. These approaches work in non-stationary environments and can be integrated into the interleaved learning and planning loop; however, they only learn deterministic models. Bryce, Benton, and Boldt (2016) address the problem of learning the updated mental model of a user using particle filtering, given prior knowledge about the user's mental model. However, they assume that the entity being modeled can tell the learning system about flaws in the learned model if needed.
Eiter et al. (2010) propose a framework for updating action laws depicted in the form of graphs representing the state space. They assume that changes can only happen in effects, and that knowledge about the state space and what effects might change is available beforehand. There is a large body of work on adapting symbolic models to novelties in open-world environments for RL (Goel et al. 2022; Balloch et al. 2023; Sreedharan and Katz 2023; Mohan et al. 2023). These methods are limited to deterministic settings and/or can only learn new models from passively collected data.
6 Conclusions
We developed a sample-efficient method for transferring epistemic knowledge within an interleaved learning and planning process. Our approach handles non-stationary environments on-the-fly by automatically detecting changes that are inconsistent with the learned model. We reduce sample complexity by only updating the parts of the model that are inconsistent with the simulator's execution. Our approach is resilient to changes in the transition system even if they occur during the model-learning process. We show that when the transition system is stationary our approach is locally convergent. Furthermore, our learned lifted models easily transfer to new tasks. Our empirical results show that our approach significantly reduces sample complexity whilst remaining performant with respect to the optimal policy.
Acknowledgements
We would like to thank Gaurav Vipat for help with an earlier version of the source code. This work was performed in collaboration with Lockheed Martin and was supported in part by the ONR under grant N00014-21-1-2045.
References
- Balloch et al. (2022) Balloch, J. C.; Lin, Z.; Hussain, M.; Srinivas, A.; Wright, R.; Peng, X.; Kim, J. M.; and Riedl, M. O. 2022. NovGrid: A Flexible Grid World for Evaluating Agent Response to Novelty. In AAAI 2022 Spring Symposium on Designing AI for Open Worlds.
- Balloch et al. (2023) Balloch, J. C.; Lin, Z.; Peng, X.; Hussain, M.; Srinivas, A.; Wright, R.; Kim, J. M.; and Riedl, M. O. 2023. Neuro-Symbolic World Models for Adapting to Open World Novelty. In Proc. AAMAS.
- Bryce, Benton, and Boldt (2016) Bryce, D.; Benton, J.; and Boldt, M. W. 2016. Maintaining Evolving Domain Models. In Proc. IJCAI.
- Cimatti, Roveri, and Traverso (1998) Cimatti, A.; Roveri, M.; and Traverso, P. 1998. Strong Planning in Non-Deterministic Domains via Model Checking. In Proc. AIPS.
- Eiter et al. (2010) Eiter, T.; Erdem, E.; Fink, M.; and Senko, J. 2010. Updating Action Domain Descriptions. AIJ, 174(15): 1172–1221.
- Goel et al. (2022) Goel, S.; Shukla, Y.; Sarathy, V.; Scheutz, M.; and Sinapov, J. 2022. RAPid-Learn: A Framework for Learning to Recover for Handling Novelties in Open-World Environments. In Proc. ICDL.
- Hansen and Zilberstein (2001) Hansen, E. A.; and Zilberstein, S. 2001. LAO*: A Heuristic Search Algorithm that Finds Solutions with Loops. AIJ, 129(1-2): 35–62.
- Illanes et al. (2020) Illanes, L.; Yan, X.; Icarte, R. T.; and McIlraith, S. A. 2020. Symbolic Plans as High-Level Instructions for Reinforcement Learning. In Proc. ICAPS.
- Karia and Srivastava (2022) Karia, R.; and Srivastava, S. 2022. Relational Abstractions for Generalized Reinforcement Learning on Symbolic Problems. In Proc. IJCAI.
- Karia et al. (2024) Karia, R.; Verma, P.; Speranzon, A.; and Srivastava, S. 2024. Epistemic Exploration for Generalizable Planning and Learning in Non-Stationary Settings*. arXiv:2402.08145.
- Kokel et al. (2023) Kokel, H.; Natarajan, S.; Ravindran, B.; and Tadepalli, P. 2023. RePReL: A Unified Framework for Integrating Relational Planning and Reinforcement Learning for Effective Abstraction in Discrete and Continuous Domains. Neural Comput. Appl., 35(23): 16877–16892.
- Lang, Toussaint, and Kersting (2012) Lang, T.; Toussaint, M.; and Kersting, K. 2012. Exploration in Relational Domains for Model-based Reinforcement Learning. JMLR, 13: 3725–3768.
- Long and Fox (2003) Long, D.; and Fox, M. 2003. The 3rd International Planning Competition: Results and Analysis. JAIR, 20: 1–59.
- Martínez et al. (2017) Martínez, D. M.; Alenyà, G.; Ribeiro, T.; Inoue, K.; and Torras, C. 2017. Relational Reinforcement Learning for Planning with Exogenous Effects. JMLR, 18: 78:1–78:44.
- Mnih et al. (2013) Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; and Riedmiller, M. A. 2013. Playing Atari with Deep Reinforcement Learning. CoRR, abs/1312.5602.
- Mohan et al. (2023) Mohan, S.; Piotrowski, W.; Stern, R.; Grover, S.; Kim, S.; Le, J.; and Kleer, J. D. 2023. A Domain-Independent Agent Architecture for Adaptive Operation in Evolving Open Worlds. arXiv:2306.06272.
- Musliner et al. (2021) Musliner, D. J.; Pelican, M. J.; McLure, M.; Johnston, S.; Freedman, R. G.; and Knutson, C. 2021. OpenMIND: Planning and Adapting in Domains with Novelty. In Proc. CACS.
- Nayyar, Verma, and Srivastava (2022) Nayyar, R. K.; Verma, P.; and Srivastava, S. 2022. Differential Assessment of Black-Box AI Agents. In Proc. AAAI.
- Ng and Petrick (2019) Ng, J. H. A.; and Petrick, R. 2019. Incremental Learning of Planning Actions in Model-Based Reinforcement Learning. In Proc. IJCAI.
- Pearson (1992) Pearson, K. 1992. On the Criterion that a Given System of Deviations from the Probable in the Case of a Correlated System of Variables is Such that it Can be Reasonably Supposed to have Arisen from Random Sampling, 11–28. New York, NY: Springer New York. ISBN 978-1-4612-4380-9.
- Schulman et al. (2017) Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal Policy Optimization Algorithms. CoRR, abs/1707.06347.
- Sreedharan and Katz (2023) Sreedharan, S.; and Katz, M. 2023. Optimistic Exploration in Reinforcement Learning Using Symbolic Model Estimates. In Proc. NeurIPS.
- Sridharan and Meadows (2018) Sridharan, M.; and Meadows, B. 2018. Knowledge Representation and Interactive Learning of Domain Knowledge for Human-Robot Interaction. Advances in Cognitive Systems, 7: 77–96.
- Sridharan, Meadows, and Gómez (2017) Sridharan, M.; Meadows, B.; and Gómez, R. 2017. What Can I Not Do? Towards an Architecture for Reasoning about and Learning Affordances. In Proc. ICAPS.
- Sutton (1990) Sutton, R. S. 1990. Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming. In Proc. ICML.
- Sutton and Barto (1998) Sutton, R. S.; and Barto, A. G. 1998. Reinforcement Learning: An Introduction. MIT Press. ISBN 978-0-262-19398-6.
- Tadepalli, Givan, and Driessens (2004) Tadepalli, P.; Givan, R.; and Driessens, K. 2004. Relational Reinforcement Learning: An Overview. In ICML RRL Workshop.
- Verma, Karia, and Srivastava (2023) Verma, P.; Karia, R.; and Srivastava, S. 2023. Autonomous Capability Assessment of Sequential Decision-Making Systems in Stochastic Settings. In Proc. NeurIPS.
- Watkins (1989) Watkins, C. 1989. Learning from Delayed Rewards. Ph.D. thesis, King’s College, Cambridge, UK.
- Younes et al. (2005) Younes, H. L. S.; Littman, M. L.; Weissman, D.; and Asmuth, J. 2005. The First Probabilistic Track of the International Planning Competition. JAIR, 24: 851–887.