Marcus Hoerger, School of Mathematics & Physics, The University of Queensland, Australia
Adaptive Discretization using Voronoi Trees for Continuous POMDPs
Abstract
Solving continuous Partially Observable Markov Decision Processes (POMDPs) is challenging, particularly for high-dimensional continuous action spaces. To alleviate this difficulty, we propose a new sampling-based online POMDP solver, called Adaptive Discretization using Voronoi Trees (ADVT). It uses Monte Carlo Tree Search in combination with an adaptive discretization of the action space as well as optimistic optimization to efficiently sample high-dimensional continuous action spaces and compute the best action to perform. Specifically, we adaptively discretize the action space for each sampled belief using a hierarchical partition called Voronoi tree, which is a Binary Space Partitioning (BSP) tree that implicitly maintains the partition of a cell as the Voronoi diagram of two points sampled from the cell. ADVT uses the estimated diameters of the cells to form an upper-confidence bound on the action value function within the cell, guiding the Monte Carlo Tree Search expansion and further discretization of the action space. This enables ADVT to better exploit local information with respect to the action value function, allowing faster identification of the most promising regions in the action space, compared to existing solvers. Voronoi trees keep the cost of partitioning and estimating the diameter of each cell low, even in high-dimensional spaces where many sampled points are required to cover the space well. ADVT additionally handles continuous observation spaces by adopting an observation progressive widening strategy, along with a weighted particle representation of beliefs. Experimental results indicate that ADVT scales substantially better to high-dimensional continuous action spaces, compared to state-of-the-art methods.
keywords:
Planning under Uncertainty, Motion and Path Planning, Autonomy for Mobility and Manipulation

1 Introduction
Planning in scenarios with non-deterministic action effects and partial observability is an essential, yet challenging problem for autonomous robots. The Partially Observable Markov Decision Process (POMDP) (Kaelbling et al., 1998; Sondik, 1971) is a general principled framework for such planning problems. POMDPs lift the planning problem from the state space to the belief space — that is, the set of all probability distributions over the state space. By doing so, POMDPs enable robots to systematically account for uncertainty caused by stochastic actions and incomplete or noisy observations in computing the optimal strategy. Although computing the optimal strategy exactly is intractable in general (Papadimitriou and Tsitsiklis, 1987), the past two decades have seen a surge of sampling-based POMDP solvers (reviewed in (Kurniawati, 2022)) that trade optimality with approximate optimality for computational tractability, enabling POMDPs to become practical for a variety of realistic robotics problems.
Despite these advances, POMDPs with continuous state, action and observation spaces remain a challenge, particularly for high-dimensional continuous action spaces. Recent solvers for continuous-action POMDPs (Fischer and Tas, 2020; Mern et al., 2021; Seiler et al., 2015; Sunberg and Kochenderfer, 2018) are generally online —that is, planning and execution are interleaved— and exploit Monte Carlo Tree Search (MCTS) to find the best action among a finite representative subset of the action space. MCTS interleaves guided belief space sampling, value estimation and action subset refinement to incrementally improve the possibility that the selected subset of actions contains the best action. They generally use UCB1 (Auer et al., 2002) to guide belief space sampling and Monte Carlo backup for value estimation, but differ in the action subset refinement.
Several approaches use the Progressive Widening strategy (Couëtoux et al., 2011) to continuously add new randomly sampled actions once current actions have been sufficiently explored. Examples include POMCPOW (Sunberg and Kochenderfer, 2018) and IPFT (Fischer and Tas, 2020). More recent algorithms combine Progressive Widening with more informed methods for adding new actions: VOMCPOW (Lim et al., 2021) uses Voronoi Optimistic Optimization (Kim et al., 2020) and BOMCP (Mern et al., 2021) uses Bayesian optimization. All of these solvers use UCT-style simulations and Monte Carlo backups. An early line of work, GPS-ABT (Seiler et al., 2015), takes a different approach: It uses Generalized Pattern Search to iteratively select an action subset that is more likely to contain the best action and add it to the set of candidate actions. GPS-ABT uses UCT-style simulations and Bellman backup (following the implementation of ABT (Hoerger et al., 2018; Klimenko et al., 2014)), though the distinction between Monte Carlo and Bellman backup was not clarified nor explored. All of these solvers have been successful in finding good solutions to POMDPs with continuous action spaces, though only for action spaces of relatively low dimension.
To compute good strategies for POMDPs with high-dimensional action spaces, we propose a new MCTS-based online POMDP solver, called Adaptive Discretization using Voronoi Trees (ADVT). ADVT is designed for problems with continuous action spaces, while the state and observation spaces can either be discrete or continuous. ADVT contains three key ideas, as briefly described below.
The first key idea of ADVT is a new data structure for action space discretization, called Voronoi tree. A Voronoi tree represents a hierarchical partition of the action space for a single sampled belief. It follows the structure of a Binary Space Partitioning (BSP) tree, but each partitioning hyper-plane is only implicitly maintained and computed based on the Voronoi diagram of a pair of sampled actions, which results in a partitioning that is much more adaptive to the spatial locations of the sampled actions, compared to state-of-the-art methods (Bubeck et al., 2011; Lim et al., 2021; Mansley et al., 2011; Sunberg and Kochenderfer, 2018; Valko et al., 2013). We additionally maintain estimates of the diameters of the cells in the Voronoi tree. These diameters are used in the other two key ideas, namely, action selection and Voronoi tree refinement, as described in the next two paragraphs. The hierarchical structure of the Voronoi tree allows us to estimate the diameters with an efficient sampling-based algorithm that scales well to high-dimensional action spaces. Section 5 provides a more detailed description on the computational and representational advantages of the Voronoi tree.
The second key idea of ADVT is the use of a cell-diameter-aware upper-confidence bound on the values of actions to guide action selection during planning. This bound represents an upper-confidence bound on the values of all actions within a cell, based on the estimated value of the corresponding sampled action and the diameter of the cell. Our bound is a generalization of a bound that was developed for Lipschitz continuum-arm bandit problems (Wang et al., 2020). It is motivated by the observation that in many continuous action POMDPs for robotics problems, the distance between two actions can often be used as an indication of how similar their values are. Using this observation, ADVT assumes that the action value for a belief is Lipschitz continuous in the action space, which in turn allows us to derive the upper bound. The upper bound helps ADVT to exploit local information (i.e., the estimated value of the representative action of a cell and the cell diameter) with respect to the action value function and bias its search towards the most promising regions of the action space.
The third key idea is a diameter-aware cell refinement rule. We use the estimated diameters of the cells to help ADVT decide if a cell needs further refinement: a larger cell will be split into smaller ones after a small number of simulations, while a smaller one will require more simulations before it is split. This helps ADVT avoid an unnecessarily fine partitioning of non-promising regions in the action space.
To further support ADVT in efficiently finding approximately optimal actions, we use a stochastic version of the Bellman backup (Klimenko et al., 2014) rather than the typical Monte-Carlo backup to estimate the values of sampled actions. Stochastic Bellman backups help ADVT to backpropagate the value of good actions deep in the search tree, instead of averaging them out. This strategy of estimating the action values is particularly helpful for problems with sparse rewards.
Aside from continuous action spaces, continuous observation spaces pose an additional challenge for MCTS-based online POMDP solvers. Recent approaches such as POMCPOW and VOMCPOW apply Progressive Widening in the observation space in conjunction with an explicit representation of the sampled beliefs via a set of weighted particles. Due to its simplicity, we adopt POMCPOW’s strategy to handle continuous observation spaces. This enables ADVT to scale to problems with continuous state, action and observation spaces.
Experimental results on a variety of benchmark problems with increasing dimension (up to 12-D) of the action space and problems with continuous observation spaces indicate that ADVT substantially outperforms state-of-the-art methods (Lim et al., 2021; Sunberg and Kochenderfer, 2018). Our C++ implementation of ADVT is available at https://github.com/hoergems/ADVT.
This paper extends our previous work (Hoerger et al., 2022) in three ways: First is the extension of ADVT to handle continuous observation spaces. Experimental results on an additional POMDP benchmark problem demonstrate the effectiveness of ADVT in handling purely continuous POMDP problems. Second is a substantially expanded discussion on the technical concepts of ADVT. And third is an extended ablation study in which we investigate the effectiveness of stochastic Bellman backups when applied to two baseline solvers POMCPOW and VOMCPOW.
2 Background and Related Work
A POMDP provides a general mathematical framework for sequential decision making under uncertainty. Formally, it is an 8-tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{O}, T, Z, R, b_0, \gamma \rangle$. The robot is initially in a hidden state $s_0 \in \mathcal{S}$. This uncertainty is represented by an initial belief $b_0 \in \mathcal{B}$, which is a probability distribution on the state space $\mathcal{S}$, where $\mathcal{B}$ is the set of all possible beliefs. At each step $t$, the robot executes an action $a_t \in \mathcal{A}$ according to some policy $\pi$. It transitions to a next state $s_{t+1} \in \mathcal{S}$ according to the transition model $T(s_t, a_t, s_{t+1}) = p(s_{t+1} \mid s_t, a_t)$. For discrete state spaces, $T$ represents a probability mass function, whereas for continuous state spaces, it represents a probability density function. The robot does not know the state exactly, but perceives an observation $o_{t+1} \in \mathcal{O}$ according to the observation model $Z(s_{t+1}, a_t, o_{t+1}) = p(o_{t+1} \mid s_{t+1}, a_t)$. $Z$ represents a probability mass function for discrete observation spaces, or a probability density function for continuous observation spaces, respectively. In addition, it receives an immediate reward $r_t = R(s_t, a_t)$. The robot's goal is to find a policy $\pi$ that maximizes the expected total discounted reward or the policy value
$$V_{\pi}(b_0) = \mathbb{E}\left[\left.\sum_{t=0}^{\infty} \gamma^{t} R(s_t, \pi(b_t)) \,\right|\, b_0, \pi\right] \qquad (1)$$
where the discount factor $\gamma \in [0, 1)$ ensures that $V_{\pi}(b_0)$ is finite and well-defined.
The robot's decision space is the set $\Pi$ of policies, defined as mappings from beliefs to actions. The POMDP solution is then the optimal policy, denoted as $\pi^*$ and defined as
$$\pi^* = \operatorname*{argmax}_{\pi \in \Pi} V_{\pi}(b_0) \qquad (2)$$
In designing solvers, it is often convenient to work with the action value or $Q$-value
$$Q(b, a) = R(b, a) + \gamma \int_{o \in \mathcal{O}} p(o \mid b, a)\, V^{*}(\tau(b, a, o))\, do \qquad (3)$$
where $R(b, a) = \mathbb{E}_{s \sim b}\left[R(s, a)\right]$ is the expected reward of executing action $a$ at belief $b$, while $\tau(b, a, o)$ is the updated robot's belief estimate after it performs action $a$ while at belief $b$ and subsequently perceives observation $o$. The optimal value function is then
$$V^{*}(b) = \max_{a \in \mathcal{A}} Q(b, a) \qquad (4)$$
A more elaborate explanation is available in (Kaelbling et al., 1998).
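The belief update $\tau$ used above is the standard Bayes filter induced by the transition and observation models defined earlier; for continuous state spaces it can be written as follows (the discrete case replaces the integrals with sums), where $\eta$ is a normalization constant:

$$\tau(b, a, o)(s') = \eta\, Z(s', a, o) \int_{s \in \mathcal{S}} T(s, a, s')\, b(s)\, ds, \qquad \eta^{-1} = \int_{s' \in \mathcal{S}} Z(s', a, o) \int_{s \in \mathcal{S}} T(s, a, s')\, b(s)\, ds\, ds'$$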
Belief trees are convenient data structures for finding good approximations to the optimal solutions via sampling-based approaches, which have been shown to significantly improve the scalability of POMDP solving (Kurniawati, 2022). Each node in a belief tree represents a sampled belief. It has outgoing edges labelled by actions, and each action edge is followed by outgoing edges labelled by observations which lead to updated belief nodes. Naïvely, bottom-up dynamic programming can be applied to a truncated belief tree to obtain a near-optimal policy, but many scalable POMDP solvers use more sophisticated sampling strategies to construct a compact belief tree, from which a close-to-optimal policy can be computed efficiently. ADVT uses such a sampling-based approach and belief tree representation too.
Various efficient sampling-based offline and online POMDP solvers have been developed for increasingly complex discrete and continuous POMDPs in the last two decades. Offline solvers (e.g., Bai et al. 2014; Kurniawati et al. 2011, 2008; Pineau et al. 2003; Smith and Simmons 2005) compute an approximately optimal policy for all beliefs first before deploying it for execution. In contrast, online solvers (e.g., Kurniawati and Yadav 2013; Silver and Veness 2010; Ye et al. 2017) aim to further scale to larger and more complex problems by interleaving planning and execution, and focusing on computing an optimal action for only the current belief during planning. For scalability purposes, ADVT follows the online solving approach.
Some online solvers have been designed for continuous POMDPs. In addition to the general solvers discussed in Section 1, some solvers (Agha-Mohammadi et al., 2011; Sun et al., 2015; Van Den Berg et al., 2011, 2012) restrict beliefs to be Gaussian and use Linear-Quadratic-Gaussian (LQG) control (Lindquist, 1973) to compute the best action. This strategy generally performs well in high-dimensional action spaces. However, these methods tend to perform poorly in problems with large uncertainties (Hoerger et al., 2020).
Last but not least, hierarchical rectangular partitions have been commonly used to discretize action spaces when solving continuous action bandits and MDPs (the fully observed version of POMDPs), such as HOO (Bubeck et al., 2011) and HOOT (Mansley et al., 2011). However, the partitions used in these algorithms are typically predefined, which are less adaptive than Voronoi-based partitions constructed dynamically during the search. On the other hand, Voronoi partitions have been proposed in VOOT (Kim et al., 2020) and VOMCPOW (Lim et al., 2021). However, their partitions are based on the Voronoi diagram of all sampled actions, which makes the computation of cell diameters and sampling relatively complex in high-dimensional action spaces. ADVT is computationally efficient, just like hierarchical rectangular partitions, and yet adaptive, just like the Voronoi partitions, getting the best of both worlds.
3 ADVT: Overview
In this paper, we consider a POMDP $\langle \mathcal{S}, \mathcal{A}, \mathcal{O}, T, Z, R, b_0, \gamma \rangle$, where the action space $\mathcal{A}$ is continuous and embedded in a bounded metric space with distance function $d$. Typically, we define the metric space to be a $D$-dimensional bounded Euclidean space, though ADVT can also be used with other types of bounded metric spaces. We further consider the state space $\mathcal{S}$ and observation space $\mathcal{O}$ to be either discrete or continuous.
ADVT assumes that the $Q$-value function is Lipschitz continuous in the action space; that is, for any belief $b$, there exists a Lipschitz constant $L_b$ such that for any actions $a, a' \in \mathcal{A}$, we have $|Q(b, a) - Q(b, a')| \leq L_b\, d(a, a')$. Since generally we do not know a tight Lipschitz constant, in the implementation, ADVT uses the same Lipschitz constant $L$ for all beliefs in $\mathcal{B}$, as discussed in Section 4.1.
ADVT is an anytime online solver for POMDPs. It interleaves belief space sampling and action space sampling to find the best action to perform from the current belief. The sampled beliefs are maintained in a belief tree, denoted as $\mathcal{T}$, while the sampled actions $\mathcal{A}(b) \subset \mathcal{A}$ for a belief $b$ are maintained in a Voronoi tree, denoted as $\mathcal{H}(b)$, which is adaptively refined. The Voronoi trees form part of the belief tree in ADVT: they determine the sampled action branches for the belief nodes.
Algorithm 1 presents the overall algorithm of ADVT, with details in the sections below. At each planning step, ADVT follows the MCTS approach of constructing a belief tree by sampling a set of episodes (line 7 in Algorithm 1), starting from the current belief. Details on the belief-tree construction are provided in Section 4. After sampling an episode, the estimated action values along the sequence of actions selected by the episode are updated using a backup operation (line 10 in Algorithm 1). In addition, ADVT refines the Voronoi tree as needed for each belief visited by the episode (line 12 in Algorithm 1), as discussed in Section 5. Once the planning budget is exceeded, ADVT selects an action $a^*$ from the current belief $b$ according to
$$a^* = \operatorname*{argmax}_{a \in \mathcal{A}(b)} \widehat{Q}(b, a) \qquad (5)$$
where $\widehat{Q}(b, a)$ denotes ADVT's estimate of $Q(b, a)$, executes $a^*$ in the environment to obtain an observation $o$, updates the current belief to $b' = \tau(b, a^*, o)$ (lines 15-17 in Algorithm 1), and proceeds planning from the updated belief. This process repeats until the robot enters a terminal state or a maximum number of planning steps has been exceeded.
4 ADVT: Construction of the Belief Tree
The belief tree $\mathcal{T}$ is a tree whose nodes represent beliefs and whose edges are associated with action–observation pairs $(a, o)$, where $a \in \mathcal{A}$ and $o \in \mathcal{O}$. A node $b'$ is a child of node $b$ via edge $(a, o)$ if and only if $b' = \tau(b, a, o)$.
To construct the belief tree $\mathcal{T}$, ADVT interleaves the iterative select-expand-simulate-backup operations used in many MCTS algorithms with adaptive discretization. We assume that we have access to a generative model $G$ that simulates the dynamics, observation and reward models. In particular, for a given state $s$ and action $a$, we have $(s', o, r) = G(s, a)$, where $s'$ is distributed according to $T(s, a, \cdot)$, $o$ is distributed according to $Z(s', a, \cdot)$, and $r = R(s, a)$. At each iteration, we first select a path starting from the root by sampling an episode as follows: We first set the current node $b$ as the root node and sample a state $s$ from it. At each step, we choose an action $a$ for $b$ using an action selection strategy (discussed in Section 4.1), and execute $a$ from state $s$ via the generative model to obtain a next state $s'$, an observation $o$ and the immediate reward $r$. For problems with discrete observation spaces, we set the episode's next state and observation to $s'$ and $o$ respectively. For problems with continuous observation spaces, we select them according to an observation sampling strategy, as discussed in Section 4.3. Finally, we update $b$ to $b$'s child node via the edge $(a, o)$. The process terminates when encountering a terminal state or when the child node does not exist; in the latter case, the tree is expanded by adding a new node, and a rollout policy is simulated to provide an estimated value for the new node. In either case, backup operations are performed to update the estimated values for all actions selected by the episode. In addition, new actions are added to the candidate sets by refining the associated Voronoi tree as needed for each encountered belief. Algorithm 2 presents the pseudo-code for constructing $\mathcal{T}$, while the backup operation and the refinement of the Voronoi trees are discussed in Section 4.2 and Section 5, respectively.
4.1 Action Selection Strategy
In contrast to many existing online solvers, which use UCB1 to select the action to expand a node of $\mathcal{T}$, ADVT treats the action selection problem at a belief node $b$ as a continuum-arm bandit problem. Specifically, it selects an action from the set of candidate actions $\mathcal{A}(b)$ according to (Wang et al., 2020)
$$a^* = \operatorname*{argmax}_{a \in \mathcal{A}(b)} U(b, a) \qquad (6)$$

$$U(b, a) = \widehat{Q}(b, a) + C \sqrt{\frac{\log N(b)}{N(b, a)}} + L \operatorname{diam}(P_{(b,a)}) \qquad (7)$$
where $N(b)$ is the number of times node $b$ has been visited so far, $N(b, a)$ is the number of times $a$ has been selected at $b$, $P_{(b,a)}$ is the unique leaf cell containing $a$ in $\mathcal{H}(b)$ (see Section 5 for details on the Voronoi tree), and $\operatorname{diam}(P_{(b,a)})$ is the diameter of $P_{(b,a)}$ with respect to the distance metric $d$. The constant $C$ is an exploration constant, where larger values of $C$ encourage exploration. In case $N(b, a) = 0$, we set $U(b, a) = \infty$. With the Lipschitz continuity assumption, the value $U(b, a)$ can be seen as an upper-confidence bound for the maximum possible $Q$-value within $P_{(b,a)}$, as follows: $\widehat{Q}(b, a) + C \sqrt{\log N(b) / N(b, a)}$ is the standard UCB1 bound for the $Q$-value $Q(b, a)$, and whenever this upper bounds $Q(b, a)$, we have $U(b, a) \geq Q(b, a')$ for any $a' \in P_{(b,a)}$, because $U(b, a) \geq Q(b, a) + L \operatorname{diam}(P_{(b,a)}) \geq Q(b, a) + L\, d(a, a') \geq Q(b, a')$, where the last inequality holds due to the Lipschitz assumption. Since the Lipschitz constant $L$ is unknown, we try different values of $L$ in our experiments and choose the best.
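To make the rule concrete, the following is a minimal sketch of how eqs. 6 and 7 can be evaluated over the candidate actions of a belief node. The data layout and names (`CandidateAction`, `upperConfidenceBound`, `selectAction`) are illustrative assumptions and are not taken from the authors' implementation.

```cpp
// Sketch of the cell-diameter-aware upper-confidence bound (eq. 7) and the
// action selection rule (eq. 6).
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct CandidateAction {
    std::vector<double> action;  // representative action of a leaf cell
    double qEstimate = 0.0;      // \hat{Q}(b, a)
    int numSelections = 0;       // N(b, a)
    double cellDiameter = 0.0;   // diam(P_(b,a)), estimated as in Section 5.2
};

// Upper-confidence bound U(b, a) on the Q-values of all actions inside the cell.
double upperConfidenceBound(const CandidateAction& c, int beliefVisits,
                            double explorationC, double lipschitzL) {
    if (c.numSelections == 0) return std::numeric_limits<double>::infinity();
    const double ucb1 = explorationC *
        std::sqrt(std::log(static_cast<double>(beliefVisits)) / c.numSelections);
    return c.qEstimate + ucb1 + lipschitzL * c.cellDiameter;
}

// Eq. (6): pick the candidate action with the largest bound.
std::size_t selectAction(const std::vector<CandidateAction>& candidates,
                         int beliefVisits, double explorationC, double lipschitzL) {
    std::size_t best = 0;
    double bestValue = -std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        const double u = upperConfidenceBound(candidates[i], beliefVisits,
                                              explorationC, lipschitzL);
        if (u > bestValue) { bestValue = u; best = i; }
    }
    return best;
}
```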
4.2 Backup
After sampling an episode, ADVT updates the estimates $\widehat{Q}(b, a)$ as well as the statistics $N(b)$ and $N(b, a)$ along the sequence of beliefs visited by the episode. To update $\widehat{Q}(b, a)$, we use a stochastic version of the Bellman backup (Algorithm 3): Suppose $r$ is the immediate reward sampled by the episode after selecting $a$ from $b$. We then update $\widehat{Q}(b, a)$ according to
$$\widehat{Q}(b, a) \leftarrow \widehat{Q}(b, a) + \frac{1}{N(b, a)}\left(r + \gamma \max_{a' \in \mathcal{A}(b')} \widehat{Q}(b', a') - \widehat{Q}(b, a)\right) \qquad (8)$$
where $b'$ is the child of $b$ in the belief tree via the edge $(a, o)$, i.e., the belief we arrived at after performing action $a$ and perceiving observation $o$ from $b$, and the $\widehat{Q}(b, a)$ on the right-hand side of eq. 8 is the previous estimate. This rule is in contrast to POMCP, POMCPOW and VOMCPOW, where the $Q$-value estimates are updated via Monte Carlo backup, i.e.,
$$\widehat{Q}(b, a) \leftarrow \widehat{Q}(b, a) + \frac{1}{N(b, a)}\left(e(b) - \widehat{Q}(b, a)\right) \qquad (9)$$
where $e(b)$ is the total discounted reward accumulated by the episode from belief $b$ onwards.
Our update rule in eq. 8 is akin to the rule used in $Q$-learning (Watkins and Dayan, 1992) and was implemented in the ABT software (Hoerger et al., 2018; Klimenko et al., 2014), though it was never explicitly compared with Monte Carlo backup.
The update rule in eq. 8 helps ADVT to focus its search on promising parts of the belief tree, particularly for problems where good rewards are sparse. For sparse-reward problems, the values of good actions deep in the search tree tend to get averaged out near the root when using Monte-Carlo backups, thus their influence on the action values at the current belief diminishes. In contrast, since stochastic Bellman backups always backpropagate the current largest action value for a visited belief, large action values deep in the search tree have a larger influence on the action values near the root. As we will demonstrate in the experiments, using stochastic Bellman backups instead of Monte-Carlo backups can lead to a significant performance benefit for many POMDP problems.
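The following sketch contrasts the two backup rules (eq. 8 vs. eq. 9). The data layout (`QStats`, `childCandidates`) is assumed for illustration and is not the authors' code.

```cpp
// Stochastic Bellman backup vs. Monte Carlo backup for a single (belief, action) pair.
#include <algorithm>
#include <vector>

struct QStats {
    double qEstimate = 0.0;  // \hat{Q}(b, a)
    int numSelections = 0;   // N(b, a)
};

// Stochastic Bellman backup (eq. 8): the target bootstraps from the best
// estimated action value at the child belief b' reached via (a, o).
void bellmanBackup(QStats& stats, double reward, double discount,
                   const std::vector<QStats>& childCandidates) {
    double childMax = 0.0;
    if (!childCandidates.empty()) {
        childMax = std::max_element(childCandidates.begin(), childCandidates.end(),
            [](const QStats& x, const QStats& y) { return x.qEstimate < y.qEstimate; })
            ->qEstimate;
    }
    stats.numSelections += 1;
    const double target = reward + discount * childMax;
    stats.qEstimate += (target - stats.qEstimate) / stats.numSelections;
}

// Monte Carlo backup (eq. 9), as used by POMCP, POMCPOW and VOMCPOW: the target
// is the total discounted return of the sampled episode from belief b onwards.
void monteCarloBackup(QStats& stats, double episodeReturn) {
    stats.numSelections += 1;
    stats.qEstimate += (episodeReturn - stats.qEstimate) / stats.numSelections;
}
```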
4.3 Observation Sampling Strategy
During the episode-sampling process, when ADVT selects an action at a belief according to eq. 6, we must sample an observation to advance the search to the next belief. ADVT uses different observation sampling strategies, depending on whether the observation space is discrete or continuous. Algorithm 4 summarizes ADVT’s observation sampling strategy for both discrete and continuous observation spaces.
For discrete observation spaces, ADVT follows the common strategy of sampling an observation from the generative model, given the current state of the episode and the selected action, and representing each sampled observation via an observation edge in the search tree. Since the number of possible observations is finite in the discrete setting, this strategy typically works well for moderately sized observation spaces.
For continuous observation spaces, however, the above strategy is unsuitable. In this setting, each sampled observation is generally unique, which leads to a possibly infinite number of observations that need to be represented in the search tree. As a consequence, we cannot expand the search beyond the first step, resulting in policies that are too myopic. Thus, to handle continuous observation spaces, we adopt a strategy similar to the one used by POMCPOW (Sunberg and Kochenderfer, 2018) and VOMCPOW (Lim et al., 2021). This strategy consists of two components. The first component is to apply Progressive Widening (Couëtoux et al., 2011) to limit the number of sampled observation edges per action edge as a function of $N(b, a)$, i.e., the number of times we selected $a$ from $b$. In particular, let $C(b, a)$ be the set of observation children of action $a$ at belief $b$. We sample a new observation as a child whenever $|C(b, a)| \leq k_o N(b, a)^{\alpha_o}$, where $k_o$ and $\alpha_o$ are user-defined parameters that control the rate at which new observations are added to the tree. If this condition is not satisfied, we uniformly sample an observation from $C(b, a)$. This enables ADVT to visit sampled observation edges multiple times and expand the search tree beyond one step. The second component is to use an explicit representation of each sampled belief in the search tree via a set of weighted particles, which allows us to obtain state samples that are correctly distributed according to the beliefs. When sampling an episode, suppose ADVT selects action $a$ from belief $b$, samples a next state $s'$ from the generative model and selects an observation $o$ using the strategy above. The weight $w'$ corresponding to $s'$ is then computed according to the observation likelihood $w' = Z(s', a, o)$, and $(s', w')$ is added to the particle set representing the belief $b'$ whose parent is $b$ via the edge $(a, o)$. To obtain a next state that is distributed according to $b'$, we resample a state from this particle set with probability proportional to the particle weights and continue the episode from the resampled state.
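A minimal sketch of this strategy is given below, assuming simple container types (`WeightedParticle`, `ObservationEdge`, `ActionEdge`) and a problem-specific observation likelihood that plays the role of $Z(s', a, o)$; none of these names are from the paper.

```cpp
// Observation progressive widening with a weighted-particle belief representation.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

struct WeightedParticle { std::vector<double> state; double weight; };

struct ObservationEdge {
    std::vector<double> observation;
    std::vector<WeightedParticle> childBelief;  // particles of the child belief b'
};

struct ActionEdge {
    int numSelections = 0;                      // N(b, a)
    std::vector<ObservationEdge> children;      // C(b, a)
};

// Add a new observation edge while |C(b,a)| <= k_o * N(b,a)^alpha_o,
// otherwise revisit one of the existing edges uniformly at random.
ObservationEdge& selectObservationEdge(ActionEdge& edge,
                                       const std::vector<double>& sampledObs,
                                       double k_o, double alpha_o, std::mt19937& rng) {
    const double limit = k_o * std::pow(static_cast<double>(edge.numSelections), alpha_o);
    if (static_cast<double>(edge.children.size()) <= limit) {
        edge.children.push_back({sampledObs, {}});
        return edge.children.back();
    }
    std::uniform_int_distribution<std::size_t> pick(0, edge.children.size() - 1);
    return edge.children[pick(rng)];
}

// Weight the new particle by the observation likelihood and resample a state
// that is (approximately) distributed according to the child belief b'.
std::vector<double> addParticleAndResample(ObservationEdge& obsEdge,
                                           const std::vector<double>& nextState,
                                           double observationLikelihood,
                                           std::mt19937& rng) {
    obsEdge.childBelief.push_back({nextState, observationLikelihood});
    std::vector<double> weights;
    for (const auto& p : obsEdge.childBelief) weights.push_back(p.weight);
    std::discrete_distribution<std::size_t> resample(weights.begin(), weights.end());
    return obsEdge.childBelief[resample(rng)].state;
}
```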
Note that in contrast to online POMDP solvers for discrete observation spaces that only require a black-box model to sample observations, we additionally require access to the observation function $Z$. However, this is a common assumption for solvers that are designed for continuous observation spaces (Sunberg and Kochenderfer, 2018; Hoerger and Kurniawati, 2021).
Additionally, note that it is possible to apply the observation sampling strategy for continuous observation spaces to discrete observation spaces as well. However, for discrete observation spaces, this introduces unnecessary computational overhead. As discussed above, the purpose of explicit belief representations is to obtain state samples that are correctly distributed according to the beliefs. For discrete observation spaces, this is already achieved by sampling a next state $s'$ according to the transition model $T(s, a, \cdot)$ and an observation $o$ according to the observation model $Z(s', a, \cdot)$, and then using $s'$ as a sample from the belief $b' = \tau(b, a, o)$. Since the sampled observations are always distributed according to $p(o \mid b, a)$, $s'$ is a sample of $b'$. This is in contrast to continuous observation spaces, where ADVT samples observations from a distribution that is potentially different from $p(o \mid b, a)$ (line 7 in Algorithm 4). Therefore, ADVT handles discrete and continuous observation spaces separately and avoids the weighting and resampling step in the discrete case, thus saving computation time.
5 ADVT: Construction and Refinement of Voronoi Trees
For each belief node $b$ in the belief tree, its Voronoi tree $\mathcal{H}(b)$ is a BSP tree for the action space $\mathcal{A}$. Each node in $\mathcal{H}(b)$ consists of a pair $(P, a)$, where $P \subseteq \mathcal{A}$ is a cell and $a \in P$ is the representative action of $P$, and the cell of each non-leaf node is partitioned into two child cells. The partition of each cell in a Voronoi tree is the Voronoi diagram of two actions sampled from the cell.

Figure 1: Illustration of the relationship between a belief, its Voronoi tree, and the induced partition of the action space.

To construct $\mathcal{H}(b)$, ADVT first samples an action $a_0$ uniformly at random from the action space $\mathcal{A}$ and sets the pair $(\mathcal{A}, a_0)$ as the root of $\mathcal{H}(b)$. When ADVT decides to expand a leaf node $(P, a)$, it first samples an action $a'$ uniformly at random from $P$. ADVT then implicitly constructs the Voronoi diagram between $a$ and $a'$ within the cell $P$, splitting it into two regions: One is $P_a$, the set of all actions in $P$ that are at least as close to $a$ as to $a'$ with respect to the distance metric $d$, and the other is $P_{a'} = P \setminus P_a$. The nodes $(P_a, a)$ and $(P_{a'}, a')$ are then inserted as children of $(P, a)$ in $\mathcal{H}(b)$. The leaf nodes of $\mathcal{H}(b)$ form the partition of the action space used by belief $b$, while the finite action subset $\mathcal{A}(b)$ used to find the best action from $b$ is the set of representative actions associated with the leaves of $\mathcal{H}(b)$. Figure 1 illustrates the relationship between a belief, the Voronoi tree and the partition of $\mathcal{A}$.
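The sketch below illustrates how such a tree can be maintained implicitly: each node only stores its representative action, and cell membership is resolved by distance comparisons along the path to the root (this test is used again in Section 5.2). The data structures and names are illustrative assumptions, not the authors' implementation, and Euclidean distance is assumed for $d$.

```cpp
// Implicit Voronoi tree: cells are never represented geometrically; only the
// representative actions are stored, and membership follows from distances.
#include <cmath>
#include <cstddef>
#include <memory>
#include <vector>

using ActionVec = std::vector<double>;

double euclidean(const ActionVec& a, const ActionVec& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) sum += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(sum);
}

struct VoronoiNode {
    ActionVec representative;                 // action associated with this cell
    VoronoiNode* parent = nullptr;
    std::unique_ptr<VoronoiNode> left, right; // children created by a split
    double diameterEstimate = 0.0;            // maintained as in Section 5.2
};

// Split a leaf: a new action sampled from the cell (Section 5.3) induces the
// two implicit Voronoi children of the cell.
void splitLeaf(VoronoiNode& leaf, const ActionVec& newAction) {
    leaf.left = std::make_unique<VoronoiNode>();
    leaf.left->representative = leaf.representative;  // child cell closer to the old action
    leaf.left->parent = &leaf;
    leaf.right = std::make_unique<VoronoiNode>();
    leaf.right->representative = newAction;           // child cell closer to the new action
    leaf.right->parent = &leaf;
}

// Membership test: a point lies in a node's cell iff, along the path to the
// root, it is always at least as close to that node's representative as to the
// representative of the sibling cell.
bool insideCell(const VoronoiNode& node, const ActionVec& point) {
    const VoronoiNode* current = &node;
    while (current->parent != nullptr) {
        const VoronoiNode* sibling =
            (current == current->parent->left.get()) ? current->parent->right.get()
                                                     : current->parent->left.get();
        if (euclidean(point, current->representative) >
            euclidean(point, sibling->representative)) {
            return false;
        }
        current = current->parent;
    }
    return true;
}
```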
Voronoi trees have a number of representational and computational advantages compared to existing partitioning methods, such as hierarchical rectangular partitions (Bubeck et al., 2011; Mansley et al., 2011) or Voronoi diagrams (Kim et al., 2020; Lim et al., 2021). In contrast to hierarchical rectangular partitions, Voronoi trees are much more adaptive to the spatial locations of sampled actions, since the geometries of the cells are induced by the sampled actions. Furthermore, in contrast to Voronoi diagrams, the hierarchical structure of Voronoi trees allows us to derive efficient algorithms for estimating the diameters of the cells (discussed in Section 5.2) and sampling new actions from a cell (detailed in Section 5.3). For Voronoi diagrams, the diameter computation can be prohibitively expensive in the context of online POMDP planning, since each sampled action results in a re-partitioning of the action space; thus, the diameters of the cells have to be re-computed from scratch. Voronoi trees combine a hierarchical representation of the partition with an implicit construction of the cells via sampled actions, thereby achieving both adaptivity and computational efficiency.
The next section describes how ADVT decides which node of $\mathcal{H}(b)$ to expand.
5.1 Refining the Partition
ADVT decides how to refine the partitioning in two steps. First, it selects a leaf node of $\mathcal{H}(b)$ to be refined next. This step relies on the action selection strategy used for expanding the belief tree (Section 4.1): the selected leaf node of $\mathcal{H}(b)$ is the unique leaf node $(P, a^*)$ whose representative action $a^*$ is chosen according to eq. 6.
In the second step, ADVT decides if the cell $P$ should indeed be refined. This decision is based on the quality of the estimate $\widehat{Q}(b, a^*)$, as reflected by the number of samples used to compute the estimate, and the variation of the $Q$-values for the actions contained in $P$, as reflected by the diameter of $P$. Specifically, ADVT refines the cell only when the following criterion is satisfied:
$$N(b, a^*) \geq \frac{1}{C_r \left(\operatorname{diam}(P)\right)^{2}} \qquad (10)$$
where $C_r$ is an exploration constant. $N(b, a^*)$, i.e., the number of times that $a^*$ has been selected at $b$, provides a rough estimate of the quality of the estimate $\widehat{Q}(b, a^*)$, whereas $\operatorname{diam}(P)$ serves as a measure of the variation of the $Q$-value function within $P$ due to the Lipschitz assumption. This criterion, which is inspired by (Touati et al., 2020), limits the growth of the finite set of candidate actions and ensures that a cell is only refined when its corresponding action has been played sufficiently often. Larger values of $C_r$ cause cells to be refined earlier, thereby encouraging exploration.
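A sketch of this refinement test is shown below; the arrangement of the exploration constant follows the form of eq. 10 given above, and the parameter names are illustrative.

```cpp
// Diameter-aware cell refinement test (eq. 10): large cells are split after few
// selections, small cells only after many selections.
bool shouldRefineCell(int numSelections, double cellDiameter, double explorationConst) {
    if (cellDiameter <= 0.0) return false;  // degenerate cell: nothing left to split
    const double required = 1.0 / (explorationConst * cellDiameter * cellDiameter);
    return static_cast<double>(numSelections) >= required;
}
```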
Our refinement strategy is highly adaptive, in the sense that we use local information (i.e., the diameters of the cells, induced by the representative actions, and the quality of the $Q$-value estimates of the representative actions) to influence both the choice of the cell to be partitioned and when the chosen cell is partitioned, and the geometries of our cells depend on the sampled actions. This strategy is in contrast to other hierarchical decompositions, such as those used in HOO and HOOT, where the cell that corresponds to an action is refined immediately after the action is selected for the first time, which generally means the $Q$-value of an action is estimated based only on a single play of the action, which is grossly insufficient for our problem. In addition, our strategy is more adaptive than VOMCPOW. For VOMCPOW, the decision on when to refine the partition is solely based on the number of times a belief has been visited and takes neither the quality of the $Q$-value estimates nor the diameters of the Voronoi cells into account. Furthermore, VOMCPOW's refinement strategy is more global in the sense that each sampled action results in a different Voronoi diagram of the action space. On the other hand, ADVT's strategy is much more local, since each refinement only affects a single cell.
5.2 Estimating the Voronoi Cell Diameters
ADVT uses the diameters of the cells in the action selection strategy and the cell refinement rule. However, computing the diameters of the cells exactly is computationally challenging in high-dimensional spaces. We therefore give an efficient approximation algorithm for computing the Voronoi cell diameters below.
Since the cells in $\mathcal{H}(b)$ are only implicitly defined, we use a sampling-based approach to approximate a cell's diameter. Suppose we want to estimate the diameter of the cell $P$ corresponding to the node $(P, a)$ of the Voronoi tree $\mathcal{H}(b)$. Then, we first sample a set $B$ of $k$ boundary points of $P$, where $k$ is a user-defined parameter whose value we set per problem in our experiments. To sample a boundary point, we first sample, uniformly at random, a point $\tilde{a}$ that lies on a sphere centered at $a$ whose diameter is large enough for the sphere to enclose $P$ (a suitable diameter can be easily computed for our benchmark problems). The point $\tilde{a}$ lies either on the boundary or outside of $P$. We then use the Bisection method (Burden et al., 2016) with $a$ and $\tilde{a}$ as the initial end-points, until the two end-points are less than a small threshold away from each other, but one still lies inside and the other outside $P$. The end-point that lies inside $P$ is then taken as a boundary point and added to $B$. The diameter of a bounding sphere that encloses all the sampled boundary points in $B$ (Welzl, 1991) is then an approximation of the diameter of $P$.
The above strategy for estimating the diameter of a cell requires us to determine whether a sampled point lies inside or outside a cell $P$. Fortunately, a Voronoi tree allows us to determine this easily. Observe that for each cell $P$, we have $P \subseteq P_{\mathrm{par}}$, where $P_{\mathrm{par}}$ is the cell associated with the parent node of $(P, a)$ in the Voronoi tree. As a result, a point that is not contained in $P_{\mathrm{par}}$ cannot be contained in $P$ either. Therefore, to check whether a point $\tilde{a}$ lies in $P$, it is sufficient to check if $\tilde{a}$ is contained in all cells associated with the path in the Voronoi tree from the root to $(P, a)$. Suppose $(P_0, a_0), (P_1, a_1), \ldots, (P_m, a_m)$ with $P_m = P$ is the sequence of nodes corresponding to this path. The point $\tilde{a}$ is inside $P$ if, for each $i = 1, \ldots, m$, we have $d(\tilde{a}, a_i) \leq d(\tilde{a}, a_i^{sib})$, where $a_i^{sib}$ is the inducing point of the cell corresponding to the sibling node of $(P_i, a_i)$, and $d$ is the distance function on the action space. If this condition is not satisfied, $\tilde{a}$ is outside $P$, i.e., $\tilde{a} \notin P$.
To further increase the computational efficiency of our diameter estimator, we re-use the sampled boundary points when ADVT decides to split a cell $P$ into two child cells $P_1$ and $P_2$. Suppose the diameter of $P$ was estimated using a set $B$ of $k$ boundary points. Since every point in $B$ lies either on the boundary of $P_1$ or of $P_2$, we divide $B$ into $B_1$ and $B_2$ according to which of the inducing points of $P_1$ and $P_2$ each boundary point is closer to. This leaves us with $k_1 = |B_1|$ and $k_2 = |B_2|$ boundary points for cells $P_1$ and $P_2$ respectively. For both $P_1$ and $P_2$ we then sample $k - k_1$ and $k - k_2$ additional boundary points using the method above, such that each child cell is again covered by $k$ boundary points. Using this method, we only need to sample $2k - k_1 - k_2$ new boundary points instead of $2k$ when ADVT decides to split the cell $P$, leading to increased computational efficiency.
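The sketch below illustrates the diameter estimation under a few simplifying assumptions: the cell-membership test is passed in as a predicate (e.g., the path-based test sketched in Section 5), the enclosing radius is assumed to be provided (e.g., the diameter of the action space), and the minimum enclosing sphere of (Welzl, 1991) is replaced by a simple maximum pairwise distance for brevity. Names and structure are illustrative, not the authors' code.

```cpp
// Sampling-based estimation of a Voronoi cell's diameter.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

using ActionVec = std::vector<double>;
using InsideCellFn = std::function<bool(const ActionVec&)>;  // cell-membership predicate

double dist(const ActionVec& a, const ActionVec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Bisect between an inside point and an outside point until they are within `tol`;
// the inside end-point is returned as an (approximate) boundary point of the cell.
ActionVec boundaryPoint(ActionVec inside, ActionVec outside,
                        const InsideCellFn& insideCell, double tol = 1e-3) {
    while (dist(inside, outside) > tol) {
        ActionVec mid(inside.size());
        for (std::size_t i = 0; i < mid.size(); ++i) mid[i] = 0.5 * (inside[i] + outside[i]);
        if (insideCell(mid)) inside = mid; else outside = mid;
    }
    return inside;
}

// Estimate the cell diameter from k boundary points sampled around the
// representative action. `enclosingRadius` must be large enough (e.g., the
// diameter of the action space) that the sampled sphere points lie outside the cell.
double estimateDiameter(const ActionVec& representative, double enclosingRadius,
                        const InsideCellFn& insideCell, int k, std::mt19937& rng) {
    std::normal_distribution<double> gauss(0.0, 1.0);
    std::vector<ActionVec> boundary;
    for (int i = 0; i < k; ++i) {
        ActionVec dir(representative.size());
        double norm = 0.0;
        for (double& x : dir) { x = gauss(rng); norm += x * x; }
        norm = std::sqrt(norm);
        ActionVec outside(representative.size());
        for (std::size_t j = 0; j < dir.size(); ++j)
            outside[j] = representative[j] + (enclosingRadius / norm) * dir[j];
        boundary.push_back(boundaryPoint(representative, outside, insideCell));
    }
    double diameter = 0.0;  // max pairwise distance as a stand-in for the enclosing sphere
    for (std::size_t i = 0; i < boundary.size(); ++i)
        for (std::size_t j = i + 1; j < boundary.size(); ++j)
            diameter = std::max(diameter, dist(boundary[i], boundary[j]));
    return diameter;
}
```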
5.3 Sampling from the Voronoi Cells
To sample an action that is approximately uniformly distributed in a cell $P$, we use a simple Hit & Run approach (Smith, 1984) that performs a random walk within $P$. Suppose $(P, a)$ is the corresponding node of the Voronoi tree $\mathcal{H}(b)$. We first sample a point on the boundary of $P$ using the method described in Section 5.2. Subsequently, we take a random step from $a$ in the direction towards this boundary point, resulting in a new point inside $P$. We then use this new point as the starting point and iteratively perform this process for a fixed number of steps, which gives us a point that is approximately uniformly distributed in $P$.
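A sketch of this sampler is given below; it assumes that the boundary-point routine of Section 5.2 is available as a function object and that cells are convex (which holds for Voronoi cells under the Euclidean metric), so stepping towards a boundary point never leaves the cell. The names and the fixed step count are illustrative assumptions.

```cpp
// Hit & Run style random walk inside a Voronoi cell.
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

using ActionVec = std::vector<double>;

ActionVec sampleFromCell(const ActionVec& representative,
                         const std::function<ActionVec(const ActionVec&)>& sampleBoundaryPoint,
                         int numSteps, std::mt19937& rng) {
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    ActionVec current = representative;  // the representative action lies inside the cell
    for (int step = 0; step < numSteps; ++step) {
        // Boundary point of the cell, sampled in a random direction from the current point.
        const ActionVec boundary = sampleBoundaryPoint(current);
        // Step a uniformly random fraction of the way towards the boundary point;
        // for a convex cell the resulting point stays inside the cell.
        const double t = unif(rng);
        for (std::size_t i = 0; i < current.size(); ++i)
            current[i] = current[i] + t * (boundary[i] - current[i]);
    }
    return current;
}
```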
6 Experiments and Results
We evaluated ADVT on 5 robotics tasks, formulated as continuous-action POMDPs. The following section provides details regarding the tasks, while Table 1 summarizes the state, action and observation spaces for each problem scenario.
6.1 Problem Scenarios
Figure 2: The problem scenarios: (a) Pushbox2D, (b) Parking, (c) SensorPlacement-8, (d) VDP-Tag, (e) LunarLander.
Table 1: Overview of the state, action and observation spaces of the Pushbox2D, Pushbox3D, Parking2D, Parking3D, SensorPlacement-D, VDP-Tag and LunarLander problem scenarios.
6.1.1 Pushbox
Pushbox is a scalable motion planning problem proposed in (Seiler et al., 2015) which is motivated by air hockey. A disk-shaped robot (blue disk in Figure 2(a)) has to push a disk-shaped puck (red disk in Figure 2(a)) into a goal region (green circle in Figure 2(a)) by bumping into it, while avoiding any collision of itself and the puck with a boundary region (black region in Figure 2(a)). The robot receives a reward when the puck is pushed into the goal region, while it receives a penalty if the robot or the puck collides with the boundary region. Additionally, the robot receives a penalty for every step. The robot can move freely in the environment by choosing a displacement vector. Upon bumping into the puck, the puck is pushed away and the motion of the puck is affected by noise. We consider two variants of the problem, Pushbox2D and Pushbox3D, that differ in the dimensionality of the state and action spaces. For the Pushbox2D problem (illustrated in Figure 2(a)), the robot and the puck operate on a 2D-plane, whereas for Pushbox3D, both the robot and the puck operate inside a 3D-environment. Let us first describe the Pushbox2D variant. The state space consists of the $xy$-locations of both the robot and the puck, while an action is a 2D displacement vector of the robot. If the robot is not in contact with the puck during a move, the state evolves deterministically: the robot's position is displaced by the chosen displacement vector, while the puck remains at its current position. If, however, the robot bumps into the puck, the next position of the puck is computed as
(11)
where the operator in eq. 11 denotes the dot product, the unit directional vector points from the center of the robot to the center of the puck at the time of contact, and the noise terms are random variables drawn from truncated Gaussian distributions, i.e., Gaussian distributions truncated to a bounded interval; the parameters of these distributions are fixed in our experiments.
The initial position of the robot is known. The initial puck position, however, is uncertain: its initial $x$ and $y$ coordinates are drawn from a truncated Gaussian distribution, but the robot has access to a noisy bearing sensor to localize the puck and a noise-free collision sensor which detects contacts between the robot and the puck. In particular, given a state, an observation consists of a binary component which indicates whether or not a contact between the robot and the puck occurred, and a discretized bearing component calculated as
$$\hat{\theta} = \left\lfloor \frac{\operatorname{atan2}(\hat{y}_p - \hat{y}_r,\ \hat{x}_p - \hat{x}_r) + \epsilon}{\pi/6} \right\rfloor \qquad (12)$$
where $(\hat{x}_r, \hat{y}_r)$ and $(\hat{x}_p, \hat{y}_p)$ are the $xy$-coordinates of the robot and the puck corresponding to the state, and $\epsilon$ is a random angle (expressed in radians) drawn from a truncated Gaussian distribution. Due to the floor operator in eq. 12, the number of discretized bearing observations is 12; thus the observation space consists of 24 unique observations.
Pushbox3D is a straightforward extension of the Pushbox2D problem: The state space consists of the $xyz$-locations of the robot and the puck, and each action is a 3D-displacement vector of the robot. The transition dynamics are defined similarly to Pushbox2D, except that all quantities are computed in 3D. The observation space is extended with an additional bearing observation, computed analogously to eq. 12, but with the bearing derived from the $z$-coordinates of the robot and the puck. The reward function is the same as in the Pushbox2D problem.
For both variants of the problem, a run is considered successful if the robot manages to push the puck into the goal region within 50 steps, while avoiding collisions of itself and the puck with the boundary region.
6.1.2 Parking
An autonomous vehicle with deterministic dynamics operates in a 3D-environment populated by obstacles, shown in Figure 2(b). The goal of the vehicle is to safely navigate to a goal area located between the obstacles while avoiding any collision with the obstacles (black regions in Figure 2(b)). The vehicle receives a reward when reaching the goal area, while collisions with the obstacles are penalized. Additionally, the vehicle receives a penalty for every step. We consider two variants of the problem, Parking2D and Parking3D. For Parking2D, the vehicle navigates on a 2D-plane, whereas for Parking3D, the vehicle operates in 3D-space. We first describe the Parking2D variant: The state space consists of the position of the vehicle on the plane, its orientation and its velocity. The vehicle is controlled via a steering wheel angle and an acceleration, i.e., the action space is the product of the continuous set of steering wheel angles and the continuous set of accelerations. We assume that for a given state and action, the state of the vehicle evolves according to the following deterministic second-order discrete-time dynamics model:
(13)
where the state components are the 2D-position, orientation and velocity of the vehicle, $\Delta t$ is a fixed control duration, and the action components are the steering wheel angle and the acceleration. To compute the next state, given the current state and action, we numerically integrate eq. 13 for 3 steps.
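As an illustration of this fixed-step integration, the sketch below advances a generic second-order car-like model with an explicit Euler scheme; the dynamics shown are assumed for illustration only, since the exact form of eq. 13 is not reproduced here.

```cpp
// Fixed-step numerical integration of an assumed car-like dynamics model.
#include <cmath>

struct CarState { double x, y, theta, v; };   // 2D position, orientation, velocity
struct CarAction { double steer, accel; };    // steering wheel angle, acceleration

// One explicit-Euler step of the assumed dynamics.
CarState eulerStep(const CarState& s, const CarAction& a, double dt) {
    CarState next = s;
    next.x     += dt * s.v * std::cos(s.theta);
    next.y     += dt * s.v * std::sin(s.theta);
    next.theta += dt * s.v * a.steer;          // assumed steering model
    next.v     += dt * a.accel;
    return next;
}

// The next POMDP state is obtained by integrating the dynamics for a fixed
// number of sub-steps (3 in the Parking problem) over the control duration.
CarState propagate(CarState s, const CarAction& a, double controlDuration, int numSubSteps) {
    const double dt = controlDuration / numSubSteps;
    for (int i = 0; i < numSubSteps; ++i) s = eulerStep(s, a, dt);
    return s;
}
```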
There are three distinct areas in the environment, each consisting of a different type of terrain (colored areas in Figure 2(b)). Upon traversal, the vehicle receives an observation regarding the terrain type, which is only correct 70% of the time due to sensor noise. If the vehicle is outside the terrains, we assume that it deterministically receives a NULL observation. Initially, the vehicle starts near one of three possible starting locations (red areas in Figure 2(b)) with equal probability. The exact initial position of the vehicle along the horizontal axis is then drawn uniformly from an interval around the starting location. For Parking3D, the vehicle operates in the full 3D space, and we have additional continuous state and action components that model the vehicle's elevation and change in elevation respectively. We assume that the elevation of the vehicle changes linearly with the elevation-change component of the action. For both variants, a run is considered successful if the vehicle enters the goal area within 50 steps while avoiding collisions with the obstacles.
Two properties make this problem challenging. First are the multi-modal beliefs, which require the vehicle to traverse the different terrains for a sufficient amount of time to localize itself before attempting to reach the goal. Second, due to the narrow passage that leads to the goal, small perturbations from the optimal action can quickly result in collisions with the obstacles. As a consequence, good rewards are sparse and a POMDP solver must discover them quickly in order to compute a near-optimal strategy.
6.1.3 SensorPlacement
We propose a scalable motion planning under uncertainty problem in which a manipulator with $D$ revolute joints operates in muddy water inside a 3D environment. The robot is located in front of a marine structure, represented as four distinct walls, and its task is to mount a sensor at a particular goal area between the walls (rewarded with 1,000) while having imperfect information regarding its exact joint configuration. To localize itself, the robot's end-effector is equipped with a touch sensor. Upon touching a wall, it provides noise-free information regarding which wall is being touched, while we assume that the sensor deterministically outputs a NULL observation in a contact-free state. However, in order to avoid damage, the robot must avoid collisions (which are penalized) between any of its other links and the walls. The state space of the robot consists of the joint angles of each joint, while an action is a vector of joint velocities. Due to underwater currents, the robot is subject to random control errors, and the joint angles corresponding to a state evolve according to
(14)
where the control error is a random vector sampled from a zero-mean multivariate Gaussian distribution with isotropic covariance. Initially, the robot is uncertain regarding its exact joint-angle configuration. We assume that the initial joint angles are distributed uniformly around a nominal configuration in which all joint angles are zero, except for the second and third joints, which have nonzero nominal angles (units are in radians). We consider four variants of the problem, denoted as SensorPlacement-$D$, with $D \in \{6, 8, 10, 12\}$, that differ in the degrees-of-freedom (number of revolute joints and thus the dimensionality of the action space) of the manipulator. Figure 2(c) illustrates the SensorPlacement-8 problem, where the colored areas represent the walls and the green sphere represents the goal area. A run is considered successful if the manipulator mounts the sensor within 50 steps while avoiding collisions with the walls. To successfully mount the sensor at the target location, the robot must use its touch sensor to carefully reduce the uncertainty regarding its configuration. This is challenging, since a slight variation in the performed actions can quickly lead to collisions with the walls.
6.1.4 Van Der Pol Tag
Van Der Pol Tag (VDP-Tag) is a benchmark problem introduced in (Sunberg and Kochenderfer, 2018) in which an agent (blue particle in Figure 2(d)) operates in a 2D-environment. The goal is to tag a moving target (red particle in Figure 2(d)) whose motion is described by the Van Der Pol differential equation and disturbed by Gaussian noise. Initially, the position of the target is unknown. The agent travels at a constant speed, but at each step it can pick its direction of travel and decide whether to activate a costly range sensor; an action therefore consists of a direction of travel and the activation/deactivation of the range sensor. The robot receives observations from its range sensor via 8 beams that measure the agent's distance to the target if the target is within the beam's range. These measurements are more accurate when the range sensor is active. While the target moves freely in the environment, the agent's movements are restricted by four obstacles in the environment, shown in Figure 2(d). Catching the target is rewarded, while activating the range sensor is penalized. Additionally, each step incurs a penalty. A run is considered successful if the agent catches the target within the step limit. More details of the VDP-Tag problem can be found in (Sunberg and Kochenderfer, 2018).
Note that in this problem, the action space consists of two disconnected components, one for each value of the sensor-activation component. To allow a Voronoi tree in ADVT to cover the entire action space, we ensure that once the root node of the Voronoi tree is split, the range-sensor components of the representative actions of the two resulting child cells are set to the two different activation values, such that each child cell covers one of the two disconnected components.
6.1.5 LunarLander
This problem is a partially observable adaptation of Atari's Lunar Lander game. For this problem, the state, action and observation spaces are all continuous. The objective is to control a lander vehicle to safely land at a target zone located on the moon's surface. The lander operates on a 2D vertical plane and its state is a 6D-vector consisting of the horizontal and vertical positions of the lander, its orientation, and its horizontal, vertical and angular velocities. The action space is two-dimensional, consisting of a linear acceleration along the lander's vertical axis and an angular acceleration about the lander's geometric center. The initial belief of the lander is a multivariate Gaussian distribution.
We assume that the state of the lander evolves according to the following second-order discrete-time stochastic model:
(15)
where the control inputs are the vertical and angular accelerations corresponding to the action, perturbed by random control errors drawn from zero-mean Gaussian distributions. The model additionally involves a constant step size, motor constants, and a constant describing the gravitational acceleration along the vertical axis. To obtain the next state, given the current state and an action, we numerically integrate the system in eq. 15 for 5 steps.
The lander perceives information regarding its state via three noisy sensors: The first two sensors measure the lander’s horizontal and angular velocities, whereas the third sensor measures the distance to the ground along the lander’s vertical axis (dashed line in Figure 2(e)). The readings of all three sensors are disturbed by standard-Gaussian noise.
The reward function is defined by
(16)
The first term in eq. 16 encourages the lander to avoid dangerous angles and crashing into the ground. The second term encourages the lander to safely land at the target zone with an upright angle and a small vertical velocity. The third term encourages the lander to land as quickly as possible. A run is considered successful if the lander reaches the target altitude within the step limit without crashing into the ground.
6.2 Experimental Setup
The purpose of our experiments is three-fold: First is to evaluate ADVT and compare it with two state-of-the-art online POMDP solvers for continuous action spaces, POMCPOW (Sunberg and Kochenderfer, 2018) and VOMCPOW (Lim et al., 2021). The results of those experiments are discussed in Section 6.3.1.
Second is to investigate the importance of the different components of ADVT, specifically the Voronoi tree-based partitioning, the cell-diameter-aware exploration term in eq. 7, and the stochastic Bellman backups. For this purpose, we implemented the original ADVT and three modifications. First is ADVT-R, which replaces the Voronoi decomposition of ADVT with a simple rectangular-based method: Each cell in the partition is a hyper-rectangle that is subdivided by cutting it in the middle along the longest side (with ties broken arbitrarily). The second variant is ADVT (L=0), which is ADVT where eq. 7 reduces to the standard UCB1 bound. For the third variant, ADVT-MC, we replace the stochastic Bellman backup in eq. 8 with the Monte Carlo backup in eq. 9, as used by POMCPOW and VOMCPOW. The results for this study are discussed in Section 6.3.2.
Third is to test the effects of two algorithmic components of ADVT when applied to the baselines VOMCPOW and POMCPOW: In the original implementation of VOMCPOW and POMCPOW provided by the authors, the policy is recomputed after every planning step using a new search tree. In contrast, for discrete observation spaces, ADVT applies ABT’s (Kurniawati and Yadav, 2013) strategy that re-uses the partial search tree (starting from the updated belief) constructed in the previous planning steps and improves the policy within the partial search tree. Therefore, for problems with discrete observation spaces, we also tested modified versions of VOMCPOW and POMCPOW, where we follow ADVT’s strategy of re-using the partial search trees. Note that for the VDP-Tag and LunarLander problems, we did not test the variants of VOMCPOW and POMCPOW that re-use partial search trees, since each observation that the agent perceives from the environment leads to a new search tree due to the continuous observation spaces. Moreover, to test the effects of ADVT’s stochastic Bellman backup strategy further, we implemented variants of VOMCPOW and POMCPOW to use stochastic Bellman backups instead of Monte Carlo backups. The results are discussed in Section 6.3.3.
To approximately determine the best parameters for each solver and problem, we ran a set of systematic preliminary trials by performing a grid-search over the parameter space. For each solver and problem, we used the best parameters and ran 1,000 simulation runs, with a fixed planning time of 1s CPU time for each solver and scenario. Each tested solver and the scenarios were implemented in C++ using the OPPT-framework (Hoerger et al., 2018). All simulations were run single-threaded on an Intel Xeon Platinum 8274 CPU with 3.2GHz and 4GB of memory.
6.3 Results
Table 2: Average total discounted rewards of ADVT, VOMCPOW and POMCPOW on the Pushbox2D, Pushbox3D, Parking2D, Parking3D, VDP-Tag and LunarLander problems.

Figure 3: Average total discounted rewards of all tested solvers and variants on the SensorPlacement problems.
Table 3: Average total discounted rewards of the ADVT variants (ADVT-R, ADVT (L=0), ADVT-MC) and of the VOMCPOW and POMCPOW variants with re-used search trees (+our RT) and stochastic Bellman backups (+our BB) on the Pushbox2D, Pushbox3D, Parking2D, Parking3D, VDP-Tag and LunarLander problems.
Table 4: Average total discounted rewards of all tested solvers and variants on the SensorPlacement-6, SensorPlacement-8, SensorPlacement-10 and SensorPlacement-12 problems.
Table 5: Success rates of all tested solvers and variants on the Pushbox2D, Pushbox3D, Parking2D, Parking3D, VDP-Tag and LunarLander problems.
Table 6: Success rates of all tested solvers and variants on the SensorPlacement problems.
6.3.1 Comparison with State-of-the-Art Methods
Table 2 shows the average total discounted rewards of all tested solvers on the Pushbox, Parking, VDP-Tag and LunarLander problems, while Figure 3 shows the results for the SensorPlacement problems. Detailed results for the SensorPlacement problems are presented in Table 4 while results on the success rates of the tested solvers are shown in Table 5 and Table 6.
ADVT generally outperforms all other methods, except for VDP-Tag, where VOMCPOW performs better. Interestingly, as we will see in Section 6.3.2, the variant of ADVT that uses Monte-Carlo backups instead of stochastic Bellman backups (ADVT-MC) outperforms all other methods in this problem, which supports ADVT’s effectiveness in handling continuous actions. The results for the SensorPlacement problems indicate that ADVT scales well to higher-dimensional action spaces. Additionally, the results on the VDP-Tag and LunarLander problems indicate that ADVT is capable of handling continuous observation spaces well.
ADVT performs well in terms of the success rate, too (Table 5 and Table 6). ADVT maintains a consistently high success rate in the Pushbox, Parking, VDP-Tag and LunarLander problems, whereas VOMCPOW's and POMCPOW's success rates drop noticeably in the Parking3D problem. Similarly, in the SensorPlacement problems, ADVT achieves a higher success rate compared to VOMCPOW and POMCPOW. While in the SensorPlacement-6 problem the gap between ADVT and VOMCPOW is relatively small, the gap widens as the dimensionality of the action space increases, with the success rates of both VOMCPOW and POMCPOW dropping substantially in the SensorPlacement-12 problem.
6.3.2 Understanding the Effects of Different Components of ADVT.
In this section we present results demonstrating the importance of three key algorithmic components of ADVT, namely the Voronoi-based partitions, the cell-size-aware optimistic upper-confidence bound and the Stochastic Bellman backups.
Effects of Voronoi-based partitioning for ADVT.
To understand the benefit of our Voronoi-based partitioning method, we compare the results of ADVT with those of ADVT-R. Table 3 shows that ADVT-R slightly outperforms ADVT in the Pushbox2D, Parking2D, VDP-Tag and LunarLander problems, indicating that a rectangular-based partitioning works well for low-dimensional action spaces. However, Table 4 shows that ADVT-R is uncompetitive in the SensorPlacement problems as the dimensionality of the action space increases. For rectangular-based partitionings, the diameters of the cells can shrink very slowly in higher-dimensional action spaces. Additionally, the cell refinement method is independent of the spatial locations of the sampled actions. Both properties result in loose optimistic upper-confidence bounds of the -values, leading to excessive exploration of sub-optimal areas of the action space. For Voronoi trees, the geometries (and therefore the diameters) of the cells are much more adaptive to the spatial locations of the sampled actions, leading to more accurate optimistic upper-confidence bounds of the associated -values which avoids over-exploration of areas in the action space that already contain sufficiently many sampled actions.
Effects of cell-size-aware optimistic upper-confidence bound.
To investigate the importance of the cell-diameter-aware term $L \operatorname{diam}(P_{(b,a)})$ in the optimistic upper-confidence bound in eq. 7, we compare ADVT and ADVT (L=0). The results in Table 3 and Table 4 indicate that the cell-diameter-aware component in the upper-confidence bound in eq. 7 is important for ADVT to perform well, particularly in the SensorPlacement problems. The reason is that in the early stages of planning, the partitions associated with the beliefs are still coarse, i.e., only a few candidate actions are considered per belief. If some of those candidate actions have small estimated $Q$-values, ADVT (L=0) may discard large portions of the action space for a very long time, even if they potentially contain useful actions. The cell-diameter-aware bias term in eq. 7 alleviates this issue by encouraging ADVT to explore cells with large diameters. This is particularly important for problems with large action spaces such as the SensorPlacement problems.
Effects of Stochastic Bellman backups.
To investigate the effects of this component, let us compare ADVT with ADVT-MC. Table 3 and Table 4 reveal that ADVT, which uses stochastic Bellman backups, often performs significantly better, particularly in the Parking2D and Parking3D problems. The reason is that in both problems good rewards are sparse, particularly for beliefs where the vehicle is located between the walls and slight deviations from the optimal actions can lead to collisions. The stochastic Bellman backups help to focus the search on more promising regions of the action space. On the other hand, in the VDP-Tag problem, ADVT-MC performs better than ADVT. In this problem, it is important to reduce the uncertainty with respect to the target location during the first few steps by activating the range sensor. However, due to the cost of activating the range sensor, this strategy often appears suboptimal during the early stages of planning, when only a few episodes have been sampled. At the same time, in this problem, the stochastic Bellman backups in ADVT tend to overestimate the Q-values of actions that do not activate the range sensor (this effect is known as maximization bias in Q-learning (Sutton and Barto, 2018)). As a result, ADVT tends to discard strategies that reduce the uncertainty with respect to the target location during the first few steps. ADVT-MC, which uses Monte-Carlo backups, does not suffer from maximization bias, which helps it to perform better than ADVT in this problem.
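To make the contrast concrete, the sketch below shows simplified versions of the two backup styles at a belief node. The ActionStats and BeliefNode structures, the incremental-average updates, and the default discount are illustrative assumptions rather than ADVT's actual data structures; the max over the child's (noisy) Q-estimates in the Bellman-style backup is what introduces the maximization bias discussed above.

```python
from dataclasses import dataclass, field

@dataclass
class ActionStats:
    q: float = 0.0   # running Q-value estimate
    count: int = 0   # number of backups applied

@dataclass
class BeliefNode:
    action_stats: dict = field(default_factory=dict)  # action -> ActionStats

def monte_carlo_backup(node, action, sampled_return):
    """Monte Carlo backup: incrementally average the full sampled episode return."""
    stats = node.action_stats.setdefault(action, ActionStats())
    stats.count += 1
    stats.q += (sampled_return - stats.q) / stats.count

def stochastic_bellman_backup(node, action, reward, child_node, discount=0.95):
    """Stochastic Bellman backup: bootstrap from the best Q-estimate at the child
    belief node reached via (action, observation).

    The max over the child's noisy Q-estimates focuses the search on promising
    regions, but can overestimate values (maximization bias).
    """
    stats = node.action_stats.setdefault(action, ActionStats())
    best_child_q = max((s.q for s in child_node.action_stats.values()), default=0.0)
    target = reward + discount * best_child_q
    stats.count += 1
    stats.q += (target - stats.q) / stats.count
```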
6.3.3 Ablation Study
Here we investigate how two components of ADVT, namely re-using partial search trees and using stochastic Bellman backups instead of Monte-Carlo backups, affect the baselines VOMCPOW and POMCPOW when we apply these ideas to them. The variants of VOMCPOW and POMCPOW that re-use partial search trees are denoted by VOMCPOW+RT and POMCPOW+RT, respectively. The variants of the baselines that use stochastic Bellman backups are denoted by VOMCPOW+BB and POMCPOW+BB. The variants VOMCPOW+RT+BB and POMCPOW+RT+BB both re-use partial search trees and perform stochastic Bellman backups.
Generally, the results in Table 3 and Table 4 indicate that the variants that re-use partial search trees perform much better than the original baselines VOMCPOW and POMCPOW, particularly in the Pushbox problems. These results (as well as those of ADVT and all its variants) indicate the benefit of re-using partial search trees generated in previous planning steps instead of re-computing the policy at every planning step. Similarly, the variants that use stochastic Bellman backups tend to outperform those that use Monte-Carlo backups, except in the VDP-Tag problem. This is consistent with our results for ADVT and ADVT-MC and indicates that stochastic Bellman backups are a simple yet effective tool for improving the performance of MCTS-based solvers.
7 Conclusion
We propose a new sampling-based online POMDP solver, called ADVT, that scales well to POMDPs with high-dimensional continuous action and observation spaces. Our solver builds on a number of works that use adaptive discretization of the action space, and introduces a more effective adaptive discretization method based on three ideas: a Voronoi-tree-based adaptive hierarchical discretization of the action space, a cell-size-aware refinement rule, and a cell-size-aware upper-confidence bound. For continuous observation spaces, our solver adopts an observation progressive-widening strategy together with an explicit belief representation, enabling ADVT to scale to higher-dimensional observation spaces. ADVT shows strong empirical results against state-of-the-art algorithms on several challenging benchmarks. We hope this work further expands the applicability of general-purpose POMDP solvers.
We thank Jerzy Filar for many helpful discussions. This work is partially supported by the Australian Research Council (ARC) Discovery Project 200101049.
References
- Agha-Mohammadi et al. (2011) Agha-Mohammadi AA, Chakravorty S and Amato NM (2011) FIRM: Feedback controller-based information-state roadmap, a framework for motion planning under uncertainty. In: IROS. IEEE, pp. 4284–4291.
- Auer et al. (2002) Auer P, Cesa-Bianchi N and Fischer P (2002) Finite-time analysis of the multiarmed bandit problem. Machine Learning 47(2-3): 235–256.
- Bai et al. (2014) Bai H, Hsu D and Lee WS (2014) Integrated perception and planning in the continuous space: A POMDP approach. IJRR 33(9): 1288–1302.
- Bubeck et al. (2011) Bubeck S, Munos R, Stoltz G and Szepesvári C (2011) X-armed bandits. Journal of Machine Learning Research 12(5).
- Burden et al. (2016) Burden RL, Faires JD and Burden AM (2016) Numerical analysis. Tenth edition. Boston, MA: Cengage Learning. ISBN 9781305253667.
- Couëtoux et al. (2011) Couëtoux A, Hoock JB, Sokolovska N, Teytaud O and Bonnard N (2011) Continuous upper confidence trees. In: LION. Springer, pp. 433–445.
- Fischer and Tas (2020) Fischer J and Tas ÖS (2020) Information particle filter tree: An online algorithm for POMDPs with belief-based rewards on continuous domains. In: ICML. PMLR, pp. 3177–3187.
- Hoerger and Kurniawati (2021) Hoerger M and Kurniawati H (2021) An on-line POMDP solver for continuous observation spaces. In: Proc. IEEE/RSJ Int. Conference on Robotics and Automation (ICRA). IEEE.
- Hoerger et al. (2020) Hoerger M, Kurniawati H, Bandyopadhyay T and Elfes A (2020) Linearization in motion planning under uncertainty. In: Algorithmic Foundations of Robotics XII. Springer, Cham, pp. 272–287.
- Hoerger et al. (2018) Hoerger M, Kurniawati H and Elfes A (2018) A software framework for planning under partial observability. In: IROS. IEEE, pp. 1–9. URL https://github.com/RDLLab/oppt.
- Hoerger et al. (2022) Hoerger M, Kurniawati H, Kroese D and Ye N (2022) Adaptive discretization using voronoi trees for continuous-action pomdps. In: Proc. Int. Workshop on the Algorithmic Foundations of Robotics.
- Kaelbling et al. (1998) Kaelbling LP, Littman ML and Cassandra AR (1998) Planning and acting in partially observable stochastic domains. Artificial intelligence 101(1-2): 99–134.
- Kim et al. (2020) Kim B, Lee K, Lim S, Kaelbling L and Lozano-Pérez T (2020) Monte Carlo tree search in continuous spaces using Voronoi optimistic optimization with regret bounds. AAAI 34(06): 9916–9924.
- Klimenko et al. (2014) Klimenko D, Song J and Kurniawati H (2014) Tapir: A software toolkit for approximating and adapting POMDP solutions online. In: ACRA. URL https://github.com/rdl-algorithm/tapir.
- Kurniawati (2022) Kurniawati H (2022) Partially observable Markov decision processes and robotics. Annual Review of Control, Robotics, and Autonomous Systems 5(1). To appear.
- Kurniawati et al. (2011) Kurniawati H, Du Y, Hsu D and Lee WS (2011) Motion planning under uncertainty for robotic tasks with long time horizons. IJRR 30(3): 308–323.
- Kurniawati et al. (2008) Kurniawati H, Hsu D and Lee WS (2008) SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In: RSS.
- Kurniawati and Yadav (2013) Kurniawati H and Yadav V (2013) An online POMDP solver for uncertainty planning in dynamic environment. In: Proc. Int. Symp. on Robotics Research.
- Lim et al. (2021) Lim MH, Tomlin CJ and Sunberg ZN (2021) Voronoi progressive widening: Efficient online solvers for continuous state, action, and observation POMDPs. In: 60th IEEE Conference on Decision and Control (CDC). pp. 4493–4500.
- Lindquist (1973) Lindquist A (1973) On feedback control of linear stochastic systems. SIAM Journal on Control 11(2): 323–343. 10.1137/0311025.
- Mansley et al. (2011) Mansley C, Weinstein A and Littman M (2011) Sample-based planning for continuous action Markov decision processes. In: ICAPS.
- Mern et al. (2021) Mern J, Yildiz A, Sunberg Z, Mukerji T and Kochenderfer MJ (2021) Bayesian optimized Monte Carlo planning. In: AAAI, volume 35. pp. 11880–11887.
- Papadimitriou and Tsitsiklis (1987) Papadimitriou CH and Tsitsiklis JN (1987) The complexity of Markov decision processes. Mathematics of operations research 12(3): 441–450.
- Pineau et al. (2003) Pineau J, Gordon G and Thrun S (2003) Point-based Value Iteration: An anytime algorithm for POMDPs. In: IJCAI.
- Seiler et al. (2015) Seiler KM, Kurniawati H and Singh SP (2015) An online and approximate solver for POMDPs with continuous action space. In: ICRA. IEEE, pp. 2290–2297.
- Silver and Veness (2010) Silver D and Veness J (2010) Monte Carlo planning in large POMDPs. In: Advances in Neural Information Processing Systems. pp. 2164–2172.
- Smith (1984) Smith RL (1984) Efficient Monte Carlo procedures for generating points uniformly distributed over bounded regions. Operations Research 32(6): 1296–1308.
- Smith and Simmons (2005) Smith T and Simmons R (2005) Point-based POMDP algorithms: Improved analysis and implementation. In: UAI.
- Sondik (1971) Sondik EJ (1971) The Optimal Control of Partially Observable Markov Decision Processes. PhD Thesis, Stanford, California.
- Sun et al. (2015) Sun W, Patil S and Alterovitz R (2015) High-frequency replanning under uncertainty using parallel sampling-based motion planning. IEEE TRO 31(1): 104–116.
- Sunberg and Kochenderfer (2018) Sunberg ZN and Kochenderfer MJ (2018) Online algorithms for POMDPs with continuous state, action, and observation spaces. In: ICAPS.
- Sutton and Barto (2018) Sutton RS and Barto AG (2018) Reinforcement learning: An introduction. MIT press.
- Touati et al. (2020) Touati A, Taiga AA and Bellemare MG (2020) Zooming for efficient model-free reinforcement learning in metric spaces. arXiv preprint 2003.04069 .
- Valko et al. (2013) Valko M, Carpentier A and Munos R (2013) Stochastic simultaneous optimistic optimization. In: ICML. PMLR, pp. 19–27.
- Van Den Berg et al. (2011) Van Den Berg J, Abbeel P and Goldberg K (2011) LQG-MP: Optimized path planning for robots with motion uncertainty and imperfect state information. The International Journal of Robotics Research 30(7): 895–913.
- Van Den Berg et al. (2012) Van Den Berg J, Patil S and Alterovitz R (2012) Motion planning under uncertainty using iterative local optimization in belief space. IJRR 31(11): 1263–1278.
- Wang et al. (2020) Wang T, Ye W, Geng D and Rudin C (2020) Towards practical Lipschitz bandits. In: FODS. pp. 129–138.
- Watkins and Dayan (1992) Watkins CJ and Dayan P (1992) Q-learning. Machine learning 8(3): 279–292.
- Welzl (1991) Welzl E (1991) Smallest enclosing disks (balls and ellipsoids). In: Maurer H (ed.) New Results and New Trends in Computer Science. Berlin: Springer. ISBN 978-3-540-46457-0, pp. 359–370.
- Ye et al. (2017) Ye N, Somani A, Hsu D and Lee WS (2017) DESPOT: Online POMDP planning with regularization. Journal of Artificial Intelligence Research 58: 231–266.