
¹ Dept. of Mathematics, Rutgers, NJ, USA
² Dept. of Computer Science, Rutgers, NJ, USA

Effective and Robust Non-Prehensile Manipulation via Persistent Homology Guided Monte-Carlo Tree Search

Ewerton R. Vieira¹, Kai Gao², Daniel Nakhimovich², Kostas E. Bekris², Jingjin Yu²

This work was supported by NSF CCF-2309866. EV is partially supported by the Air Force Office of Scientific Research under award numbers FA9550-23-1-0011 and FA9550-23-1-0400.
Abstract

Performing object retrieval in real-world workspaces must tackle challenges including uncertainty and clutter. One option is to apply prehensile operations, which can be time-consuming in highly cluttered scenarios. On the other hand, non-prehensile actions, such as simultaneously pushing multiple objects, can help to quickly clear a cluttered workspace and retrieve a target object. Such actions, however, can also increase uncertainty, as it is difficult to estimate the outcome of pushing operations. The framework proposed in this work integrates topological tools and Monte-Carlo Tree Search (MCTS) to achieve effective and robust pushing for object retrieval. It employs persistent homology to automatically identify manageable clusters of blocking objects without the need to manually adjust hyper-parameters. MCTS then uses this information to explore feasible actions that push groups of objects, aiming to minimize the number of operations needed to clear the path to the target. Real-world experiments using a Baxter robot, which involves some noise in actuation, show that the proposed framework achieves a higher success rate in solving retrieval tasks in dense clutter than alternatives. Moreover, it produces solutions with few pushing actions, improving the overall execution time. Most critically, it is robust enough that the sequence of actions can be planned offline and then executed reliably on a Baxter robot.

Introduction video: https://youtu.be/00kEztqytRU
Code: https://github.com/DanManN/planning_baxter/tree/aggregation

Keywords: Manipulation, Topology, Monte-Carlo Tree Search.

1 Introduction

Retrieving a target object from a messy and constrained space, such as taking a bottle of water out of a fridge, requires a robotic arm to relocate other objects blocking access. Humans routinely perform such tasks with a high degree of success, and the manipulation primitives they use are not limited to pick-and-place rearrangement. Instead, they often involve non-prehensile manipulation, such as pushing and pulling actions. Endowing robots with such skills is highly desirable, especially if they are tasked with carrying out ordinary human tasks. Humans execute such manipulation operations, including grouping objects before performing a push, in a naturally robust manner, while keeping the number of actions low.

Refer to caption
Figure 1: Top left: the experimental setup. Top right: Scene S2, one of the initial configurations for the experiments. The robot must move cluttered objects blocking access to a target object $\mathcal{T}$ placed at the bottom, close to the wall. Panels (2) to (5) show the pushing actions executed following the plan produced by the proposed method. A simple grasping plan is performed from (6) to (7).

This work proposes strategies for solving object retrieval problems focusing on efficiency and robustness. An example run is shown in Fig. 1, in which the robot must retrieve the target object located at the lower portion of the workspace (a Pringles can of a specific flavor). Our study tackles the challenge by exploring two related sub-strategies: (i) clustering objects to inform the choice of pushing actions, and (ii) searching through and evaluating feasible pushing operations before executing them in the real world. The first is motivated by the human behavior of implicitly grouping objects before pushing them simultaneously; topological tools, such as persistent homology, have been successfully employed to identify manageable object clusters for selecting proper pushing actions [19]. The second leverages Monte-Carlo Tree Search (MCTS) to explore the identified feasible pushing actions, favoring those with high reward in terms of the number of obstacles removed from the path to the target and how dispersed the clusters are after the performed actions. This reward metric is intuitive: removing more obstacles increases the chance of solving the task faster, and dispersing the clusters leads to easier pushing actions in subsequent steps.

The proposed framework integrates MCTS with the informed actions and rewards provided by persistent homology. It is referred to here as PHIM (Persistent Homology Informed actions and rewards for Monte-Carlo tree search). For real-world experiments, the pipeline obtains the initial configuration from perception and then plans via PHIM to produce a high-level sequence of operations that can be executed reliably. The solution is executed on the Baxter robot, either open-loop or by optionally re-planning after each pushing action. In the experimental evaluation, PHIM achieves a significantly higher success rate in solving cluttered problems in constrained, shelf-like workspaces than alternatives, due to its long-horizon planning capability and solution robustness. Furthermore, plans produced by PHIM are visually more natural and human-like.

In summary, this work contributes: 1) PHIM, a principled framework that integrates topological tools and Monte Carlo methods to achieve long-horizon planning for object retrieval via non-prehensile actions.

Refer to caption
Figure 2: Proposed framework integrating Persistent Homology for informed actions/rewards and Monte-Carlo tree search.

2) In comparison to existing methods, including methods that employ topological tools but do not plan over multiple horizons, PHIM delivers significantly improved solution optimality in terms of success rate and the number of actions taken, while incurring little extra computational cost.
3) PHIM demonstrates superior robustness to uncertainty (e.g., in comparison to [19], which lacks real-robot experiments), as its plans can be directly executed on a real Baxter robot with inherent motor uncertainty plus additional perception uncertainty.

2 Related Work

Working in constrained and cluttered environments requires manipulation tasks, such as pick-and-place and object rearrangement. Prehensile manipulation generally assumes accurate pose estimation of objects, which is challenging in cluttered setups with action uncertainty. In contrast, non-prehensile actions can be used in a cluttered and constrained workspace with uncertainty to rearrange multiple objects with a single operation, thus allowing large-scale object manipulation [8]. They are sometimes preferred over picking [12, 17, 18, 13, 5], since pushing can be executed with simpler and smaller end-effectors suited to constrained workspaces. Non-prehensile actions, however, are less predictable and increase uncertainty regarding object placement; this motivates solutions for estimating the outcome of pushing actions [23, 24, 7].

Previous works on reaching through clutter focused on recognizing objects to be relocated so as to create a collision-free region for reaching a target [10, 11]. Uncertainty arising from occlusion or sensor noise complicates this problem, and efforts aim to minimize its effects by inferring object shape [16, 21], reasoning about pose uncertainty [20], or using probabilistic filtering [15]. To achieve higher success rates, human interaction has been proposed to guide a high-level plan that specifies an ordered sequence of objects to be pushed to approximate intermediate goal positions [14]. For faster kinodynamic planning, non-prehensile actions are allowed to have quasi-static interactions [4].

The proposed method takes advantage of persistent homology to inform the selection of efficient and robust pushing actions, outperforming prior efforts in the same domain [14, 4, 19] in terms of planning time and the number of actions. Moreover, it provides high-level solutions that are robust under uncertainty (pose, arm movements, non-prehensile manipulation of objects). Related problems, where the target has to be pushed to a desired goal, have also been explored [1]. Some approaches learn an optimal policy from visual input [22] or aim to compute effective manipulation state/action sequences [3].

3 Problem Setup and Notation

Refer to caption
Figure 3: (left) The workspace $\mathcal{W}$ and the robot arm for the configuration of Fig. 1, where $\mathcal{T}$ is the target. (middle) The path region $\Pi(\mathcal{X})$. (right) Connected components in $\Pi(\mathcal{X})$ using $r=0.116$, where $CC_c = CC_c(\mathcal{X}, r)$ is the closest component to the gripper.

Let $\mathcal{W} \subset \mathbb{R}^3$ be a bounded region with a cross-section given by a polygon, and let $\partial\mathcal{W}$ be its boundary. Movable obstacles $\mathcal{O} = \{o_1, \ldots, o_n\}$ and a target object $\mathcal{T}$ are approximated by cylinders residing inside $\mathcal{W}$. Assume that an arm with a gripper $g$ can reach a target object at any position in $\mathcal{W}$. Furthermore, the arm can access the interior of $\mathcal{W}$ from only one face of $\partial\mathcal{W}$; due to this limited accessibility, it cannot perform overhand grasps or lift the objects. Collisions between $\partial\mathcal{W}$ and the arm are not permitted. In other words, the robot respects the workspace constraints and may move along a 2D plane below the center of mass of the objects to avoid toppling them. Grasping the target $\mathcal{T}$ requires the arm's last link to be collision-free with all obstacles, including the boundary $\partial\mathcal{W}$. The robot is a Baxter with a 7-DoF articulated arm and a gripper attached at its end. The task is to reach and grasp the target object $\mathcal{T}$ with a minimal number of actions used to move obstacles blocking access to $\mathcal{T}$.

The configuration $\mathcal{X}$ is defined as $\mathcal{X} = \{\mathcal{P}, p_{\mathcal{T}}, s\} = \{(p_1, \ldots, p_n), p_{\mathcal{T}}, s\} \in \mathcal{W}^{n+1} \times SE(2)$, where $p_i \in \mathcal{W}$ gives object $o_i$'s coordinates, $p_{\mathcal{T}}$ gives $\mathcal{T}$'s coordinates, and $s$ is the position and orientation of the robot's gripper $g$. To simplify notation, $\mathcal{X}[o_i] = p_i$ and $\mathcal{X}[\mathcal{T}] = p_{\mathcal{T}}$ denote that $o_i$ and $\mathcal{T}$ are at positions $p_i$ and $p_{\mathcal{T}}$, respectively, and $\mathcal{X}[g] = s$ denotes that the gripper $g$ is at pose $s \in SE(2)$. Let $V(p)$ be the subset of $\mathcal{W}$ occupied by an object at position $p$. A configuration $\mathcal{X} \in \mathcal{W}^{n+1} \times SE(2)$ is feasible if no objects overlap and no objects are dropped, toppled, or deformed. The arm is allowed to push multiple objects simultaneously, and the resulting action must lead to a feasible configuration.

4 Method

Persistent homology is a data analysis tool that provides topological features of a space at different spatial resolutions. It is often applied to the analysis of point-cloud data (e.g., see [2] and [9]). Briefly, it considers expanding balls of radius $r$ centered at each point of a collection of points. As the balls grow, the unions of all these balls are tracked as they overlap, giving a 1-parameter family of spaces. For each radius $r$, one can build a Vietoris–Rips complex (an abstract simplicial complex) using the information given by the intersections of the $r$-balls (for more details, see [9]). In this setting, persistent homology is the homology of the Vietoris–Rips complex as a function of $r$. Intuitively, persistent homology counts the number of connected components (object clusters) and holes of various dimensions and keeps track of how they change with the parameter.
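For concreteness, the following is a minimal sketch (a plain union-find over pairwise distances, not the implementation used in this work) of the zero-dimensional persistence information exploited below: as $r$ grows, clusters merge, and each merge radius is a death in the $H_0$ persistence diagram. It assumes the convention that two $r$-balls overlap once their centers are within $2r$, so a merge happens at half the pairwise distance.

```python
import itertools
import numpy as np

def h0_merge_radii(points):
    """Zero-dimensional persistent homology of a planar point set.

    Returns the radii at which two clusters merge (the 'deaths' of the
    H0 persistence diagram).  Two r-balls are taken to overlap when
    their centers are within 2r, so a merge happens at half the
    pairwise distance (equivalent to single linkage / Kruskal's MST).
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    parent = list(range(n))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = sorted(itertools.combinations(range(n), 2),
                   key=lambda ij: np.linalg.norm(points[ij[0]] - points[ij[1]]))
    deaths = []
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:                   # two connected components merge here
            parent[ri] = rj
            deaths.append(np.linalg.norm(points[i] - points[j]) / 2.0)
    return deaths                      # n-1 non-decreasing merge radii

# The number of clusters at radius r is n minus the merges with death <= r.
centers = np.random.rand(7, 2) * 0.4   # hypothetical obstacle centers (meters)
deaths = h0_merge_radii(centers)
print(len(centers) - sum(d <= 0.1 for d in deaths), "clusters at r = 0.1")
```

The list of merge radii is exactly the information read off the persistence diagram below; counting how many merges have occurred by a given radius yields the number of clusters at that scale.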

Refer to caption
Figure 4: The persistence diagram for the positions of the obstacles of Fig. 1.

To find clusterings that persist, it suffices to focus on the zeroth homology (zero-dimensional homology), where $\mathbb{Z}_2$ coefficients are ideal for computing connected components (clusters). As the radius $r$ increases, the zero-dimensional persistent homology records when a ball in one connected component first intersects a ball of a different connected component, merging the two into one; see Fig. 5.

Refer to caption
Figure 5: Examples of clusters (connected components) that persist, obtained for four different radii $r$ from the persistence diagram in Fig. 4. (left) For $r=0.062$, there are 6 different clusters, shown by different markers. (left-middle) For $r=0.1$, there are 4 clusters. (right-middle) For $r=0.116$, two clusters. (right) Only one cluster for $r \geq 0.144$.

Non-Prehensile Manipulation Approach: Given a target $\mathcal{T} \in \mathcal{O}$, denote by $\Pi(\mathcal{X}) \subset \mathcal{W}$ the path region, inside which the robot arm will push obstacles to clear the path to the target. The idea is to use persistent homology to find clusters (connected components) inside $\Pi(\mathcal{X})$ and use this information to push objects. Since $\Pi(\mathcal{X})$ is smaller than the workspace, the arm has to fit inside $\Pi(\mathcal{X})$ to perform the action; thus the shape and size of $\Pi(\mathcal{X})$ depend on $\mathcal{W}$ and the width $w$ of the arm. For simplicity, $\mathcal{W}$ is considered to be a rectangular shelf described by the parallel shelf walls $N$ and $S$ as in Fig. 3.

To describe $\Pi(\mathcal{X})$, first select the axis $x$ to be associated with the depth of the shelf and $y$ to correspond to the width of the shelf, with $y=0$ at the south wall $S$ and $y$ increasing toward the north wall $N$. Then, $\Pi(\mathcal{X})$ is defined to be the rectangle with corners $(\mathcal{X}[g]_x, \mathcal{X}[\mathcal{T}]_y - w)$ and $(\mathcal{X}[\mathcal{T}]_x, \mathcal{X}[\mathcal{T}]_y + w)$, where $w$ is the width of the arm, $\mathcal{X}[\mathcal{T}] = (\mathcal{X}[\mathcal{T}]_x, \mathcal{X}[\mathcal{T}]_y)$, and $\mathcal{X}[g] = (\mathcal{X}[g]_x, \mathcal{X}[g]_y)$; see Fig. 3. When the target is too close to the walls $S$ and $N$, a planar rotation matrix $R_{-\phi}$ can be applied to obtain $\Pi(\mathcal{X})$ with incidence angle $\phi$, as in [19].

Given $\Pi(\mathcal{X})$, obstacles can be clustered by applying persistent homology inside $\Pi(\mathcal{X})$. Define $\mathcal{CC}(\mathcal{X}, r)$ to be the collection of connected components (clusters) for radius $r > 0$. The zero-dimensional persistent homology suffices to obtain the connected components and their generators. The goal is for the arm to push all obstacles in the connected component closest to the gripper $\mathcal{X}[g]$, denoted by $\mathcal{CC}(\mathcal{X}, r)_c$. A performed action $a$ is successful when all obstacles in $\mathcal{CC}(\mathcal{X}, r)_c$ are removed from $\Pi(\mathcal{X})$.
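As an illustration of this step, the following sketch computes the path region and the cluster closest to the gripper (function and variable names are illustrative, not from the released code): single-linkage clustering at distance threshold $2r$ stands in for the zero-dimensional persistent homology at a fixed radius $r$, `half_width` plays the role of $w$ above, and the closest component is taken as the one whose centroid is nearest to the gripper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def path_region(gripper_xy, target_xy, half_width):
    """Axis-aligned rectangle Pi(X) spanning from the gripper to the target."""
    (gx, gy), (tx, ty) = gripper_xy, target_xy
    x_lo, x_hi = sorted((gx, tx))
    return x_lo, ty - half_width, x_hi, ty + half_width

def closest_cluster(obstacles, gripper_xy, target_xy, half_width, r):
    """Cluster the obstacles inside Pi(X) at radius r and return CC_c,
    the cluster whose centroid is closest to the gripper."""
    x_lo, y_lo, x_hi, y_hi = path_region(gripper_xy, target_xy, half_width)
    obstacles = np.asarray(obstacles, dtype=float)
    inside = obstacles[(obstacles[:, 0] >= x_lo) & (obstacles[:, 0] <= x_hi)
                       & (obstacles[:, 1] >= y_lo) & (obstacles[:, 1] <= y_hi)]
    if len(inside) <= 1:
        return inside                          # path already clear (or trivial)
    # Components of the union of r-balls: centers within 2r of each other.
    labels = fcluster(linkage(inside, method="single"),
                      t=2 * r, criterion="distance")
    clusters = [inside[labels == k] for k in np.unique(labels)]
    dists = [np.linalg.norm(c.mean(axis=0) - np.asarray(gripper_xy))
             for c in clusters]
    return clusters[int(np.argmin(dists))]
```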

Due to the unpredictability of a non-prehensile action $a$, it is advantageous to assign a reward to each performed action based on the number of obstacles removed from the path region and the number of connected components created by the pushing action $a$. The reward function $\mathrm{rwd}$ in MCTS for a given action $a$ is defined by $\mathrm{rwd}(a) = b(a) + t(a) \cdot m(a)$, where $b(a) = \#\Pi(\mathcal{X}) - \#\Pi(ch(\mathcal{X}, a))$ is the number of obstacles removed from the path region; $m(a) = \max(\#\mathcal{CC}(ch(\mathcal{X}, a), r) - \#\mathcal{CC}(\mathcal{X}, r), 0)$ is the number of connected components created by the action $a$; and $t(a) = 1$ if $b(a) > 0$ and $0$ otherwise. Note that the term $m(a)$ adds a positive reward for configurations that increase the number of clusters (connected components), resulting in sparser obstacle arrangements that lead to easier subsequent pushing actions.
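Written out directly, the reward is a small function of the obstacle and cluster counts before and after the simulated push (a sketch; the counts are assumed to be supplied by the push simulator):

```python
def reward(num_in_path_before, num_in_path_after,
           num_clusters_before, num_clusters_after):
    """rwd(a) = b(a) + t(a) * m(a) for a pushing action a.

    b(a): obstacles removed from the path region by the action,
    m(a): additional clusters (connected components) created by the action,
    t(a): 1 if at least one obstacle was removed, 0 otherwise.
    """
    b = num_in_path_before - num_in_path_after
    m = max(num_clusters_after - num_clusters_before, 0)
    t = 1 if b > 0 else 0
    return b + t * m
```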

The pushing actions are based on the circumscribed rectangle containing $\mathcal{CC}(\mathcal{X}, r)_c$, denoted by $\mathcal{R}(\mathcal{X}, r)$. Each action is a sweeping movement either from the bottom of $\mathcal{R}(\mathcal{X}, r)$ to its top or from its top to its bottom, denoted $\mathsf{up}$ and $\mathsf{down}$, respectively.

Observe that for a given configuration $\mathcal{X}$, each radius $r > 0$ may define a different $\mathcal{R}(\mathcal{X}, r)$, and without a proper discretization of the radii, the tree will grow exponentially. The persistence diagram provides an informed way to select radii that lead to robust actions and bound this exponential growth. The radii with this property are those that persist under a selected hyper-parameter $\nu$: more specifically, a persistent radius for a given parameter $\nu > 0$ is a radius $r > 0$ such that the number of connected components in the persistence diagram remains the same between $r$ and $r + \nu$. Denote by $R_\nu$ the set of all persistent radii for a given value $\nu > 0$. In the persistence diagram of Fig. 4, $R_{0.015} = \{0.062, 0.1, 0.116, 0.144\}$. Note that $R_\nu$ always contains the radius at which the last connected component dies, so $R_\nu \neq \emptyset$.

Another important parameter is the gripper width $h$: for radii less than $h$, the gripper cannot fit between two clusters, so $R_\nu$ can be further reduced by removing all radii smaller than $h$. Denote this new collection of radii by $R(\mathcal{X}) = R_{\nu, h}(\mathcal{X})$. For example, in the persistence diagram of Fig. 4, $R_{\nu, h}(\mathcal{X}) = \{0.062, 0.1, 0.116, 0.144\}$ for $\nu = 0.015$ and $h = 0.05$. In [19], two algorithms are proposed: PHIA takes the minimal radius of $R_{\nu, h}$ and performs the pushing actions, while PHIS builds a tree whose nodes are configurations, propagates by performing the pushing actions for each radius in $R_{\nu, h}$, and then selects the path that leads to success while minimizing the number of actions.
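A sketch of how $R_{\nu,h}(\mathcal{X})$ could be extracted, assuming the candidate radii are taken from the merge (death) radii of the $H_0$ diagram, as in the example $R_{0.015}$ of Fig. 4:

```python
def persistent_radii(deaths, nu, h):
    """Informed radii R(X) = R_{nu,h}(X) from the H0 merge (death) radii.

    A radius r is kept if the number of clusters stays constant on
    (r, r + nu], i.e., no other death falls inside that window.  Radii
    smaller than the gripper width h are discarded, since the gripper
    cannot fit between the corresponding clusters.
    """
    deaths = sorted(deaths)
    persistent = [r for i, r in enumerate(deaths)
                  if all(d > r + nu for d in deaths[i + 1:])]
    # The radius at which the last component dies is always kept above,
    # so R_nu is never empty; now apply the gripper-width filter.
    return [r for r in persistent if r >= h]

# Hypothetical death values (meters): 0.055 is dropped (0.062 follows within nu).
print(persistent_radii([0.055, 0.062, 0.1, 0.116, 0.144], nu=0.015, h=0.05))
```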

Given a configuration $\mathcal{X}$, the set of all possible actions is $R(\mathcal{X}) \times \{\mathsf{up}, \mathsf{down}\}$; hence the expansion stage of MCTS for $\mathcal{X}$ produces children $ch(\mathcal{X}, a)$, where $a = (r, d) \in R(\mathcal{X}) \times \{\mathsf{up}, \mathsf{down}\}$ with $r$ a radius and $d$ a direction. An action $a$ may result in failure either by dropping/toppling obstacles or by pressing obstacles against the wall. In all such cases, the child node obtained by an infeasible action is marked as a failure and no reward is given. The framework that integrates the actions and rewards informed by persistent homology with MCTS is named, in short, PHIM (Persistent Homology Informed actions and rewards for Monte-Carlo tree search). For sample pushing actions planned by PHIM, see Fig. 1.
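The search layer can be summarized by the following skeleton (a sketch, not the released implementation): `actions_of` and `simulate_push` are hypothetical caller-supplied hooks standing in for the persistent-homology action generator $R(\mathcal{X}) \times \{\mathsf{up}, \mathsf{down}\}$ and the physics rollout, and the rollout stage is abbreviated to the immediate reward of the expanded action.

```python
import math
import random

class Node:
    """One MCTS node: a configuration reached by a (radius, direction) action."""
    def __init__(self, config, actions, parent=None, action=None):
        self.config, self.parent, self.action = config, parent, action
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = list(actions)          # unexpanded (r, up/down) actions

def uct_child(node, c=math.sqrt(2)):
    # Upper Confidence Bound for Trees; c = sqrt(2) as in the experiments.
    return max(node.children,
               key=lambda n: n.value / n.visits
               + c * math.sqrt(math.log(node.visits) / n.visits))

def mcts(root_config, actions_of, simulate_push, iterations=200):
    """actions_of(config) -> list of (radius, direction) actions;
    simulate_push(config, action) -> (child_config, reward, feasible)."""
    root = Node(root_config, actions_of(root_config))
    for _ in range(iterations):
        node, value = root, 0.0
        # Selection: descend through fully expanded nodes via UCT.
        while not node.untried and node.children:
            node = uct_child(node)
        # Expansion: simulate one untried pushing action.
        if node.untried:
            action = node.untried.pop(random.randrange(len(node.untried)))
            child_config, rwd, feasible = simulate_push(node.config, action)
            child = Node(child_config,
                         actions_of(child_config) if feasible else [],
                         parent=node, action=action)
            node.children.append(child)
            node = child
            # Infeasible pushes (toppling, pressing against a wall) are kept
            # in the tree as failures with zero reward.
            value = rwd if feasible else 0.0
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += value
            node = node.parent
    return max(root.children, key=lambda n: n.visits).action if root.children else None
```

After the search budget is exhausted, the most-visited action at the root is executed, or the whole best branch is extracted for offline execution, as in the experiments below.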

5 Experiments

To evaluate the proposed method, 25 real-world and 50 simulated experiments were performed and compared against the baselines described below. The real-world scenes S1 to S5 are shown in Figs. 1 and 6. The simulated scenes are labeled M1 through M10, two of which are depicted in Fig. 7.

Experimental setup: The simulated experiments were run on an Ubuntu workstation with an Intel Core i5-8259U (3.8 GHz) and 16 GB RAM. Gazebo was used to simulate the actions, and MoveIt was used to form motion planning queries to the Bi-EST planner [6] in OMPL.

Refer to caption
Figure 6: (top row) the randomized baselines fail to find a plan; (bottom left - S4) GRTC-H fails to find a plan, and OOA fails to execute the plan; (bottom right - S5) GRTC-H, OOA, and PHIA perform the most actions. Scene S2 is given in Fig. 1.

For real-world experiments, the Baxter robot was used to perform the pushing actions and grasp the target. The choice of Baxter was ideal for the real experiments since it performs its motions with a small degree of imprecision, which helps demonstrate the robustness of the proposed approach. An overview of the workspace is provided by an Intel RealSense D435 RGB-D camera. Each object's position is detected via a 2D fiducial marker on its top using Chilitags.

Choice of parameters: 1 target and 7 obstacles, tall enough that their centers of mass lie above the contact points of Baxter's pushing actions. Bricks are used to create artificial walls that simulate a shelf workspace of the desired size while still allowing recording and perception over the table. The gripper can lift the target slightly to perform the retrieval; however, it is not allowed to retrieve the target by passing over the obstacles. The objects are heavy enough to provide friction and emulate realistic grasping.

Refer to caption
Figure 7: Randomized scenes M2 and M5 in simulation with obstacles of varying shapes.

The hyper-parameter $\nu$ for the persistent homology algorithms is set to $0.015$ m, the margin of error of the arm's pushing actions, chosen empirically as the difference between the desired position and the actual position after performing the actions.

Another useful hyper-parameter is the gripper width $h$, which dictates whether the gripper can fit between two clusters (connected components). In the experiments, $h$ is taken to be the real width plus an extra margin of error; thus $h$ is selected to be greater than $0.03$ m $+$ $0.015$ m, i.e., $h = 0.05$ m. Furthermore, the exploration parameter of the Monte-Carlo tree search is set to the conventional value $\sqrt{2}$. The overall planning time threshold for all methods is 500 seconds; above this threshold, failure is reported and the real-world experiment is not performed.

Collisions: Soft collisions between obstacles and walls are allowed. Nevertheless, the simulated actions prevent hard collisions and the arm pressing obstacles against the wall. In fact, the arm pressing obstacles against the wall was the only issue that arose when the simulated plans were executed in the real-world scenarios.

Refer to caption
Figure 8: Real-world actions (top) and planning time (bottom) performed by each method for scenes S1 to S5 of Fig. 1 and 6, for GRTC-H, OOA, PHIA, PHIS, and PHIM (ours), where applicable. All PHI* methods do reasonably well; PHIM always provides the best solution quality.

Surprisingly, the challenge of avoiding dropping and toppling was easily addressed. To avoid pressing objects against the wall, the walls in the simulated environment are treated as movable objects with five times the weight of one obstacle, and any action that moves a wall is considered infeasible. Incorporating this extra condition increased the chances of failure during the planning phase. However, experiments showed that PHIM could still find high-level solutions with minimal pushing actions without significantly increasing planning time.

Comparisons: The proposed PHIM is compared against PHIA, PHIS, OOA [19], and the modified GRTC-Heuristic [14]. All algorithms were adapted to prevent the arm from pressing objects against the wall; without this extra condition, PHIA, OOA, and GRTC-Heuristic fail in all real-world experiments. Adding this check does not require changing the algorithms. OOA is a simple approach that pushes each obstacle in the path region $\Pi(\mathcal{X}_0)$ of the initial configuration $\mathcal{X}_0$ one by one. The modified GRTC-Heuristic (GRTC-H) is based on the algorithm in [14]: it pushes obstacles in a straight line to goal regions outside of $\Pi(\mathcal{X}_0)$, where each goal is randomly selected outside of $\Pi(\mathcal{X}_0)$ so as not to overlap any other obstacle. Both baselines are randomized approaches that execute pushing actions without using the clustering information from persistent homology.

Robustness: To evaluate the robustness of the proposed method, in the real-world experiments the position of each object differed slightly from the position given to the planner, with a noise of 3 cm added to the tag-based positions. Moreover, the planner simulated the whole sequence of pushing actions to solve the retrieval problem without re-planning after every step; in other words, the planner solved the problem offline in a simulated world. This choice of offline planning is motivated by the robustness of the algorithms based on persistent homology.

Discussion: The success rate of GRTC-H is comparatively low, mainly because it spends a long time planning feasible actions (see Fig. 8). The planning time of OOA is the second fastest, as it does not perform a search and only finds the obstacle closest to the gripper. However, OOA has a high chance of failing during execution because it is not robust: when it pushes an obstacle that is close to another, it can push them together, changing the configuration for the next action and thus lowering the success rate.

Refer to caption
Figure 9: Left to right: average number of actions, planning time, planning success rate, and execution success rate for a) real-world and b) simulated experiments. PHIM provides the best solution quality and is much faster than PHIS.

PHIM plans fewer actions than the other methods and has a $100\%$ success rate both in simulation and in real-world execution, per Fig. 9. It achieves this performance without substantially increasing planning time, showing that the rewards based on information collected by persistent homology are quite useful. In summary, when comparing all methods, PHIM executes the most robust actions and reduces the number of actions needed to solve the problem without significantly increasing planning time.

6 Conclusion

This paper introduces a robust and efficient method for planning non-prehensile manipulation actions in clutter. It uses topological tools and MCTS to obtain high-quality solutions for object retrieval problems. The solutions are robust under perturbations and gracefully handle the uncertainty associated with object poses and arm actions. Furthermore, the method allows actions to be planned offline with physics simulation and then executed successfully online. Real-world experiments show that PHIM has a higher success rate and finds robust solutions faster than the alternatives. It reduces the number of pushing actions required and decreases planning time by avoiding the consideration of actions whose simulation is time-expensive.

The proposed framework can be directly applied to highly dense clusters of objects. In such scenarios, however, it is expected to perform similarly to the other methods, since highly dense setups have a trivial topological shape to exploit (fewer ways to cluster). A possible direction to explore is selecting obstacles to be removed via pick-and-place, resulting in easier follow-up pushing actions to clear the path to the target.

References

  • [1] Cosgun, A., Hermans, T., Emeli, V., Stilman, M.: Push planning for object placement on cluttered table surfaces. In: IROS 2011. pp. 4627–4632. IEEE (2011)
  • [2] Edelsbrunner, H.: A short course in computational geometry and topology. SpringerBriefs in Applied Sciences and Technology, Springer, Cham (2014)
  • [3] Haustein, J.A., Arnekvist, I., Stork, J., Hang, K., Kragic, D.: Learning manipulation states and actions for efficient non-prehensile rearrangement planning. arXiv preprint (2019)
  • [4] Haustein, J.A., King, J., Srinivasa, S.S., Asfour, T.: Kinodynamic randomized rearrangement planning via dynamic transitions between statically stable states. In: ICRA 2015. pp. 3075–3082. IEEE (2015)
  • [5] Havur, G., Ozbilgin, G., Erdem, E., Patoglu, V.: Geometric rearrangement of multiple movable objects on cluttered surfaces: A hybrid reasoning approach. In: ICRA 2014. pp. 445–452. IEEE (2014)
  • [6] Hsu, D., Latombe, J.C., Motwani, R.: Path planning in expansive configuration spaces. In: ICRA 1997. vol. 3, pp. 2719–2726. IEEE (1997)
  • [7] Huang, B., Han, S.D., Boularias, A., Yu, J.: Dipn: Deep interaction prediction network with application to clutter removal. In: ICRA 2021 (2021)
  • [8] Huang, E., Jia, Z., Mason, M.T.: Large-scale multi-object rearrangement. In: ICRA 2019. pp. 211–218. IEEE (2019)
  • [9] Kaczynski, T., Mischaikow, K.M., Mrozek, M., Mischaikow, K.: Computational homology. Applied mathematical sciences; v. 157, Springer, New York (2004)
  • [10] Lee, J., Cho, Y., Nam, C., Park, J., Kim, C.: Efficient obstacle rearrangement for object manipulation tasks in cluttered environments. In: ICRA 2019. pp. 183–189. IEEE (2019)
  • [11] Nam, C., Lee, J., Cho, Y., Lee, J., Kim, D.H., Kim, C.: Planning for target retrieval using a robotic manipulator in cluttered and occluded environments. arXiv preprint (2019)
  • [12] Nieuwenhuisen, M., Droeschel, D., Holz, D., Stückler, J., Berner, A., Li, J., Klein, R., Behnke, S.: Mobile bin picking with an anthropomorphic service robot. In: ICRA 2013. pp. 2327–2334. IEEE (2013)
  • [13] Pan, Z., Hauser, K.: Decision making in joint push-grasp action space for large-scale object sorting. arXiv preprint (2020)
  • [14] Papallas, R., Dogar, M.R.: Non-prehensile manipulation in clutter with human-in-the-loop. In: ICRA 2020. pp. 6723–6729. IEEE (2020)
  • [15] Poon, J., Cui, Y., Ooga, J., Ogawa, A., Matsubara, T.: Probabilistic active filtering for object search in clutter. In: ICRA 2019. pp. 7256–7261. IEEE (2019)
  • [16] Price, A., Jin, L., Berenson, D.: Inferring occluded geometry improves performance when retrieving an object from dense clutter. arXiv preprint (2019)
  • [17] Shome, R., Tang, W.N., Song, C., Mitash, C., Kourtev, H., Yu, J., Boularias, A., Bekris, K.E.: Towards robust product packing with a minimalistic end-effector. In: ICRA 2019. pp. 9007–9013. IEEE (2019)
  • [18] Song, C., Boularias, A.: Object rearrangement with nested nonprehensile manipulation actions. In: IROS 2019. pp. 6578–6585. IEEE (2019)
  • [19] Vieira, E.R., Nakhimovich, D., Gao, K., Wang, R., Yu, J., Bekris, K.E.: Persistent homology for effective non-prehensile manipulation. In: ICRA 2022. pp. 1918–1924 (2022)
  • [20] Wang, R., Mitash, C., Lu, S., Boehm, D., Bekris, K.E.: Safe and effective picking paths in clutter given discrete distributions of object poses. In: IROS 2020. pp. 5715–5721. IEEE (2020)
  • [21] Wong, L.L., Kaelbling, L.P., Lozano-Pérez, T.: Manipulation-based active search for occluded objects. In: ICRA 2013. pp. 2814–2819. IEEE (2013)
  • [22] Yuan, W., Stork, J.A., Kragic, D., Wang, M.Y., Hang, K.: Rearrangement with nonprehensile manipulation using deep reinforcement learning. In: ICRA 2018. pp. 270–277. IEEE (2018)
  • [23] Zhou, J., Hou, Y., Mason, M.T.: Pushing revisited: Differential flatness, trajectory planning, and stabilization. IJRR 38(12-13), 1477–1489 (2019)
  • [24] Zhou, J., Mason, M.T., Paolini, R., Bagnell, D.: A convex polynomial model for planar sliding mechanics: theory, application, and experimental validation. IJRR 37(2-3), 249–265 (2018)