
Monte Carlo Tree Search for a single target search game on a 2-D lattice

Elana Kozak, Department of Mathematics, United States Naval Academy, Annapolis, Maryland
Scott Hottovy, Department of Mathematics, United States Naval Academy, Annapolis, Maryland
Abstract

Monte Carlo Tree Search (MCTS) is a branch of stochastic modeling that utilizes decision trees for optimization, mostly applied to artificial intelligence (AI) game players. This project imagines a "game" in which an AI player searches for a stationary target within a 2-D lattice. We analyze its behavior with different target distributions and compare its efficiency to the Lévy flight search, a model for animal foraging behavior. In addition to simulated data analysis we prove two theorems about the convergence of MCTS when computation constraints disappear.

I Introduction

The problem of search and detection arises in many scenarios, both theoretical and practical, and admits many possible variations. For example, a simple search could involve one search agent and one target, both moving randomly throughout a space gage94 . Many methods for finding such targets have been tested and applied with varying levels of success bellman . We propose a new method for the problem of search and detection: Monte Carlo Tree Search (MCTS).

To test the MCTS method we start with a simple 2-D lattice search problem. The domain is an $N\times N$ lattice with periodic boundaries and discrete locations for the target and searcher to occupy. That is, the searcher at time $t$ has position

$S_{t}=(x,y), \text{ such that } x,y\in\{1,2,3,\ldots,N\}.$ (1)

Furthermore, the target $T$ is placed within the search domain according to some desired distribution $P(x,y)$ on the lattice. The searcher starts at a predetermined location, typically $(1,1)$.

While playing the "game" the searcher can make moves in one of four directions: up, down, left, or right. That is, if $S_{t}=(x,y)$ then

$S_{t+1}=S_{t}+\xi,$ (2)

where $\xi\in\{(0,1),(0,-1),(1,0),(-1,0)\}$. All steps are one unit in length and the boundaries are periodic, i.e. if $S_{t}=(x,N)$ and $\xi=(0,1)$ then $S_{t+1}=(x,1)$. Similarly for the other boundaries. The target is stationary.
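For concreteness, a minimal Python sketch of one such move with periodic wrapping is given below (the authors' implementation is in MATLAB; the function and variable names here are ours and purely illustrative).

# One searcher move on the periodic N x N lattice, following Eq. (2).
# Coordinates are taken in {1, ..., N}; up, down, right, left unit moves.
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(position, move, N):
    """Apply a unit move with periodic (toroidal) wrapping."""
    x, y = position
    dx, dy = move
    # Shift to 0-based indices, wrap with modulo, shift back to 1-based.
    return ((x - 1 + dx) % N + 1, (y - 1 + dy) % N + 1)

# Example: stepping "up" from (x, N) wraps around to (x, 1), as in the text:
# step((3, 8), (0, 1), N=8) returns (3, 1).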

The solution to this search problem is a path to the target, which can be measured by the number of steps. An optimal path is defined as a set of moves that reaches the target in the minimum number of steps; this requires the searcher to always move directly towards the target in either the horizontal or vertical direction. A detection occurs when the searcher occupies the target's position, $S_{\tau}=T$ for some time $\tau$. An example problem and one optimal path are shown in figure 1.

Figure 1: Example 2-D search problem and one of its optimal solutions.

The main focus of this paper is a method called Monte Carlo Tree Search (MCTS), defined as "a method for finding optimal decisions in a given domain by taking random samples in the decision space and building a search tree according to the results" BrowneSurvey . The method has so far been used to develop artificial intelligence game players for games such as Go and chess. Important advantages include the ability to set a decision time limit and the lack of domain knowledge needed for success TreeSearchBoardGame . While very useful in certain board games, the process is at its core a decision-making method, allowing us to apply the theoretical concepts to any choice, including search paths.

The Monte Carlo Tree Search depends on a game tree which consists of many nodes that represent possible game states. For our “game” of search and detection, each searcher location is a game state. The MCTS method works by running many simulations to learn about the game tree in order to choose its next move. The general process includes four steps: selection, expansion, playout, and back-propagation. Most significantly, the selection policy we use is the Upper Confidence Bound for Trees (UCT), which chooses the node with maximum UCT value defined by

$UCT(v_{i},v)=\frac{Q(v_{i})}{N(v_{i})}+c\sqrt{\frac{\log(N(v))}{N(v_{i})}}.$ (3)

Here, $v$ represents the root node, $v_{i}$ represents a child node, $Q(v)$ is the average reward, $N(v)$ is the number of visits, and $c$ is a variable constant that balances exploration and exploitation ComparisonMCTS . For this study we choose $c=\sqrt{2}$ as it has been shown to be the optimal exploration coefficient for similar problems CITE.... The reward function $Q(v_{i})$ is chosen as

$Q(v_{i})=\frac{1}{\tau_{v_{i},N}},$

where $\tau_{v_{i},N}$ is the stopping time for the default policy to find the target for the first time starting from node $v_{i}$ on an $N\times N$ grid. Thus a shorter time results in a higher reward.
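To make the selection rule concrete, here is a brief Python sketch of Eq. (3) (the authors' code is written in MATLAB; the function names below are ours, and the reward bookkeeping shown is one straightforward reading of the text).

import math

def playout_reward(tau):
    # Reward contributed by a single playout that found the target in tau steps.
    return 1.0 / tau

def uct_value(rewards, child_visits, parent_visits, c=math.sqrt(2)):
    # Mean playout reward Q(v_i)/N(v_i) plus the exploration bonus of Eq. (3).
    return sum(rewards) / child_visits + c * math.sqrt(math.log(parent_visits) / child_visits)

# The tree policy selects the child node with the largest uct_value.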

For each simulation, the Monte Carlo Tree Search will follow a default rollout policy to reach a terminal node. The goal of this policy is to give an accurate representation of how the chosen child node may affect the end result. Additionally, the rollout policy should take minimal computing power so that the simulations can be completed as quickly as possible. In this work we explore two different rollout policies: random selection and Lévy flights. Examples of both are found in figure 2.

First, the most basic rollout policy is random selection. This is exactly as it sounds: the program chooses one of its available moves uniformly at random. This takes the least computing power and usually yields good results, although with high variance.

Second, we propose the Lévy flight search as a more complex rollout policy. This method mimics the way some animals search for food in their environment, utilizing a few large jumps within a set of many small steps DynamicExtending . The direction is chosen randomly but the distance is drawn from a stable distribution VisInRandomSearches :

$P(l_{j})\sim l_{j}^{-\mu},\quad 1<\mu\leq 3.$ (4)
Figure 2: Example paths for two different rollout policies: (a) random walk; (b) Lévy flight search. The long lines in the Lévy flight search are artifacts from the periodic boundaries.
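A minimal Python sketch of one Lévy step, assuming a minimum step length of one lattice unit and a uniformly random direction, is the following inverse-CDF draw from the power law in Eq. (4) (names are illustrative, not the authors' code).

import random

def levy_step(mu=2.0):
    # Inverse-CDF sample of P(l) ~ l^(-mu) with l >= 1, rounded to lattice units.
    u = random.random()
    length = max(1, round(u ** (-1.0 / (mu - 1.0))))
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    return dx * length, dy * length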

Although Monte Carlo Tree Search is most often applied to games, its foundation is in decision theory, and it is beginning to be applied to a much wider array of problems mandziuk2018 . A problem closely related to the search-and-detection scenario described above is the multi-armed bandit problem auer2002 ; kocsis2006 . There a general game is played where, given a node $v$, many different arms can be played, and each play ends in a reward between zero and one. In a follow-on paper kocsis2006improved , a rigorous proof of convergence was given under restrictive assumptions on the game and the rewards.

In this work we show that MCTS is an efficient method given various amounts of information. For example, given total information about the target (i.e., it is drawn from a delta distribution), the searcher finds a nearly optimal path in a finite decision time. As the information becomes more uncertain (the distribution of the target widens into a normal distribution), MCTS outperforms, in number of steps to find the target, any default search policy such as a random walk, Lévy flight search, or nearly self-avoiding random walk. The proof for the delta-distributed target could be done by showing that the reward function $Q$ satisfies the assumptions stated in kocsis2006improved . However, the proof given here is streamlined and shows which parts of the default policy are key: namely, the default policy must have a stopping time which, on average, is shorter the closer one starts to the target. For the unknown-target case the theorem and proof here are novel. They show that the nearly self-avoiding random walk, i.e. searching only in sites that have not been searched before, is the optimal behavior when there is no information about the target.

The outline of the paper is as follows. In section II the search algorithm is defined for the UCT as well as the target distribution. In section III, we present simulation data for various cases of information on the target, ranging from complete information (a delta-distributed target) to a fully unknown target (uniformly distributed). In section IV the theorems for convergence to the optimal path for the delta-distributed target, and convergence to a nearly self-avoiding random walk for the uniformly distributed target, are stated and proved. In section V the results are summarized.

II Methodology

For each target distribution the MATLAB™ program follows a similar setup github . First, we set the parameters such as grid size ($N$), number of trials ($T$), number of loops/simulations ($L$), vision radius ($r_{v}=1$), and exploration constant ($c$). Of note, the vision radius is the maximum distance between searcher and target which registers as a detection. The exploration constant is a parameter of the UCT algorithm which balances the tendency to exploit vs. explore the decision space BrowneSurvey . Next, we place the target in the search space according to the desired distribution.

There are three general cases of this search problem based on the information known about the target: a completely known target drawn from a delta distribution, a completely unknown target drawn from the uniformly random distribution, or a partially known target drawn from a Gaussian distribution with some specified standard deviation $\sigma$. For this paper, we use the Gaussian distribution as the general target distribution; the case can be changed by varying the standard deviation $\sigma$. The target distribution is defined as follows. Let $(x,y)\in\mathbb{R}^{2}$ be normally distributed random variables with mean $N/2$ and standard deviation $\sigma$ for both $x$ and $y$. Then the target is selected by

$(x,y)\sim\mathcal{N}(N/2,\sigma)$ (5)
$T=(T_{x},T_{y})=\left(\arg\min_{i,j\in\mathbb{Z}}\{|i-x|^{2}+|j-y|^{2}\}\right)\mod(N,N)+(1,1)$ (6)

where the modulo operator is applied to each component. The searcher is started at $(1,1)$ for all trials.
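A short Python sketch of this placement, following Eqs. (5)-(6) by rounding to the nearest lattice point and wrapping onto the grid (the authors' MATLAB code may differ in details), is:

import random

def place_target(N, sigma):
    x = random.gauss(N / 2, sigma)   # Eq. (5)
    y = random.gauss(N / 2, sigma)
    # Nearest lattice point, wrapped onto {1, ..., N} as in Eq. (6).
    Tx = round(x) % N + 1
    Ty = round(y) % N + 1
    return (Tx, Ty)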

The Monte Carlo Tree Search section of the code is repeated until the searcher is within $r_{v}$ of the target. Before each move, the program plays out $L$ loops. In each loop a practice target is placed according to the known distribution of the real target. The UCT policy is used to select a node, then the rollout policy plays the game to completion. The number of steps taken in each loop is recorded and added to the data for each searcher location that the simulation visited. After all the loops are completed the searcher will choose the best move and update its location.
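The per-move procedure just described can be summarized in the following hypothetical Python outline; it relies on the helper sketches above (MOVES, step, place_target) plus an assumed rollout_steps routine that plays the default policy until the practice target is found, and it is not the authors' MATLAB implementation.

import math
import random

def choose_move(searcher, N, sigma, L, c=math.sqrt(2)):
    stats = {m: {"Q": 0.0, "n": 0} for m in MOVES}    # per-child total reward and visit count
    for loop in range(1, L + 1):
        practice_target = place_target(N, sigma)      # practice target from the known distribution
        unvisited = [m for m in MOVES if stats[m]["n"] == 0]
        if unvisited:
            move = random.choice(unvisited)           # visit every child once first
        else:
            move = max(MOVES, key=lambda m: stats[m]["Q"] / stats[m]["n"]
                       + c * math.sqrt(math.log(loop) / stats[m]["n"]))
        # Roll out the default policy from the child node and back-propagate 1/tau.
        tau = 1 + rollout_steps(step(searcher, move, N), practice_target, N)
        stats[move]["Q"] += 1.0 / tau
        stats[move]["n"] += 1
    # After L loops the real move is the child with the best average reward.
    return max(MOVES, key=lambda m: stats[m]["Q"] / max(stats[m]["n"], 1))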

After each trial we record the total number of steps taken and compare that to the optimal steps from searcher to target. For an accurate comparison, we use the average steps over optimal (ASOO) as the final metric.
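One natural reading of this metric, shown in the short Python sketch below, is the mean excess of realized steps over the optimal step count across trials; the exact form used in the authors' code may differ.

def average_steps_over_optimal(steps_taken, optimal_steps):
    # Mean of (realized steps - optimal steps) across trials.
    return sum(s - o for s, o in zip(steps_taken, optimal_steps)) / len(steps_taken)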

Throughout this work we compare the MCTS results to those of a random walk (RW), a nearly self-avoiding random walk (NSARW), and a simple Lévy flight search (LFS). Both the RW and LFS rollout policies are described in section I. The NSARW is similar to a random walk except that it avoids previously visited locations. It is also called a weakly (or restricted) self-avoiding random walk and has been studied extensively shuler2009 ; bauerschmidt2012 . Here, the NSARW means that each directional step is chosen randomly out of the options that have not yet been visited. If all of the options have been visited at least once, then the direction is chosen randomly out of the options visited the minimum number of times. An example NSARW path is seen in figure 3, which shows the step counts for each location in one realization.

Figure 3: Example step counts for a realization of a nearly self-avoiding random walk.
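An illustrative Python sketch of the NSARW move rule (using the MOVES and step helpers sketched earlier; not the authors' code) is:

import random

def nsarw_move(position, visit_counts, N):
    # Choose uniformly among the neighboring sites visited the fewest times so far.
    neighbors = [step(position, m, N) for m in MOVES]
    fewest = min(visit_counts.get(p, 0) for p in neighbors)
    return random.choice([p for p in neighbors if visit_counts.get(p, 0) == fewest])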

III Simulation Results

To simulate various cases we vary the standard deviation $\sigma$ of the target distribution. We can view this as a range, where $\sigma=0$ corresponds to the known-target (delta distribution) case and $\sigma=\infty$ corresponds to an unknown target (uniformly random distribution). These distributions are shown in figure 4 for four different values of $\sigma$. Accordingly, the simulated data show that as $\sigma$ increases, performance decreases, leveling out to match the unknown-target case (a nearly self-avoiding random walk). This data is shown in figure 5. It is also important to note that for any value of $\sigma$ the Monte Carlo Tree Search with random walk default policy outperforms both a basic Lévy flight search (LFS) and a random walk (RW).

Figure 4: Example Gaussian target histograms with various $\sigma$ values: (a) delta distribution; (b) small $\sigma$ example distribution; (c) large $\sigma$ example distribution; (d) uniform distribution. Each histogram is run with $10^{4}$ targets and bins at each lattice point $(i,j)\in\mathbb{Z}^{2}$.
Figure 5: Performance with a Gaussian Target Distribution, 95% Confidence Interval. Each box and whisker is taken over $10^{3}$ trials.

III.1 Special Case: Known Target

One special case of the Gaussian distribution occurs when $\sigma=0$. This is known as the delta distribution or, in our case, the known-target scenario. Since the standard deviation is 0, every time a target is drawn it will be in the same spot. This means that the simulated trials are a faithful representation of the real problem, resulting in a very efficient search. Figure 6 shows how the MCTS compares to other methods under these conditions. It is interesting to see that the Lévy flight search default policy results in a more efficient search than the random walk policy. We hypothesize that this is because the LFS covers more of the search space and back-tracks across its path less.

Figure 6: Performance with a Delta Target Distribution, 95% Confidence Interval. Averaged over $10^{3}$ trials.

In section IV.1 we will prove that in this scenario the MCTS converges to the optimal path. Supporting data are shown in figure 7 with both the number of loops per step and the decision time as independent variables. For the decision time case (figure 7(a)) the random walk outperforms the Lévy flight search because it is faster at finding targets in moderately sized grids.

Figure 7: Effect of resource limits on MCTS with each default policy: (a) effect of decision time; (b) effect of loops. Each data point is the average over $10^{3}$ trials.

III.2 Special Case: Unknown Target

When $\sigma$ is large enough for the given search space the Gaussian distribution functions as a uniformly random distribution. This is the most difficult scenario for the searcher to find an efficient path because it knows nothing useful about the target's location. Nevertheless, for both LFS and random walk default policies the MCTS does outperform a basic LFS and a basic random walk. The supporting data is shown in figure 8.

Figure 8: Performance with a Uniform Target Distribution, 95% Confidence Interval over $10^{3}$ trials.

Additionally, we show in figure 9 that the MCTS with random walk default policy will converge to a nearly self-avoiding random walk as the size of the search grid gets larger, assuming the target is drawn from a uniformly random distribution. The nearly self-avoiding random walk (NSARW) is defined as a path where the searcher picks its move randomly but avoids previously visited spaces unless there are no other options. We provide a proof of this in section IV.2.

Figure 9: Convergence of MCTS to a Self-Avoiding Random Walk. Each point is the average of $10^{3}$ trials. The confidence interval is linearly approximated for values between data points.

IV Theoretical Results

Here we provide two proofs regarding the behavior of a searcher using the MCTS method to find a single target. The results are dependent on the knowledge of the target distribution and are supported by the data above.

IV.1 Convergence to the Optimal Path

Assumptions: We start with a finite search lattice of size $N\times N$. A target is placed randomly within it, at location $T=(T_{x},T_{y})$. The searcher starts from $(1,1)$. An optimal path is defined as a series of directional moves that results in the searcher reaching the target in as few time steps as possible. The optimal number of steps is calculated using the $\ell^{1}(\mathbb{T}^{2})$ metric on the torus,

$\|(1,1)-(T_{x},T_{y})\|_{\ell^{1}}=\begin{cases}|1-T_{x}|+|1-T_{y}| & \text{if } |1-T_{x}|\leq N/2,\ |1-T_{y}|\leq N/2,\\ N-|1-T_{x}|+|1-T_{y}| & \text{if } |1-T_{x}|>N/2,\ |1-T_{y}|\leq N/2,\\ |1-T_{x}|+N-|1-T_{y}| & \text{if } |1-T_{x}|\leq N/2,\ |1-T_{y}|>N/2,\\ 2N-|1-T_{x}|-|1-T_{y}| & \text{if } |1-T_{x}|>N/2,\ |1-T_{y}|>N/2,\end{cases}$

where the $\mathbb{T}^{2}$ is suppressed for the remainder of the paper. Finally, we assume that the chosen default policy has a finite expected stopping time $E[\tau_{D,N}]$ for finding the target starting $D$ steps away on an $N\times N$ grid, and that

$E[\tau_{D,N}]<E[\tau_{D+2,N}].$
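For reference, the torus $\ell^{1}$ distance above amounts to taking, in each coordinate, the shorter of the direct and wrap-around separations; a small Python sketch (names ours) is:

def torus_l1(a, b, N):
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, N - dx) + min(dy, N - dy)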

Theorem 1: For a target with the delta distribution, as the number of simulation loops goes to infinity, the algorithm will choose the move that leads the searcher in an optimal direction (i.e., the direction that minimizes the $\ell^{1}$ distance to the target).

Before proving the theorem, we use a lemma about the number of times the optimal and suboptimal choices are simulated.

Lemma 1: As $l\to\infty$, for all moves $i\in\{1,\ldots,4\}$ the number of times move $i$ has been played satisfies $T_{i}(l)\to\infty$.

Proof.

We proceed by contradiction. Assume at least one move (denote it $j$) is chosen a finite number of times, i.e., $T_{j}(l)$ is bounded. Thus we can find some constant $T$ such that for all $l$,

$T_{j}(l)\leq T.$

Based on the UCT method, the program will choose its next move by maximizing

$UCT(i)=Y_{i,T_{i}(l)}+c\sqrt{\frac{\log l}{T_{i}(l)}}.$

Using the assumption that move $j$ is bounded, at least one of the other moves must be unbounded, since the total number of simulations goes to infinity. Since $0\leq Y_{i,T_{i}(l)}\leq 1$, for any unbounded move $i$ we have

$UCT(i)\leq 1+c\sqrt{\frac{\log l}{T_{i}(l)}},$

while for the bounded move $j$,

$UCT(j)\geq c\sqrt{\frac{\log l}{T_{j}(l)}}\geq c\sqrt{\frac{\log l}{T}}.$

For the unbounded move $i$, we know that as $l\to\infty$, $T_{i}(l)\to\infty$, while $\log l$ is the same for all moves. We choose an $L$ such that for all $l\geq L$, $T_{i}(l)>T^{2}$.
Thus

$\frac{1}{T_{i}(l)}<\frac{1}{T^{2}}\leq\frac{1}{T}.$

Multiplying by $\log l$, which is the same for all arms $i$, we get

$\frac{\log l}{T_{i}(l)}\leq\frac{\log l}{T^{2}}\leq\frac{\log l}{T},\qquad\sqrt{\frac{\log l}{T_{i}(l)}}\leq\sqrt{\frac{\log l}{T^{2}}}\leq\sqrt{\frac{\log l}{T}}.$

Since $T$ is a constant and $\log l$ is monotonically increasing, we can choose $L^{\prime}>L$ such that for all $l\geq L^{\prime}$ we get

$c\sqrt{\log l}\left(\frac{\sqrt{T}-1}{T}\right)>1,\qquad c\sqrt{\log l}\left(\frac{1}{\sqrt{T}}-\frac{1}{\sqrt{T_{i}(l)}}\right)>1.$

Thus for all $l\geq\max\{L,L^{\prime}\}$ we conclude that

$UCT(i)\leq 1+c\sqrt{\frac{\log l}{T_{i}(l)}}<1+c\sqrt{\frac{\log l}{T^{2}}}<c\sqrt{\frac{\log l}{T}}\leq UCT(j).$

Therefore $UCT(i)<UCT(j)$ for every unbounded move $i$, so move $j$ will be chosen and played out. This contradicts the assumption that $T_{j}(l)$ is bounded, so as $l\to\infty$, $T_{j}(l)\to\infty$. The same logic applies to all other moves, so we conclude that for all $i\in\{1,\ldots,4\}$, $T_{i}(l)\to\infty$ as $l\to\infty$. ∎

Proof of Theorem 1.

Assume the searcher is at point $S=(S_{x},S_{y})$. For the searcher choosing between 4 possible directions ($1=$ up, $2=$ right, $3=$ down, $4=$ left), let $i\in\{1,\ldots,4\}$ represent a suboptimal choice and $i_{*}$ represent the optimal move. We define $Y_{i,T_{i}(l)}$ as the average reward for move $i$ after $l$ loops have been played out. Here, $T_{i}(l)$ is the number of times direction $i$ has been played after $l$ loops. Note that as $l\to\infty$, $T_{i}(l)\to\infty$ for all moves $i\in\{1,\ldots,4\}$ (Lemma 1). We calculate the average reward as

$Y_{i,T_{i}(l)}=\frac{1}{X_{i,T_{i}(l)}},$

where $X_{i,T_{i}(l)}$ represents the average step count, defined as

$X_{i,T_{i}(l)}=\frac{\tau_{1,D+1,N}+\tau_{2,D+1,N}+\cdots+\tau_{T_{i}(l),D+1,N}}{T_{i}(l)}.$

Each $\tau_{k,D,N}$ is a random variable that describes the number of steps a playout of the default policy (e.g., a random walk) takes from the searcher's initial location to the target; $D$ is the number of optimal steps away from the target and $N$ is the size of the grid. Assume the searcher is exactly $D$ steps away from the target before choosing its first move. That is,

$D=\|S-T\|_{\ell^{1}}.$

For a suboptimal move $i$, the searcher initially moves one step away from the target, increasing this distance to $D+1$, hence the sequence of $\tau_{k,D+1,N}$. For the optimal arm $i_{*}$, however, the searcher moves toward the target, decreasing the distance to $D-1$. Thus we define the average step count for $i_{*}$ after $l$ loops as

$X_{*,T_{*}(l)}=\frac{\tau_{1,D-1,N}+\tau_{2,D-1,N}+\cdots+\tau_{T_{*}(l),D-1,N}}{T_{*}(l)}.$

The sample average of playouts starting from the same distance converges to the expected value $E[\tau_{D,N}]$. Since farther starting distances take longer on average to reach the target, $E[\tau_{D-1,N}]<E[\tau_{D+1,N}]$. By the law of large numbers we can say that for a suboptimal move $i$,

$\lim_{l\to\infty}\frac{\tau_{1,D+1,N}+\tau_{2,D+1,N}+\cdots+\tau_{T_{i}(l),D+1,N}}{T_{i}(l)}=\lim_{l\to\infty}X_{i,T_{i}(l)}=E[\tau_{D+1,N}]$

with probability 1. Similarly, for the optimal move we have

$\lim_{l\to\infty}\frac{\tau_{1,D-1,N}+\tau_{2,D-1,N}+\cdots+\tau_{T_{*}(l),D-1,N}}{T_{*}(l)}=\lim_{l\to\infty}X_{*,T_{*}(l)}=E[\tau_{D-1,N}].$

Since the average reward is the inverse of the average step count, we substitute to get

$\lim_{l\to\infty}Y_{i,T_{i}(l)}=\frac{1}{E[\tau_{D+1,N}]},\qquad\lim_{l\to\infty}Y_{*,T_{*}(l)}=\frac{1}{E[\tau_{D-1,N}]}.$

Fix $\delta>0$. Then we can find $L$ such that for all $l\geq L$,

$\frac{1}{E[\tau_{D+1,N}]}-\delta\leq Y_{i,T_{i}(l)}\leq\frac{1}{E[\tau_{D+1,N}]}+\delta,\qquad\frac{1}{E[\tau_{D-1,N}]}-\delta\leq Y_{*,T_{*}(l)}\leq\frac{1}{E[\tau_{D-1,N}]}+\delta.$

For each real move of the searcher, the MCTS method chooses the direction with the best UCT value:

$I_{l}=\arg\max_{i\in\{1,\ldots,K\}}\{Y_{i,T_{i}(l-1)}+c_{l-1,T_{i}(l-1)}\},$

where $c_{t,s}$ is a bias sequence chosen to be

$c_{t,s}=\sqrt{\frac{2\ln t}{s}}.$

Fix $\epsilon>0$. We want to show that if $l$ is sufficiently large, $\mathds{P}(I_{l}\neq i_{*})\leq\epsilon$. For any suboptimal move $i$, let $p_{i,l}=\mathds{P}(Y_{i,T_{i}(l)}\geq Y_{*,T_{*}(l)})$. Substituting the inverse of the average reward gives $p_{i,l}=\mathds{P}(X_{*,T_{*}(l)}\geq X_{i,T_{i}(l)})$. Then $\mathds{P}(I_{l}\neq i_{*})\leq\sum_{i\neq i_{*}}p_{i,l}$, so it suffices to show that $p_{i,l}\leq\frac{\epsilon}{K}$ holds for all suboptimal arms when $l$ is large enough.
Using the limits above, choose $\delta\leq(E[\tau_{D+1,N}]-E[\tau_{D-1,N}])/2$. Then if $X_{*,T_{*}(l)}<E[\tau_{D-1,N}]+\delta$ and $X_{i,T_{i}(l)}>E[\tau_{D+1,N}]-\delta$, we have $X_{*,T_{*}(l)}<X_{i,T_{i}(l)}$. Therefore,

$p_{i,l}\leq\mathds{P}(X_{*,T_{*}(l)}\geq E[\tau_{D-1,N}]+\delta)+\mathds{P}(X_{i,T_{i}(l)}\leq E[\tau_{D+1,N}]-\delta).$

By the law of large numbers, as $n\to\infty$, $\mathds{P}(|E[X]-X_{n}|>\delta)\to 0$. Applying this to the inequality above gives

$\lim_{l\to\infty}p_{i,l}\leq\lim_{l\to\infty}\mathds{P}(X_{*,T_{*}(l)}-E[\tau_{D-1,N}]\geq\delta)+\lim_{l\to\infty}\mathds{P}(E[\tau_{D+1,N}]-X_{i,T_{i}(l)}\geq\delta)=0.$

Then, since $\lim_{l\to\infty}p_{i,l}=0$, we can choose an $L$ sufficiently large such that for all $l\geq L$,

$p_{i,l}\leq\frac{\epsilon}{K}.$ ∎

IV.2 Convergence to a Self-Avoiding Random Walk

Theorem 2: A Monte Carlo Tree Search that uses the UCT selection policy with a random walk default policy and a uniformly distributed target will converge to a nearly self-avoiding random walk as $M$, the size of the search lattice, goes to infinity.

Proof.

We want to show that the UCT algorithm, defined by

$UCT(v^{\prime})=\frac{Q(v^{\prime})}{N(v^{\prime})}+c\sqrt{\frac{\log(N(v))}{N(v^{\prime})}},$

chooses a child node $v^{\prime}$ randomly as the number of trials goes toward infinity, unless one direction has been picked more frequently than the others.

Let $T$ be the randomly selected target location and $S_{t}$ be the location of the searcher at time $t$. Assume that $S_{0}=(0,0)$. For all $\epsilon>0$ choose $K_{\epsilon}$ such that $\frac{1}{K_{\epsilon}}<\epsilon$ as the radius of a region $E$ around the origin; thus $K_{\epsilon}$ is the minimum number of steps required to exit this region. Then choose $M$, the dimension of the square search grid in which $T$ is placed randomly, such that $P(\mathrm{dist}(T,S_{0})>K_{\epsilon})=1-\delta$, where $\mathrm{dist}(x,y)$ is the minimum number of steps between $x$ and $y$ and $\delta$ is arbitrarily small. This is represented pictorially in figure 10.

Figure 10: Search grid given $\epsilon$.

Let $v^{\prime}_{i}$, $i\in\{1,2,3,4\}$, represent each of the four child nodes (up, down, left, and right) of the root node $v$. Then $Q(v^{\prime})$ represents the success rate of this node, where $Q(v^{\prime})=\frac{1}{k}$ and $k$ is the average number of steps over many trials. Recall that, with probability $1-\delta$, an optimal path requires at least $K_{\epsilon}$ steps, so on average $k\gg K_{\epsilon}$. This means that as the number of trials increases we get $Q(v^{\prime})\ll\frac{1}{K_{\epsilon}}<\epsilon$. Therefore as $\epsilon$ gets smaller, $Q(v^{\prime})$ approaches zero, and hence the first term of the UCT algorithm approaches zero.

The second term, $c\sqrt{\frac{\log(N(v))}{N(v^{\prime})}}$, is a relationship between the visit counts of the parent and child nodes. For the first four trials, each child node gets visited once (chosen in a random order as per MCTS protocol). It is important to note that after the first step is taken, the previous root node (now a child node) will have a very large visit count because it was included in all of the simulations from the past decision. Thus, after 4 trials, $N(v^{\prime}_{i})=1$ for all $i\in\{1,2,3,4\}$ except the direction from which the searcher came, which will be significantly higher (equal to the number of loops). Note that $N(v)$ is the same for all $v^{\prime}_{i}$ since this new root node's visit count increases after each and every trial.

Since $N(v^{\prime})$ has equal entries for all directions except the one from which the searcher came, and $N(v)$ is a constant, the UCT vector will have equal values for the three remaining moves. Thus the UCT program picks one of these three directions randomly, avoiding its previous path. Call this direction $v^{\prime}_{1}$. The next trial will be run using $v^{\prime}_{1}$ as the first move. Regardless of the outcome, we now have $N(v^{\prime}_{i})=1$ for all $i\neq 1$ (except the previously visited node), and $N(v^{\prime}_{1})=2$. Since the UCT value has $N(v^{\prime})$ in the denominator of the second term, the UCT value decreases for $v^{\prime}_{1}$, which has been visited twice. Therefore $\max(UCT)$ will randomly return one of the remaining least-visited directions (these values are all equal).

The process repeats until all of these nodes have been visited twice, then three times, and so on. Regardless of how long the program runs it will continue randomly cycling through the nodes, avoiding hitting one node more times than the others. At the end, the nodes will have equal visit counts, so the final move decision will be chosen uniformly at random.

Therefore the program will choose a random direction unless one or more of the directions have been chosen an unequal number of times, in which case the program will choose randomly between the options visited the minimum number of times. Thus the path becomes a nearly self-avoiding random walk. ∎

V Conclusion

In this paper we studied a search-and-detection game on a 2-D finite lattice with periodic boundaries. The target was chosen using a normal distribution with varying standard deviation $\sigma$. The searcher followed the Monte Carlo Tree Search algorithm with the UCT selection policy, with rewards based on the inverse of the time it takes the searcher to find the target. In the study, the standard deviation of the target was varied, and two default policies for the MCTS were studied: a random walk and a Lévy flight search.

From numerical simulations, we find the MCTS learns to find the target efficiently for all cases of $\sigma$, although there are diminishing returns for $\sigma>10$, which corresponds to 95% of the targets falling within a $20\times 20$ subsection of the $40\times 40$ grid. In the case where the target is known (drawn from the delta distribution), the searcher converges to the optimal path as the number of simulation loops (or decision time) grows. Interestingly, we found that between the two default policies, the Lévy flight search outperforms the random walk based on loops per decision, whereas the random walk default policy is better for a restricted decision time. This could be because the random walk is more efficient at searching the moderately sized grid ($40\times 40$) that we were using. We expect that a Lévy flight default policy would be optimal in both cases on larger grids.

Theoretically, we prove two theorems for this game. The first theorem shows that MCTS will converge to the optimal path under mild assumptions. The key to the proof is that the default policy must have a stopping time which, on average, is shorter the closer one starts to the target. The second theorem shows that for a uniformly selected target, i.e. no information given, as the grid grows large the MCTS with the random walk default policy converges to a nearly (or weakly) self-avoiding random walk. Intuitively this makes sense: the searcher only looks for the target in previously unvisited sites.


Acknowledgements: The work of S. Hottovy is partially supported by the National Science Foundation under Grant DMS-1815061. E. Kozak was supported as a student by the NSF grant. This project is also part of the Trident Scholar program at USNA and received support and feedback from the Trident committee, especially in the School of Math and Science.

References

  • [1] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002.
  • [2] Roland Bauerschmidt, Hugo Duminil-Copin, Jesse Goodman, and Gordon Slade. Lectures on self-avoiding walks. Probability and Statistical Physics in Two and More Dimensions (D. Ellwood, CM Newman, V. Sidoravicius, and W. Werner, eds.), Clay Mathematics Institute Proceedings, 15:395–476, 2012.
  • [3] Richard Bellman. Mathematical optimization techniques. Univ of California Press, 1963.
  • [4] Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games, 4(1):1–43, 2012.
  • [5] Vincenzo Fioriti, Fabio Fratichini, Stefano Chiesa, and Claudio Moriconi. Levy foraging in a dynamic environment–extending the levy search. International Journal of Advanced Robotic Systems, 12(7):98, 2015.
  • [6] Douglas W Gage. Randomized search strategies with imperfect sensors. In Mobile Robots VIII, volume 2058, pages 270–279. International Society for Optics and Photonics, 1994.
  • [7] Richard Kelly and David Churchill. Comparison of monte carlo tree search methods in the imperfect information card game cribbage.
  • [8] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on machine learning, pages 282–293. Springer, 2006.
  • [9] Levente Kocsis, Csaba Szepesvári, and Jan Willemson. Improved monte-carlo search. Univ. Tartu, Estonia, Tech. Rep, 1, 2006.
  • [10] Elana. Kozak. Trident-2021. https://github.com/ekozak7/Trident-2021, 2020.
  • [11] Jacek Mańdziuk. Mcts/uct in solving real-life problems. In Advances in Data Analysis with Computational Intelligence Methods, pages 277–292. Springer, 2018.
  • [12] G Roelofs. Monte carlo tree search in a modern board game framework. Research paper available at umimaas.nl, 2012.
  • [13] Kurt Egon Shuler. Stochastic processes in chemical physics, volume 30. John Wiley & Sons, 2009.
  • [14] GM Viswanathan, V Afanasyev, Sergey V Buldyrev, Shlomo Havlin, MGE Da Luz, EP Raposo, and H Eugene Stanley. Lévy flights in random searches. Physica A: Statistical Mechanics and its Applications, 282(1-2):1–12, 2000.