
Average-Case Analysis of Greedy Matching for D2D Resource Sharing

Shuqin Gao, Costas Courcoubetis, and Lingjie Duan
Engineering Systems and Design Pillar, Singapore University of Technology and Design
School of Data Science, The Chinese University of Hong Kong, Shenzhen
Abstract

Given the proximity of many wireless users and their diversity in consuming local resources (e.g., data-plans, computation and even energy resources), device-to-device (D2D) resource sharing is a promising approach towards realizing a sharing economy. In the resulting networked economy, $n$ users segment themselves into sellers and buyers that need to be efficiently matched locally. This paper adopts an easy-to-implement greedy matching algorithm that runs in a distributed fashion with only sub-linear $O(\log n)$ parallel complexity, which offers a great advantage compared to the optimal but computationally expensive centralized matching. But is it efficient compared to the optimal matching? Extensive simulations indicate that in a large number of practical cases the average loss is no more than $10\%$, a far better result than the $50\%$ loss bound in the worst case. However, there is no rigorous average-case analysis in the literature to back up such encouraging findings, which is a fundamental step towards supporting the practical use of greedy matching in D2D sharing. This paper is the first to present a rigorous average-case analysis for certain representative classes of graphs with random parameters, by proposing a new asymptotic methodology. For typical 2D grids with random matching weights we rigorously prove that our greedy algorithm performs better than $84.9\%$ of the optimal, while for typical Erdős-Rényi random graphs we prove a lower bound of $79\%$ when the graph is neither dense nor sparse. Finally, we use realistic data to show that our random graph models approximate well the D2D sharing networks encountered in practice.

I Introduction

Thanks to advances in wireless and smartphone technologies, mobile users in proximity can use local wireless links (e.g., short-range communications) to share local resources (e.g., data-plans, computation and energy resources). For instance, subscribed users who have leftover data-plans can set up personal/portable hotspots and share data connections with travelers who face high roaming fees or a data deficit [1, 2]. Besides sharing data-plans, mobile terminals with extra local memory can exchange requested files with each other in the vicinity [3]. Furthermore, a user in need of bandwidth for video streaming can seek neighboring users’ assistance to download and forward video segments via local links [4]. Given the large diversity across users in their levels of individual resource utilization, device-to-device (D2D) resource sharing is envisioned as a promising approach to pool resources and increase social welfare.

Some recent studies have modelled and guided D2D resource sharing in wireless networks (e.g., [1, 2, 4, 6, 3, 5]). As a node in the established D2D network graph, each mobile user can be a resource buyer or seller, depending on whether her local resource is sufficient or not. As in [6] and [3], depending on her location, each user can only connect to a subset of users in her neighborhood through wireless connections, and the available wireless links are modelled as edges in the network graph. Sharing between any two connected users brings a certain benefit to the pair, which is modeled as a non-negative weight on the corresponding edge.

All these works (e.g., [1, 2, 4, 6, 3, 5]) optimize resource allocation by matching buyers and sellers in a centralized manner that requires global information and strict coordination. Hence the developed approaches cannot scale well to scenarios involving a large number of users, due to the communication and computation overhead caused by the centralized nature of the proposed solutions. Carrying this argument further, the existing optimal weighted matching algorithms from the literature cannot be effectively used in the case of large user-defined networks due to their centralized nature and super-linear time complexity [7]. This motivates the need for distributed algorithms that exploit parallelism, have low computation complexity and achieve good average performance for practical parameter distributions.

In the broader literature on distributed algorithm design for matching many buyers and sellers in a large graph, a greedy matching algorithm of linear complexity is proposed in [8, 9] without requiring a central controller to gather all information. It simply selects at each step the edges with locally maximum weights and yields an approximation ratio of $1/2$ compared to the optimum. A parallel algorithm is further proposed in [10] to reduce complexity at the cost of an approximation ratio smaller than $1/2$. It should be noted that in the analysis of these algorithms, complexity and approximation ratio are always worst-case measures, but the worst-case approximation ratio rarely occurs in practice. Overall, the literature lacks an average-case analysis of greedy matching that compares its average performance with the optimal matching.

Since worst-case bounds no longer work for average-case analysis, we develop totally new techniques to analyze average performance. These techniques become more accurate when taking into account the structure of the network graph, and provide a very positive assessment of the greedy matching’s average performance, which is far from the worst case. Since the greedy matching can be naturally implemented in parallel by each node in the network, we also prove that with high probability (w.h.p.) the algorithm has sub-linear parallel complexity $O(\log n)$ in the number of users $n$. Our main contributions and key novelty are summarized as follows.

  • New average-case analysis of greedy matching in 2D grid networks with random weights: To provide rigorous results we first study the average performance of the greedy matching in the representative case of a 2D grid network with random weights. In this case, dynamic programming cannot be directly applied since matching users does not divide the grid into sub-grids. We introduce a new asymptotic analysis method to prove that our greedy matching’s average performance ratio compared to the average of the optimal matching is at least $84.9\%$. If the greedy algorithm is allowed to run in parallel (as expected in practice), we prove that it has only sub-linear complexity $O(\log n)$ w.h.p. in 2D grids. Thus, our algorithm provides a great implementation advantage compared to the optimal but computationally expensive centralized matching.

  • New average-case analysis of greedy matching in random networks: We develop a new theoretical technique to analyze large Erdős-Rényi random graphs $G(n,p)$, where each of $n$ users connects to any other user with probability $p$. For a dense random graph with constant $p$, we prove that the greedy matching achieves an average performance ratio that tends to $100\%$ as $n$ increases. The analysis of sparse graphs is more challenging, yet we equivalently reduce it to the asymptotic analysis of random trees. By exploiting the recursive nature of trees, we obtain rigorous average performance ratio bounds and parallel complexity $O(\log n)$ (w.h.p.) when $p<1/n$. We also prove that the average performance ratio reaches its minimum (still above $79\%$) when the graph is neither dense nor sparse.

  • Application to practical scenarios: We conduct experiments using real data for mobile user locations to simulate realistic D2D networks under given constraints on the maximum allowed communication distance between devices. We show that our analytical $G(n,p)$ performance measure approximates well many practical cases of such D2D sharing networks. To decide the maximum D2D sharing range among users, we take into account D2D communication failures due to path-loss and mutual interference among matched pairs. The optimal sharing range balances the tradeoff between transacting with more candidate devices and the higher risk that the chosen best neighbor might not be effectively usable due to a communication failure.

The paper is organized as follows. In Section II, we present our network model and the greedy matching algorithm for solving the D2D resource sharing problem in any network graph. In Section III, we analyze the average performance ratio of the algorithm in 2D grids. Sections IV and V extend the average-case analysis to random graphs and practical scenarios. Section VI concludes the paper.

II System Model and Preliminaries

II-A System Model for D2D Resource Sharing

We first describe our D2D resource sharing model that involves a large number of potential users to share resources with each other via local wireless links (e.g., short-range communications). In this model sharing takes place in repeated time intervals (or ‘rounds’). In each interval, we first run an algorithm to determine the pairs of users that are willing to exchange resources during the given round, and then realize the actual sharing of the corresponding resources as determined by the algorithm. The set of participating users and the available D2D links may be different for different rounds.

In each round, we form the ‘network graph’ $G=(U,E)$, where $U$ is the set of nodes corresponding to users that are participating in the given round, and $E$ is the set of D2D links between participating users that are feasible to establish with some minimum level of performance (e.g., signal strength, actual physical distance, etc., depending on the application). For each user $u_{i}\in U=\{u_{1},u_{2},\cdots,u_{n}\}$, the subset $A(u_{i})\subseteq U$ denotes the set of her neighbors in $G$, i.e., there is a feasible D2D link $e_{ij}\in E$ between $u_{i}$ and any $u_{j}\in A(u_{i})$. Note that different definitions of ‘feasibility’ for D2D links will imply a different set of edges $E$ between the users in $U$. Also the set $U$ changes over time since new users may join the sharing economy and existing users may drop out after satisfying their needs or moving out of range.

With each edge $e_{ij}\in E$ there is an associated weight $w_{ij}\geq 0$ that models the (net) sharing benefit between this pair of users, if these users eventually transact with each other (i.e., are ‘matched’ in our terminology), converted to some monetary basis (say \$). Let $W=\{w_{ij}\}$ be the weight vector over all edges of $G$. Note that our model is very flexible and can fit various applications of D2D resource sharing by allowing for different ways to define the values $w_{ij}$. For example, in a secondary data-plan trading market [1, 2], user $u_{i}$ with data-plan surplus shares her personal hotspot connection with neighboring user $u_{j}$ with high roaming fee, and weight $w_{ij}$ models the difference between user $u_{j}$’s saved roaming fee and the sharing cost (e.g., energy consumption in battery) of user $u_{i}$. In another example of cooperative video streaming [4], user $u_{i}$ seeks user $u_{j}$’s assistance to download video segments for local sharing, and $w_{ij}$ becomes the difference between the QoE improvement of user $u_{i}$ and the download cost of user $u_{j}$.

In any given round, our sharing model corresponds to an instance of a random weighted graph $(G=(U,E),W)$. A simple interpretation of the model is that a typical user, when participating, corresponds to a random node in $G$. In particular, we do not care about the actual identity of the participating users (after all, we care about the total value generated in the economy, summed over all participants). To simplify the model, we assume certain i.i.d. properties for the resulting stochastic process, i.e., in each round the set $U$ and the corresponding $E$, $W$ are i.i.d. with certain distributions. In particular, we assume that the weights $w_{ij}$ take values from a finite discrete set $V=\{v_{1},v_{2},\cdots,v_{K}\}$ according to the general probability distribution $Pr(w_{ij}=v_{k})=p_{k}$ with $\sum_{k=1}^{K}p_{k}=1$. (We can similarly extend our average-case analysis to continuous weight distributions, though this is unlikely to have practical interest.) Without loss of generality, we assume $0\leq v_{1}<v_{2}<\cdots<v_{K}$. A small-scale illustrative instance of the D2D resource sharing model is shown spatially in Fig. 1, which can be abstracted to a weighted graph $(G=(U=\{u_{1},u_{2},\cdots,u_{7}\},E=\{e_{12},e_{17},e_{23},e_{25},e_{36},e_{45},e_{47}\}),W=\{w_{12},w_{17},w_{23},w_{25},w_{36},w_{45},w_{47}\})$.

Figure 1: An illustrative instance of the D2D resource sharing model with $n=7$ users, captured spatially.

In typical practices of D2D sharing (e.g., energy transfer), a user is only matched to a single neighbor (if any) to finally transact with. (Allowing more concurrent matchings might not greatly improve performance, since in practice most of the total benefit is usually obtained from one among the possible matchings, e.g., assuming a Pareto distribution on the values of the possible matchings. Furthermore, obtaining the resources from a single matching decreases the marginal benefit of resources from additional matchings.) Keeping this simple but practical case of a single matching per user, given a weighted graph $(G=(U,E),W)$, we would like to match the most valuable (highest $w_{ij}$) pairs of users to maximize the total sharing benefit (i.e., the ‘social welfare’). Assuming full and globally available information on $G$ and $W$, we formulate the social welfare maximization problem as a maximum weighted matching problem:

$$\mathcal{P}_{1}:\quad\max\quad\sum_{e_{ij}\in E}w_{ij}x_{ij},\qquad\text{(1a)}$$
$$\text{s.t.}\quad\sum_{u_{j}\in A(u_{i})}x_{ij}\leq 1,\quad\forall u_{i}\in U,\qquad\text{(1b)}$$
$$x_{ij}\in\{0,1\},\quad\forall e_{ij}\in E,\qquad\text{(1c)}$$

where $x_{ij}$ is the binary optimization variable denoting whether edge $e_{ij}$ is included in the final matching ($x_{ij}=1$) or not ($x_{ij}=0$). Constraint (1b) states that any user $u_{i}$ can be matched to at most one user in her set of neighbors $A(u_{i})$.
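To make the formulation concrete, the following minimal sketch solves a tiny instance of $\mathcal{P}_{1}$ by brute force; the topology follows Fig. 1, while the weight values are hypothetical.

```python
from itertools import combinations

def max_weight_matching(edges):
    """Brute-force solver for P1: try every subset of edges and keep the
    heaviest one in which each user appears at most once (constraint 1b)."""
    best_value, best_matching = 0.0, []
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            nodes = [u for (i, j, _) in subset for u in (i, j)]
            if len(nodes) == len(set(nodes)):  # no user matched twice
                value = sum(w for (_, _, w) in subset)
                if value > best_value:
                    best_value, best_matching = value, list(subset)
    return best_value, best_matching

# Toy instance: the edge set of Fig. 1 with hypothetical weights.
edges = [(1, 2, 3.0), (1, 7, 1.0), (2, 3, 2.0), (2, 5, 4.0),
         (3, 6, 2.5), (4, 5, 1.5), (4, 7, 2.0)]
value, matching = max_weight_matching(edges)
```

The enumeration is exponential in $|E|$, which is exactly why the distributed greedy alternative of Section II-B is attractive for large $n$.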

II-B Preliminaries of Greedy Algorithm

According to [7], to optimally solve the maximum weighted matching problem $\mathcal{P}_{1}$, one needs to centrally gather the weight and graph connectivity information beforehand. Further, searching over all possible matchings results in super-linear computation complexity, which is formidably high for a network with a large number $n$ of users. Alternatively, the greedy matching addresses these two issues by keeping information local and allowing the algorithm to be performed in a distributed fashion. Algorithm 1 outlines the key steps of the greedy matching algorithm (see the intuition in the text that follows).

Algorithm 1: Greedy matching algorithm for solving problem $\mathcal{P}_{1}$ for the graph $(G=(U,E),W)$.

Initialization: $U^{\prime}=U$; $A^{\prime}(u_{i})=A(u_{i}),\forall u_{i}\in U$; $x_{ij}=0,\forall e_{ij}\in E$.

In each iteration, repeat the following two phases:

Proposal phase:

For each unmatched user $u_{i}\in U^{\prime}$:

  • User $u_{i}$ selects a user $u_{j^{*}}$ among her unmatched neighbors in $A^{\prime}(u_{i})$ with the maximum weight $w_{ij^{*}}$.

  • User $u_{i}$ sends $u_{j^{*}}$ a matching proposal.

Matching phase:

For each user pair $(u_{i},u_{j})$ such that $u_{i}$ and $u_{j}$ receive proposals from each other:

  • Match $u_{i}$ and $u_{j}$ by updating $x_{ij}=1$ and $U^{\prime}=U^{\prime}\setminus\{u_{i},u_{j}\}$.

  • Make $u_{i}$ and $u_{j}$ unavailable for matching with others, by updating $A^{\prime}(u_{k})=A^{\prime}(u_{k})\setminus\{u_{i}\}$ for any $u_{k}\in A^{\prime}(u_{i})$, and similarly for $u_{j}$.

Algorithm 1 is randomized (it randomizes the selection of the preferred neighbor in case there are multiple equally good choices in the proposal phase; a way to simplify this and make the algorithm deterministic is to assume that nodes are assigned unique IDs and that a node breaks ties in favor of the neighbor with the highest ID value, which avoids loops and guarantees termination in $O(|E|)$ steps; in the rest of the paper we assume this deterministic version of Algorithm 1) and can be implemented distributedly: at each time, each user uses local information to choose the unmatched neighbor with the highest weight as her potential matching partner; she stops once this preference becomes reciprocal, or once there are no unmatched neighbors left. This algorithm computes a matching with total weight at least $1/2$ of the optimum (see [9]). This worst-case approximation ratio of $1/2$ is achieved in a three-edge instance where the middle edge has slightly larger weight than its two adjacent edges, since the greedy matching chooses the middle edge while the optimal matching chooses the two side edges.
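The two phases above can be sketched compactly in Python. This is a minimal sketch of the deterministic variant (ties broken towards the higher neighbor ID); the graph encoding and the toy weights are our own illustrative choices.

```python
def greedy_matching(n, edges):
    """Algorithm 1 with the deterministic tie-break: in the proposal phase each
    unmatched user points at her unmatched neighbor maximizing (weight, ID);
    in the matching phase, reciprocal proposals are matched and removed."""
    w = {frozenset((i, j)): wt for i, j, wt in edges}
    nbrs = {u: set() for u in range(1, n + 1)}
    for i, j, _ in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    matching = []
    while True:
        active = [u for u in nbrs if nbrs[u]]  # unmatched users with unmatched neighbors
        if not active:
            break
        matched_before = len(matching)
        # Proposal phase.
        proposal = {u: max(nbrs[u], key=lambda v: (w[frozenset((u, v))], v))
                    for u in active}
        # Matching phase: match the reciprocal pairs.
        for u, v in list(proposal.items()):
            if u < v and proposal.get(v) == u:
                matching.append((u, v, w[frozenset((u, v))]))
                for x in (u, v):               # remove both users from the graph
                    for y in nbrs[x]:
                        nbrs[y].discard(x)
                    nbrs[x] = set()
        if len(matching) == matched_before:
            break  # safety guard; with the ID tie-break a pair matches each round
    return matching

# Same toy topology as Fig. 1, with hypothetical weights.
edges = [(1, 2, 3.0), (1, 7, 1.0), (2, 3, 2.0), (2, 5, 4.0),
         (3, 6, 2.5), (4, 5, 1.5), (4, 7, 2.0)]
total = sum(wt for _, _, wt in greedy_matching(7, edges))
```

On this particular instance the greedy matching happens to attain the optimum; Section II-C quantifies how close the two stay on average.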

II-C Our Problem Statement for Average-Case Analysis

Though the approximation ratio of Algorithm 1 is $1/2$, i.e., half the efficiency is lost in the worst case, this ratio is achieved in the three-edge instance only when the middle edge has slightly larger weight than its two adjacent edges. In a large network instance, given the i.i.d. assumption regarding the choice of the weights, it is improbable that the graph will consist of an infinite repetition of the above special weighted three-edge pattern which leads to the worst-case performance. Hence, we expect the average performance ratio of the greedy matching to be well above $1/2$. We next provide the necessary definitions for our average performance analysis.

By taking the expectation with respect to the weights in $W$, which are i.i.d. with a general discrete distribution $Pr(w_{ij}=v_{k})=p_{k},\forall k=1,\ldots,K$, we define the average performance ratio $PR(G)$ of Algorithm 1 for a given graph $G$ as follows:

$$PR(G)=\frac{\mathbb{E}_{W}[\hat{f}(G,W)]}{\mathbb{E}_{W}[f^{\star}(G,W)]}=\frac{\mathbb{E}_{W}[\sum_{e_{ij}\in E}w_{ij}\hat{x}_{ij}]}{\mathbb{E}_{W}[\sum_{e_{ij}\in E}w_{ij}x^{\star}_{ij}]},\qquad(2)$$

where $f^{\star}(G,W)$ and $\hat{f}(G,W)$ denote the total weights (i.e., social welfare) under the optimal matching and the greedy matching, respectively, with $\{x^{\star}_{ij}\},\{\hat{x}_{ij}\}$ being the corresponding matchings. Since over time the algorithm is repeated for new instances, the numerator and denominator correspond to the time-averages of the social welfare obtained by running the greedy and the optimal algorithms, respectively.

We next evaluate the performance ratio for several special forms of practical interest for $G$ that corroborate the excellent performance of the greedy matching, including 2D grids and the more general random graph $G(n,p)$ networks. In the case of random graphs, we must take the expectation in (2) over both $G$ and $W$. We will also prove the sub-linear computation complexity of running Algorithm 1 on these networks.

III Average-Case Analysis for D2D Sharing in 2D Grid Networks

In wireless networks, 2D grids are widely used to model social mobility of users (e.g., [11, 12]). In this section, we analyze the average performance ratio and the parallel complexity of Algorithm 1 to validate its performance on planar user connectivity graphs. Note that the average-case analysis of 2D grids is an important benchmark for the more general random graphs analyzed in the following sections.

III-A Average Performance Analysis of Optimal Matching

It is impossible to obtain the exact value of the average total weight $\mathbb{E}_{W}[f^{\star}(G,W)]$ under the optimal matching due to the exponential number of possible matchings. Instead, we propose a method to compute an upper bound for the denominator $\mathbb{E}_{W}[f^{\star}(G,W)]$ in (2) that holds for general graphs. This upper bound will later be used to derive a lower bound for the average performance ratio $PR(G)$ in (2).

In any graph $G=(U,E)$, each matched edge $e_{ij}\in E$ adds value $w_{ij}$ to the final matching. Equivalently, we can think of it as providing the individual users $u_{i}$ and $u_{j}$ with equal benefit $w_{ij}/2$. For any user $u_{i}$, this individual benefit does not exceed half of the maximum weight of her neighboring edges. Using this idea and summing over all users, the total weight of the optimal matching is upper bounded by

$$f^{\star}(G,W)\leq\frac{1}{2}\sum_{u_{i}\in U}\max_{u_{j}\in A(u_{i})}w_{ij}.\qquad(3)$$

By taking expectation over the weight distribution, we obtain the closed-form upper bound of the average total weight.

Proposition 1

For a general graph $G=(U,E)$ with the weight set $V=\{v_{1},v_{2},\cdots,v_{K}\}$ and the weight distribution $P=\{p_{1},p_{2},\cdots,p_{K}\}$, the average total weight of the optimal matching is upper bounded by

$$\mathbb{E}_{W}[f^{\star}(G,W)]\leq\frac{1}{2}\sum_{u_{i}\in U}\sum_{k=1}^{K}v_{k}\left(\Big(\sum_{l=1}^{k}p_{l}\Big)^{|A(u_{i})|}-\Big(\sum_{l=1}^{k-1}p_{l}\Big)^{|A(u_{i})|}\right),\qquad(4)$$

where $|A(u_{i})|$ is the cardinality of $A(u_{i})$.

The proof is given in Appendix A.
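The right-hand side of (4) sums, over users, half the expected maximum of $|A(u_{i})|$ i.i.d. edge weights. The following small sketch evaluates it (our own illustration; the degree list and distribution are hypothetical):

```python
from itertools import accumulate

def optimal_upper_bound(degrees, values, probs):
    """Evaluate the right-hand side of (4): for each user u_i of degree
    deg = |A(u_i)|, add v_k/2 times the probability that the maximum of
    deg i.i.d. weights equals v_k, i.e., F(v_k)^deg - F(v_{k-1})^deg."""
    cdf = [0.0] + list(accumulate(probs))          # cdf[k] = p_1 + ... + p_k
    bound = 0.0
    for deg in degrees:
        for k in range(1, len(values) + 1):
            bound += 0.5 * values[k - 1] * (cdf[k] ** deg - cdf[k - 1] ** deg)
    return bound

# 1x3 linear grid (degrees 1, 2, 1) with V = {1, 2} and uniform weights.
bound = optimal_upper_bound([1, 2, 1], [1.0, 2.0], [0.5, 0.5])
```

For this 3-user path the exact optimum is $\mathbb{E}[\max(w_{1},w_{2})]=1.75$ while the bound is $2.375$, so the bound is loose by design; it only needs to be tight enough to certify the ratio bounds that follow.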

III-B Average Performance Ratio of Algorithm 1

Figure 2: Given the weight set size $K=2$ and $v_{1}<v_{2}$, an edge that certainly matches is always found among the first $2$ edges, as marked in red. There are in total $K^{K}=4$ weight combination cases for the first $2$ edges. In each case, after matching the red edge, the remaining graph is still linear but of smaller size $n-2$ or $n-3$.

We next turn to the calculation of the average total weight of the greedy matching, i.e., the numerator $\mathbb{E}_{W}[\hat{f}(G,W)]$ in (2). In the case of 2D grid networks we cannot directly use dynamic programming since matching users does not divide the grid into sub-grids. Instead, we start by considering the simplest $1\times n$ grid where each user $u_{i}$ locally connects with the two adjacent users $u_{i-1}$ and $u_{i+1}$ (except for the starting and ending users $u_{1}$ and $u_{n}$). In such linear networks, for notational simplicity we use $e_{i}$ instead of $e_{i,i+1}$ to denote the connection between users $u_{i}$ and $u_{i+1}$, and similarly use weight $w_{i}$ instead of $w_{i,i+1}$. Without loss of generality, we assume that a user facing the same weights on her two adjacent edges assigns higher priority to matching with the left-hand-side neighbor. This makes Algorithm 1 deterministic, returning a unique solution.

We prove that given the weight set size $K$, an edge $e_{i}$ that has the locally maximum weight (i.e., $w_{i}>w_{i-1}$, $w_{i}\geq w_{i+1}$) and will certainly be matched by Algorithm 1 can be found within the first $K$ edges of the linear network graph. Further, by considering all the $K^{K}$ weight combinations $\{w_{1},\cdots,w_{K}\}$ of the first $K$ edges and the existence of an edge $e_{i}$ that will certainly be matched, we derive a recursive formula for $\{a_{n}\}$, where $a_{n}$ denotes the average total weight of the greedy matching in the linear graph with $n$ users. An illustrative example for $K=2$ is shown in Fig. 2, and the recursive formula averaged with respect to the probabilities is given by

$$a_{n}=p_{1}^{2}(v_{1}+a_{n-2})+p_{2}(v_{2}+a_{n-2})+p_{1}p_{2}(v_{2}+a_{n-3})$$
$$\;\;=p_{1}^{2}v_{1}+(p_{2}+p_{1}p_{2})v_{2}+(p_{2}+p_{1}^{2})a_{n-2}+p_{1}p_{2}a_{n-3}.\qquad(5)$$

Based on this, we use asymptotic analysis to derive $a_{n}=\frac{p_{1}^{2}v_{1}+(p_{2}+p_{1}p_{2})v_{2}}{2p_{2}+2p_{1}^{2}+3p_{1}p_{2}}n+o(n)$ for large $n$. Similarly, for an arbitrary $K\geq 3$, we can obtain the general formula for the sequence $\{a_{n}\}$ as a function of $V=\{v_{1},v_{2},\cdots,v_{K}\}$ and $P=\{p_{1},p_{2},\cdots,p_{K}\}$.
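Recursion (5) is easy to check numerically. The sketch below iterates it for $K=2$; the base cases $a_{0}=a_{1}=0$, $a_{2}=\mathbb{E}[w]$ and $a_{3}=\mathbb{E}[\max(w_{1},w_{2})]$ are our own derivation for the deterministic left-priority rule.

```python
def linear_grid_greedy_avg(n, v=(1.0, 2.0), p=(0.5, 0.5)):
    """Average greedy matching weight a_n on a 1xn grid for K=2, via (5).
    Base cases: a_0 = a_1 = 0 (no edge), a_2 = E[w] (the only edge matches),
    a_3 = E[max(w_1, w_2)] (greedy takes the heavier of the two edges)."""
    p1, p2 = p
    v1, v2 = v
    a = [0.0, 0.0, p1 * v1 + p2 * v2, p1 ** 2 * v1 + (1 - p1 ** 2) * v2]
    for m in range(4, n + 1):
        a.append(p1 ** 2 * (v1 + a[m - 2]) + p2 * (v2 + a[m - 2])
                 + p1 * p2 * (v2 + a[m - 3]))
    return a[n]

# Closed-form slope from the text for uniform weights on {1, 2}: equals 7/9.
slope = (0.25 * 1.0 + 0.75 * 2.0) / (2 * 0.5 + 2 * 0.25 + 3 * 0.25)
```

For uniform weights on $\{1,2\}$ the slope is $7/9\approx 0.778$, and $a_{n}/n$ approaches it quickly (the other characteristic roots have modulus below one, so the $o(n)$ term is in fact bounded).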

Figure 3: Illustration of the graph decomposition process in three steps for analyzing greedy matching. Algorithm 1 adds the red-colored edges to the greedy matching in each step.

Note that a simple extension of the previous result to the $n\times n$ grid by dividing it into $n$ sub-grids of size $1\times n$ provides a bad lower bound, because all the vertical edges become unavailable for matching. Instead, we split the grid network into sub-grids in a way that keeps half of the vertical edges, and then estimate a lower bound on the average total weight. For illustration purposes, we treat the case of weight set size $K=2$ with $v_{1}<v_{2}$. Our procedure involves the following three steps (as illustrated in Fig. 3), and it is easy for the reader to generalize to $K>2$.

Step 1: Split the $n\times n$ grid into $n/2$ sub-grids of size $2\times n$ by eliminating the corresponding edges.

Step 2: For each $2\times n$ sub-grid from step 1, greedily add the vertical edges of (largest) weight value $v_{2}$ to the matching and remove the corresponding users. This further disconnects the given sub-grid into smaller sub-grids of size $2\times t$, for appropriate values of $t$. Note that each such sub-grid only has vertical edges of (smaller) weight $v_{1}$.

Step 3: For each sub-grid of size $2\times t$ from step 2, if $t=1$, simply add this single edge to the matching. If $t>1$, greedily match the horizontal edges in the corresponding upper and lower linear networks first and remove the corresponding users; then match the remaining vertical edges of smaller weight $v_{1}$.

The above procedure allows us to find a lower bound on the matching obtained by Algorithm 1. This is obtained by i) eliminating from the set of edges that may be matched the vertical edges removed in step 1 (see Fig. 3), and ii) analyzing the greedy matching of the two horizontal linear networks in the resulting $2\times t$ sub-grids (using a similar analysis as in (5)), while leaving out the possible matching of the remaining vertical edges in step 3. By combining this lower bound for the performance of Algorithm 1 with the upper bound for the optimal matching in (9), we obtain the following bound for the average performance ratio.

Proposition 2

In $n\times n$ grids with the weight set $V=\{v_{1}=1,v_{2}=2\}$ and uniform weight distribution $p_{1}=p_{2}=\frac{1}{2}$, the average performance ratio of Algorithm 1 satisfies $PR(G)\geq 84.9\%$ as $n\rightarrow\infty$.

The proof is given in Appendix B. Observe that here we consider the case of weight set $V=\{1,2\}$ (‘low’ and ‘high’) and uniform weight distribution $P=\{\frac{1}{2},\frac{1}{2}\}$ for illustration purposes. The analysis can be extended to any possible $V$ and $P$.

III-C Parallel Complexity Analysis of Algorithm 1

In this subsection, we analyze the parallel running time of Algorithm 1, where a unit of time corresponds to one iteration of the steps of Algorithm 1. As before, we start with the simplest $1\times n$ grid. Let $H(u_{i})$ denote the length of the longest chain (sequence of edges) that has non-decreasing weights and starts from $u_{i}$ towards the left or right side. Suppose that $w_{i-1}\leq w_{i-2}\leq\cdots\leq w_{i-H(u_{i})+1}\leq w_{i-H(u_{i})}>w_{i-H(u_{i})-1}$ is such a longest chain. We claim that $u_{i}$ will terminate running Algorithm 1 (i.e., by being matched or by learning that there are no unmatched neighbors) within $H(u_{i})/2$ time. This is easy to see since, starting from time 0, the edge $e_{i-H(u_{i})}$ will be included in the total matching in iteration 1, $e_{i-H(u_{i})+2}$ in iteration 2, etc. Hence, in fewer than $H(u_{i})/2$ steps, all neighbors of $u_{i}$ will have resolved their possible preferences towards users other than $u_{i}$, and subsequently $u_{i}$ will either be matched with one of her neighbors or be left with an empty unmatched neighbor set. As Algorithm 1 terminates when all users have made their final decision, if the probability of any user in $G$ having a chain longer than $c\log n$ (i.e., $\max_{u_{i}\in U}H(u_{i})>c\log n$) for some constant $c$ is very small, then the parallel execution of Algorithm 1 terminates within $O(\log n)$ time with very high probability.
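The argument above reduces the parallel running time to the longest monotone chain of edge weights. A small simulation sketch (variable names and the choice $K=2$ are our own) computes $\max_{i}H(u_{i})$ as the longest run of consecutive edge weights that is non-decreasing in one of the two reading directions, which stays near $\log n$ for large $n$:

```python
import math
import random

def max_chain_length(w):
    """Longest run of consecutive edge weights that is monotone (non-decreasing
    in one of the two reading directions); equals max_i H(u_i) on a 1xn grid."""
    best = run_up = run_down = 1
    for a, b in zip(w, w[1:]):
        run_up = run_up + 1 if b >= a else 1       # chain extending rightwards
        run_down = run_down + 1 if b <= a else 1   # chain extending leftwards
        best = max(best, run_up, run_down)
    return best

rng = random.Random(0)
n = 100_000
w = [rng.choice([1, 2]) for _ in range(n - 1)]     # K = 2, uniform weights
print(max_chain_length(w), math.log2(n))           # chain length vs. log2(n)
```

For binary weights a non-decreasing run of length $L$ at a fixed position has probability $(L+1)/2^{L}$, so the maximum over $n$ positions concentrates around $\log_{2}n$, consistent with the $O(\log n)$ claim.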

Then, we extend our analysis to the $n\times n$ grid. Note that the number of possible chains that start from any given node $u_{i}$ and have non-decreasing weights is no longer two (towards the left or right) as in $1\times n$ grid networks, but exponential in the length of the chain (since from each node there are $4-1=3$ ‘out’ ways for the chain to continue), and such chains now form with non-negligible probability. This is not an issue for Algorithm 1 since every node uses priorities to break ties among neighbors whose edges have the same weight. This significantly reduces the number of possible chains that are relevant to a user’s decisions, and we can prove the following proposition.

Proposition 3

In $n\times n$ grids, Algorithm 1 runs in $O(\log n)$ time w.h.p..

The proof is given in Appendix C. In conclusion, our distributed matching algorithm has low complexity and provides a great implementation advantage compared to the optimal but computationally expensive centralized matching.

IV Average-Case Analysis for D2D Sharing in G(n,p)G(n,p) Networks

In practice, a mobile user may encounter a random number of neighbors. In this section, we extend our analysis to random networks $G(n,p)$, where any two of the $n$ users connect with each other with probability $p$ and hence each user has on average $d=np$ neighbors. Although the actual spatial distribution of users is not necessarily planar, such random graphs can still represent their connectivity on the ground and the analysis still holds.

We study the average performance ratio of Algorithm 1 in the cases of dense random graphs with a constant $p$ (i.e., dense since $d=np$ increases linearly in $n$) [13], and sparse random graphs with a constant average neighbor number $d<1$ (i.e., $p<1/n$) [14]. Unlike the 2D grid networks, the structure of the random network $G(n,p)$ is no longer fixed due to the random connectivity. Though it is technically more difficult to analyze the average performance of Algorithm 1 for random graph structures, we are able to derive the ratio using statistical analysis in the two important cases below. For intermediate values of $d$ where our techniques cannot be applied, we have used exhaustive sets of simulations.

IV-A Average-Case Analysis of Dense Random Graphs

Given that $p$ remains constant, as $n$ increases, each user will have an increasing number of neighbors with the largest possible weight value $v_{K}$. Since such edges are preferred by the greedy matching, as $n$ goes to infinity, the greedy matching will almost surely provide the highest possible total matching value of $nv_{K}/2$ ($n/2$ pairs of users with weight value $v_{K}$).

Proposition 4

For a random graph $G(n,p)$ with a constant $p$, the average performance ratio of Algorithm 1 satisfies $PR=100\%$ w.h.p..

The proof is given in Appendix D. In this result, in the definition of the average performance ratio PR we have taken the expectation over both G and W. Note that the computational complexity is no longer O(\log n) in this case, due to the increasing graph density; an obvious bound is O(|E|), proved in [9].
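To make the greedy benchmark concrete, the matching produced by the greedy rule can be sketched as a sequential scan of edges in non-increasing weight order. This is our own minimal illustration, not the paper's pseudocode: Algorithm 1 is a distributed implementation of the same rule.

```python
def greedy_matching(edges):
    """Greedy matching: scan edges in non-increasing weight order and add an
    edge whenever both endpoints are still unmatched. Returns the total
    matched weight and the set of matched nodes. This is the classic
    1/2-approximation whose average case the paper analyzes."""
    matched, total = set(), 0.0
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched.update((u, v))
            total += w
    return total, matched

# On a 3-edge path the heaviest middle edge blocks both ends: greedy gets 3,
# the optimal matching gets 4, illustrating the worst-case-style loss that
# the average-case analysis shows to be rare.
total, matched = greedy_matching([(0, 1, 2), (1, 2, 3), (2, 3, 2)])
```

On a dense G(n,p) instance with weights drawn from \{v_1,\cdots,v_K\}, dividing the returned total by nv_K/2 gives an empirical version of the ratio in Proposition 4.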

IV-B Average-Case Analysis of Sparse Random Graphs

In this subsection, we consider the connection probability p=d/n, so each user has a constant average number of neighbors, d(n-1)/n\to d, as n becomes large. We first prove low parallel complexity for Algorithm 1 as long as each user has a small enough number of neighbors to pair with, where the threshold depends on the distribution of the edge weights.

Proposition 5

For G(n,d/n) networks, Algorithm 1 runs in O(\log n) time w.h.p. if d&lt;2/\max\{p_{1},p_{2},\cdots,p_{K}\}.

The proof is given in Appendix E. Note that this condition is always satisfied when d&lt;1, since the weight probability p_{k}\leq 1 for every k (so 2/\max_{k}p_{k}\geq 2).

Next, we focus on the average performance ratio PR for sparse random graphs G(n,d/n). The average total weight of the optimal matching can be upper bounded by (9), which holds for any graph. It therefore remains to study the average total weight \mathbb{E}_{G\sim G(n,d/n),W}[\hat{f}(G,W)] of the greedy matching. Note that when matching any graph G, we can equivalently view the weight of each matched edge as equally split between its two end-nodes. We can then rewrite the above expression as follows:

\mathbb{E}_{G\sim G(n,d/n),W}[\hat{f}(G,W)]=n\mathbb{E}_{G\sim G(n,d/n),W}[x_{i}(G,W)], (6)

where x_{i}(G,W) is half of the weight of the edge matched to user u_{i} under the greedy matching (zero if u_{i} is unmatched).

We cannot use dynamic programming directly to compute the average weight \mathbb{E}_{G\sim G(n,d/n),W}[x_{i}(G,W)] per user in (6), since G(n,d/n) may contain loops and cannot be divided into independent sub-graphs. However, for large n and d&lt;1, the graph G(n,d/n) is, with very high probability, composed of a large number of random trees without loops. In this case the matching weight x_{i}(G,W) of user u_{i} depends only on the connectivity with the other users in the same tree. To analyze x_{i}(G,W), we mathematically characterize such trees, which turn out to be 'small' because d&lt;1. Note that, in G(n,d/n), each user has n-1 independent potential neighbors, so its random neighbor number follows a binomial distribution B(n-1,d/n) with mean (n-1)d/n\rightarrow d as n becomes large. This binomial distribution is well approximated by the Poisson distribution Poi(d) (with mean d). We define T(d) as a random tree in which each node gives birth to children according to the Poisson distribution Poi(d).
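The random tree T(d) is a Galton-Watson branching process with Poisson(d) offspring, so it can be sampled directly. The sketch below is our own illustration (the Poisson sampler uses Knuth's method); it also checks numerically that for d&lt;1 the mean tree size is 1/(1-d), a fact used again in Appendix F.

```python
import math
import random

def poisson(lam, rng):
    """Sample Poi(lam) by Knuth's multiplication method (fine for small lam)."""
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def sample_tree_size(d, rng, cap=10**6):
    """Total node count of a random tree T(d): every node independently
    gives birth to Poi(d) children; generations are processed as a frontier."""
    size = frontier = 1
    while frontier and size < cap:
        births = sum(poisson(d, rng) for _ in range(frontier))
        size += births
        frontier = births
    return size

rng = random.Random(0)
mean_size = sum(sample_tree_size(0.5, rng) for _ in range(20000)) / 20000
# for d = 0.5 the expected tree size is 1/(1 - d) = 2; mean_size is close to it
```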

Proposition 6

Given a sparse random network G(n,d/n) with d&lt;1 and sufficiently large n, the average matching weight of any node u_{i} is well approximated by the average matching weight of the root node of a random tree T(d), i.e.,

\lim_{n\to\infty}\mathbb{E}_{G\sim G(n,\frac{d}{n}),W}[x_{i}(G,W)]=\mathbb{E}_{T\sim T(d),W}[x_{root}(T,W)]. (7)

The proof is given in Appendix F. We will show numerically later that the approximation in (7) yields a negligible gap and remains accurate as long as d\leq 10. By substituting (7) into (6), we obtain (approximately) the average total weight \mathbb{E}_{G\sim G(n,d/n),W}[\hat{f}(G,W)]. Hence, it remains to derive the form of \mathbb{E}_{T\sim T(d),W}[x_{root}(T,W)]. Given the recursive nature of trees, we can use dynamic programming techniques.

The root node may receive multiple proposals from its children, corresponding to different possible edge weights in the set \{v_{1},v_{2},\cdots,v_{K}\}, and will match to the proposal with the maximum weight. We define y_{k}, k\in\{1,2,\cdots,K\}, as the probability that the root node receives a proposal from a child connected to it by an edge of weight v_{k}. Then, by considering all possible weight combinations of the root's children, we can compute the probability of matching a child at any given weight using the proposal probabilities y_{k}. In a random tree T(d), given that the root node is matched with one of its children, the remaining graph can be divided into several sub-trees generated from the grandchild or child nodes of the root. In any case, a sub-tree starting from any given node has a similar graph structure and statistical properties as the original tree T(d). Thus, we can analytically derive recursive equations for the proposal probabilities \{y_{k}\} at the root node.

Proposition 7

In the random tree T(d), for any k\in\{1,2,\cdots,K\}, the proposal probability y_{k} from a child of edge weight v_{k} to the root node is the unique solution to the following equation:

y_{k}=e^{-(p_{K}+\sum_{j=k+1}^{K}y_{j}p_{j})d}\sum_{i=0}^{\infty}\frac{(p_{k}d)^{i}(1-(1-y_{k})^{i+1})}{(i+1)!\,y_{k}}. (8)

The proof is given in Appendix G. Though (8) is not in closed form, we can easily solve it using bisection and then compute the probability that the root node matches a child at any given weight. From this we derive the average matching weight \mathbb{E}_{T\sim T(d),W}[x_{root}(T,W)] of the root for (7), and thus \mathbb{E}_{G\sim G(n,d/n),W}[\hat{f}(G,W)] in (6). Finally, by comparing with (9) under the optimal matching, we obtain the average performance ratio of Algorithm 1.
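The bisection computation can be sketched as follows. This is our own numerical sketch: the infinite series in (8) is truncated at max_i terms, weight probabilities are 0-indexed, and we exploit the fact that the equation for y_k involves only y_j with j&gt;k, so we solve for y_K first and work downwards.

```python
import math

def solve_proposal_probs(d, p, max_i=200, tol=1e-12):
    """Solve equation (8) for y_1..y_K (returned 0-indexed), top-down.

    p[k] is the probability of the (k+1)-th weight value; K = len(p)."""
    K = len(p)
    y = [0.0] * K
    for k in range(K - 1, -1, -1):            # y_K first, then downwards
        expo = -(p[K - 1] + sum(y[j] * p[j] for j in range(k + 1, K))) * d

        def rhs(yk):
            # series sum_{i>=0} (p_k d)^i (1 - (1 - yk)^{i+1}) / ((i+1)! yk)
            s, term = 0.0, 1.0                # term = (p_k d)^i / (i+1)!
            for i in range(max_i):
                if i > 0:
                    term *= p[k] * d / (i + 1)
                s += term * (1.0 - (1.0 - yk) ** (i + 1)) / yk
            return math.exp(expo) * s

        lo, hi = 1e-9, 1.0                    # rhs(y) - y is decreasing in y
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if rhs(mid) > mid:
                lo = mid
            else:
                hi = mid
        y[k] = (lo + hi) / 2
    return y
```

For example, with V=\{1,2\}, p_1=p_2=1/2 and d=0.5, both proposal probabilities come out strictly between 0 and 1, as required of probabilities.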

IV-C Numerical Results for Random Graphs

Figure 4: The average performance ratio of Algorithm 1 in the large random graph G(n,p=d/n) under different values of the average neighbor number d.

Next, we conduct numerical analysis for sparse random graphs with d&lt;1 and for random graphs with finite d\geq 1. To do so with analytic formulas, we need to approximate the random graph by random trees, and one may wonder whether the approximation error is significant (when d&gt;1). To answer this question, we consider a large network size of n=10{,}000, with edge weights uniformly chosen from the weight set V=\{1,2\} ('low' and 'high'). Our extensive numerical results show that the difference between the simulated average matching weight \mathbb{E}_{G\sim G(n,\frac{d}{n}),W}[x_{i}(G,W)] and the analytically derived average matching weight \mathbb{E}_{T\sim T(d),W}[x_{root}(T,W)] in the approximating tree T(d) is always less than 0.05\% when d&lt;1, and is still less than 1\% even for large 1\leq d\leq 10. This is consistent with Proposition 6.

Fig. 4 shows the average performance ratio of Algorithm 1, which is greater than 79\% for every value of d. It approaches 100\% as d becomes small in the sparse random graph regime. Intuitively, when the average neighbor number d is small and users are sparsely connected, both Algorithm 1 and the optimal algorithm try to match as many existing pairs as possible, resulting in a negligible performance gap. When d is large, each user has many neighbors, and choosing the second- or third-best match in the greedy matching is also close to the optimum. This is consistent with Proposition 4 for dense random graphs.

V Practical Application Aspects

In practice, the network graphs obtained by restricting the D2D sharing range may have different distributions than the 2D grids and G(n,p) graphs used in our analysis. In addition, the actual performance of the algorithm might be degraded by communication failures between nodes that are far apart or by mutual interference among pairs. In this section, we provide an initial investigation of these issues. We construct instances of a network graph based on realistic user location data and check how well our analytical G(n,p=d/n) performance measure captures the actual performance of the greedy algorithm on these graph instances, by tuning d to match the average number of neighbors in the realistic graph. Next, we analyze the impact of D2D communication failures on the optimal choice of the maximum D2D sharing range.

V-A Approximating Realistic Networks

Figure 5: The average matching weight per user of the greedy matching obtained by Algorithm 1 versus the average neighbor number d in the three practical instances of the realistic network and the G(n,p=d/n) network.

The G(n,d/n) network studied in Section IV assumes users connect with each other with the same probability p=d/n, and hence the average performance of Algorithm 1 in G(n,d/n) is characterized by the average neighbor number d. In practice, however, the connectivity distribution of users can follow different laws due to the structure of the environment and D2D communication limitations. To validate our analysis in real scenarios, we run our algorithm on graphs corresponding to realistic mobile user data and compare the numerical results with our analytically derived results for G(n,d/n) using Propositions 6 and 7.

We use the real dataset in [15], which records users' position information in a three-story university building. We choose three instances at the peak (in terms of density) time from the dataset; each instance contains hundreds of users. For these users, we assume that any two of them can share resources with each other when they are on the same story and the distance between them is less than the D2D sharing range L. By setting different values of L, the structure of the graph changes and the average number d(L) of neighbors per node increases. In Fig. 5, we show the average matching weight (per user) of the greedy matching versus the average neighbor number d (instead of versus L) for the three practical instances of the realistic user network and its G(n,d/n) approximation. We observe that the average matching weight increases in d, since increasing d (by increasing L) provides more sharing choices for each user, and that our performance measure obtained for G(n,d/n) approximates the performance on the realistic network well, especially for large d.
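The graph construction used here can be sketched as follows. This is a minimal illustration with hypothetical coordinates; each user is a (story, x, y) tuple and L is the D2D sharing range.

```python
import math

def build_d2d_graph(users, L):
    """Connect two users iff they are on the same story and within distance L;
    returns the edge list and the resulting average neighbor number d(L)."""
    n, edges = len(users), []
    for i in range(n):
        si, xi, yi = users[i]
        for j in range(i + 1, n):
            sj, xj, yj = users[j]
            if si == sj and math.hypot(xi - xj, yi - yj) < L:
                edges.append((i, j))
    return edges, 2 * len(edges) / n

# toy instance: three users on story 0 (two within range), one on story 1
edges, d = build_d2d_graph([(0, 0, 0), (0, 3, 0), (0, 10, 0), (1, 0, 0)], L=5)
```

Sweeping L and recording d(L), as in Fig. 5, amounts to calling this builder for each candidate range.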

V-B D2D Sharing Range under Communication Failures

Our numerical results from the previous section suggest that, as expected, the average matching weight of the greedy matching increases with the D2D sharing range L. One may wonder whether a large L always benefits resource sharing. In fact, for two users who are connected and share resources via a D2D wireless link, a communication failure may occur due to long-distance transmission or mutual interference among different matched pairs. In our experiment, we consider that the resource-sharing transmission between any two users fails with a probability that increases with the distance between them, based on a practical path-loss model, and with the number of interfering pairs in proximity [16]. In our simulation experiment, we consider a large number n=10{,}000 of users uniformly distributed in a circular ground cell with a radius of R=1000 meters, and we adjust the maximum sharing range L inside which two users can apply D2D resource sharing.
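As a stylized illustration of the failure model (the parameters d0, alpha, and beta below are our own hypothetical choices, not the exact path-loss and interference model of [16]), the expected value contributed by a matched edge can be written as the edge weight discounted by the failure probability:

```python
def expected_edge_value(w, dist, n_interferers, d0=100.0, alpha=3.0, beta=0.05):
    """Hypothetical failure model: the transmission fails with a probability
    that grows with distance (path-loss-like term (dist/d0)**alpha) and with
    the number of nearby interfering matched pairs; a failed pair adds zero."""
    p_fail = min(1.0, (dist / d0) ** alpha + beta * n_interferers)
    return w * (1.0 - p_fail)
```

This captures the trade-off behind Fig. 6: a larger sharing range L admits more matching choices but pushes distances (and interferer counts) up, so the expected contribution of a matched edge eventually falls to zero.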

Figure 6: The average matching weight per user of the greedy matching obtained by Algorithm 1 versus the maximum D2D sharing range L, with and without considering D2D communication failures.

In Fig. 6, we depict the average matching weight (per user) of the greedy matching versus the maximum D2D sharing range L for two cases, depending on whether or not we consider communication failures. Under communication failures, this weight first increases and then decreases. Intuitively, when the sharing range L is small, each user has few potential partners to share resources with or interfere with, and failures are rarely an issue. But when L is large, most neighbors are located far away and the channels between matched pairs may cross each other, so there is a high chance that the algorithm chooses a remote neighbor and incurs large path loss or interference. If the communication then fails, the pair adds zero value to the total matching. This is in contrast to the case without failures, where the performance of the system always increases in L.

VI Conclusions

In this paper, we adopt a greedy matching algorithm to maximize the total sharing benefit in large D2D resource sharing networks. This algorithm is fully distributed and has sub-linear complexity O(\log n) in the number of users n. Though the approximation ratio of this algorithm is 1/2 (a worst-case result), we conduct average-case analysis to rigorously prove that it provides a much better average performance ratio relative to the optimal in 2D grids and in random networks G(n,p) that are sparse or very dense. We also use realistic data to show that our analytical G(n,p) performance measure approximates well D2D networks encountered in practice. Finally, we consider communication failures due to practical implementation issues and study the best maximum D2D sharing range.

Appendix A Proof of Proposition 1

For any user u_{i}\in U with degree d(u_{i}), the probability that the maximum weight over its d(u_{i}) incident edges is at most v_{k} is (\sum_{j=1}^{k}p_{j})^{d(u_{i})}, where \sum_{j=1}^{k}p_{j} is the probability that a single edge has weight at most v_{k}. Thus, the average maximum weight over the d(u_{i}) edges incident to user u_{i} is \sum_{k=1}^{K}v_{k}((\sum_{j=1}^{k}p_{j})^{d(u_{i})}-(\sum_{j=1}^{k-1}p_{j})^{d(u_{i})}). Further, the expectation of (3) can be bounded by:

\mathbb{E}_{W}[f^{\star}(G,W)]\leq\frac{1}{2}\sum_{u_{i}\in U}\sum_{k=1}^{K}v_{k}((\sum_{j=1}^{k}p_{j})^{d(u_{i})}-(\sum_{j=1}^{k-1}p_{j})^{d(u_{i})}). (9)

The proof is completed.
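The telescoping sum above is easy to evaluate numerically; a small sketch (for example, with V=\{1,2\} and p_1=p_2=1/2 it reproduces the value 2-(1/2)^d used in Appendix B):

```python
def avg_max_weight(degree, values, probs):
    """E[max of `degree` i.i.d. edge weights] via the CDF telescoping sum of
    Proposition 1: sum_k v_k * (F(v_k)**degree - F(v_{k-1})**degree)."""
    total, cdf_prev, cdf = 0.0, 0.0, 0.0
    for v, p in zip(values, probs):
        cdf += p
        total += v * (cdf ** degree - cdf_prev ** degree)
        cdf_prev = cdf
    return total
```

Summing this quantity over all users and halving it gives the upper bound (9).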

Appendix B Proof of Proposition 2

Based on Proposition 1, we can compute the upper bound on the average total weight under the optimal matching for any given graph. We now consider an n\times n grid with weight set V=\{v_{1}=1,v_{2}=2\} and weight probability distribution P=\{p_{1}=\frac{1}{2},p_{2}=\frac{1}{2}\}. Note that this grid contains (n-2)^{2} nodes of degree 4, 4n-8 nodes of degree 3, and 4 nodes of degree 2. Moreover, the average maximum weight over the edges of a node of degree d is 2-(\frac{1}{2})^{d}. Thus, the average total weight under the optimal matching in the grid is upper bounded by \frac{31}{32}n^{2}-\frac{1}{8}n-\frac{1}{8}.

Regarding the average total weight of the greedy matching, we first note that, for each 2\times n sub-grid, the greedy algorithm must add the vertical edges of weight 2 between the two rows, and the average number of such edges is \frac{n}{2}. Therefore, the average weight contributed by adding the vertical edges of weight 2 equals n.

After adding these edges, the remaining graph breaks into many segments, as shown in Fig. 3. We first compute the average number of segments of length t using a probabilistic argument; it is \frac{n}{2^{t+2}}. Moreover, for a segment of length 1, the algorithm will add the only vertical edge, which has weight 1. Therefore, the average weight contributed by adding vertical edges in segments of length 1 equals \frac{n}{2^{3}}.

For a segment of size 2\times t with t&gt;1, its greedy matching consists of three parts: the individual greedy matching in the first row, the individual greedy matching in the second row, and all possible matchings of the vertical edges between the remaining unmatched nodes of the two rows. Thus, the average total weight under the greedy matching in this segment is at least 2a_{t-1}, where a_{t-1} denotes the average total greedy matching weight in a linear network with t nodes. Therefore, the average weight contributed by matched edges in segments of length t&gt;1 is at least 2\sum_{t=2}^{\infty}\frac{n}{2^{t+2}}a_{t-1}. To compute this, we need the value of a_{t} for every t. Note that a_{0}=0, a_{1}=\frac{3}{2}, a_{2}=\frac{7}{4}, and a_{t}=\frac{7}{4}+\frac{3}{4}a_{t-2}+\frac{1}{4}a_{t-3} according to (5). Thus, we can compute a_{t} for a finite number of values of t, for example from t=1 to t=1000, and then derive a lower bound for \sum_{t=2}^{\infty}\frac{n}{2^{t+2}}a_{t-1}, namely \sum_{t=2}^{1000}\frac{n}{2^{t+2}}a_{t-1}\approx 0.26n. The value of \sum_{t=1001}^{\infty}\frac{n}{2^{t+2}}a_{t-1} is negligible, since 2^{t+2} increases exponentially while a_{t} increases only linearly for large t.
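The truncated sum described above can be reproduced in a few lines; with the stated recursion and initial values, it indeed evaluates to about 0.26:

```python
def greedy_segment_sum(T=1000):
    """Compute sum_{t=2}^{T} a_{t-1} / 2^{t+2}, where a_0 = 0, a_1 = 3/2,
    a_2 = 7/4, and a_t = 7/4 + (3/4) a_{t-2} + (1/4) a_{t-3} as in (5)."""
    a = [0.0, 1.5, 1.75]
    for t in range(3, T + 1):
        a.append(1.75 + 0.75 * a[t - 2] + 0.25 * a[t - 3])
    return sum(a[t - 1] / 2 ** (t + 2) for t in range(2, T + 1))
```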

In sum, we can prove that the average total weight under the greedy matching in the n\times n grid is lower bounded by:

\lim_{n\to\infty}\mathbb{E}_{W}[\hat{f}(G,W)]&gt;\frac{n}{2}(n+\frac{1}{8}n+2\sum_{t=2}^{1000}\frac{n}{2^{t+2}}a_{t-1})&gt;0.8225n^{2},

where \mathbb{E}_{W}[\hat{f}(G,W)] denotes the average total weight under the greedy matching in the n\times n grid and a_{t} denotes the average total weight under the greedy matching in a linear network with t edges.

We now know that the average total weight under the greedy matching is lower bounded by \lim_{n\to\infty}\mathbb{E}_{W}[\hat{f}(G,W)]&gt;0.8225n^{2}, while the average total weight under the optimal matching is upper bounded by \frac{31}{32}n^{2}+o(n^{2}). Therefore, the average performance guarantee is \frac{0.8225}{31/32}\approx 84.90\% as n\to\infty.

Appendix C Proof of Proposition 3

We first consider the special case of a 1\times n grid. For any user u_{i} in such a grid, let I(u_{i}) be the indicator variable of the event that the length H(u_{i}) of the longest chain (sequence of edges) with non-decreasing weights starting from u_{i} towards the left or right side is greater than c\log n (c is a constant). Let I(G) be the indicator variable of the event that the linear graph G has at least one such chain of length greater than c\log n. Then we have:

\mathbb{E}(I(G))\leq\mathbb{E}(\sum_{i=1}^{n}I(u_{i}))\leq 2nq, (10)

where q denotes the probability that c\log n consecutive edges have non-decreasing weights.

The weight of each edge independently takes a value from the K weight values \{v_{1},v_{2},\cdots,v_{K}\} according to the probability distribution P=\{p_{1},p_{2},\cdots,p_{K}\}. Then, for any c\log n edges, there are in total:

\sum_{k=1}^{K-1}{{1+c\log n}\choose k}{K-2\choose k-1},

kinds of non-decreasing weight combinations, and each combination occurs with probability at most (\max\{p_{1},p_{2},\cdots,p_{K}\})^{c\log n}. Therefore, the probability q that any c\log n consecutive edges have non-decreasing weights is upper bounded by:

q\leq(\max\{p_{1},p_{2},\cdots,p_{K}\})^{c\log n}\sum_{k=1}^{K-1}{1+c\log n\choose k}{K-2\choose k-1}\leq n^{K}(\max\{p_{1},p_{2},\cdots,p_{K}\})^{c\log n}.

Then, we have:

\mathbb{E}(I(G))\leq 2n^{K+1}(\max\{p_{1},p_{2},\cdots,p_{K}\})^{c\log n}.

Note that when c&gt;\frac{K+1}{-\log\max\{p_{1},p_{2},\cdots,p_{K}\}}, \mathbb{E}(I(G)) converges to 0 as n\to\infty.

Therefore, we conclude that, in a 1\times n grid, the algorithm runs in O(\log n) time w.h.p. by the first moment method.

Next, we extend our analysis to the n\times n grid. In such a grid, the number of possible chains that start from any given node u_{i} and have non-decreasing weights is no longer two (towards left or right) as in 1\times n grid networks, but is still less than 4K (the weight set size K is a given constant). This is because, in Algorithm 1, every node uses priorities to break ties among neighbors whose edges have the same weight, which significantly reduces the number of possible chains relevant to a user's decisions. Then we have:

\mathbb{E}(I(G))\leq\mathbb{E}(\sum_{i=1}^{n}I(u_{i}))\leq 4Kn^{2}q. (11)

Using arguments similar to those for the 1\times n grid, we can prove that in the n\times n grid the algorithm runs in O(\log n) time w.h.p..

Appendix D Proof of Proposition 4

In the random graph G(n,p) with a large user number n and a constant connection probability p, the probability that there exists a set of \sqrt{n} nodes with fewer than \sqrt{n} edges among them is upper bounded by:

\lim_{n\to\infty}{n\choose\sqrt{n}}\sum_{i=0}^{\sqrt{n}-1}{\frac{\sqrt{n}(\sqrt{n}-1)}{2}\choose i}p^{i}(1-p)^{\frac{\sqrt{n}(\sqrt{n}-1)}{2}-i}\leq\lim_{n\to\infty}(1-p)^{\frac{n-\sqrt{n}}{2}}{n\choose\sqrt{n}}^{2}\sum_{i=0}^{\sqrt{n}-1}(\frac{p}{1-p})^{i}=0.

Therefore, as n\to\infty, any \sqrt{n} nodes in G(n,p) have more than \sqrt{n} edges among them with probability 1.

Let E_{i} denote the number of edges remaining in the random graph after the greedy matching algorithm has added i edges. The probability that the heaviest of E_{i} edges has weight v_{K} is 1-(\sum_{k=1}^{K-1}p_{k})^{E_{i}}. Thus, the probability that the first edge added by the greedy algorithm has weight v_{K} is 1-(\sum_{k=1}^{K-1}p_{k})^{E_{0}}. After the first edge is added, the probability that the heaviest remaining edge also has weight v_{K} is (1-(\sum_{k=1}^{K-1}p_{k})^{E_{0}})(1-(\sum_{k=1}^{K-1}p_{k})^{E_{1}})\geq 1-(\sum_{k=1}^{K-1}p_{k})^{E_{0}}-(\sum_{k=1}^{K-1}p_{k})^{E_{1}}. Similarly, as n\to\infty, the probability that the (n-\sqrt{n})/2-th edge added has weight v_{K} satisfies:

\lim_{n\to\infty}1-\sum_{i=0}^{(n-\sqrt{n})/2-1}(\sum_{k=1}^{K-1}p_{k})^{E_{i}}\geq\lim_{n\to\infty}1-\frac{n-\sqrt{n}}{2}(\sum_{k=1}^{K-1}p_{k})^{E_{(n-\sqrt{n})/2-1}}\geq\lim_{n\to\infty}1-\frac{n-\sqrt{n}}{2}(\sum_{k=1}^{K-1}p_{k})^{\sqrt{n}}=1, (12)

where the second inequality holds because the number E_{(n-\sqrt{n})/2-1} of edges among the remaining \sqrt{n}+2 unmatched users is larger than \sqrt{n} with probability 1 as n\to\infty, as proved earlier.

Therefore, all of the first (n-\sqrt{n})/2 edges added to the greedy matching have the largest weight v_{K} with probability 1. Moreover, an obvious upper bound on the total weight under the optimal matching is nv_{K}/2, since any two users can be matched by an edge of weight at most v_{K}. Therefore, the average performance guarantee of the greedy algorithm is:

\lim_{n\to\infty}\frac{v_{K}((n-\sqrt{n})/2)}{v_{K}n/2}=1.

The proof is completed.

Appendix E Proof of Proposition 5

Similarly, let I(u_{i}) be the indicator variable of the event that the length H(u_{i}) of the longest chain (sequence of edges) with non-decreasing weights starting from u_{i} is greater than c\log n (c is a constant), and let I(G) be the indicator variable of the event that the graph G has at least one such chain of length greater than c\log n. Then we have:

\mathbb{E}(I(G))\leq\mathbb{E}(\sum_{i=1}^{n}I(u_{i}))=n\mathbb{E}(I(u_{1}))\leq n(\frac{d(n-1)}{n})^{c\log n}q&lt;nd^{c\log n}q,

where q denotes the probability that c\log n consecutive edges have non-decreasing weights.

Using arguments similar to those in the proof of Proposition 3, we have q&lt;n^{K}(\frac{\max\{p_{1},p_{2},\cdots,p_{K}\}}{2})^{c\log n}. Then we obtain:

\mathbb{E}(I(G))\leq(\frac{1}{2})^{-K}n^{K+1}d^{c\log n}(\frac{\max\{p_{1},p_{2},\cdots,p_{K}\}}{2})^{c\log n}=(\frac{1}{2})^{-K}n^{K+1-c(-\log\frac{\max\{p_{1},p_{2},\cdots,p_{K}\}}{2}-\log d)}.

Note that when d&lt;\frac{2}{\max\{p_{1},p_{2},\cdots,p_{K}\}}, there always exists a constant c&gt;\frac{K+1}{\log 2-\log\max\{p_{1},p_{2},\cdots,p_{K}\}-\log d} that makes \mathbb{E}(I(G)) converge to 0 as n\to\infty. Therefore we conclude that the algorithm terminates within c\log n iterations w.h.p. by the first moment method.

Appendix F Proof of Proposition 6

In G(n,d/n) with d&lt;1, to prove (7) we first show that the connected component of any user u_{i} has no loops w.h.p.; we can therefore analyze the conditional expectation \mathbb{E}_{G\sim G(n,d/n)}[x_{i}(G)|C(u_{i})=0], where the number C(u_{i}) of loops in u_{i}'s component is zero, instead of directly analyzing \mathbb{E}_{G\sim G(n,d/n)}[x_{i}(G)] (step 1 below). It then remains to show that \mathbb{E}_{G\sim G(n,d/n)}[x_{i}(G)|C(u_{i})=0] is well approximated by \mathbb{E}_{T\sim T(d),W}[x_{root}(T,W)] in the approximating random tree T(d), as both are computed on graphs without loops (step 2 below).

Step 1: G(n,d/n) contains a random number of loops, each involving at least three users, and any t\geq 3 users can form a loop in \frac{(t-1)!}{2} distinct permutations. Thus, the average total number of loops is:

\mathbb{E}(Loop)=\sum_{t=3}^{n}{n\choose t}\frac{(t-1)!}{2}(\frac{d}{n})^{t}&lt;\frac{1}{6}\sum_{t=3}^{n}d^{t}&lt;\frac{d^{3}}{6(1-d)}, (13)

which implies that, regardless of the graph size n, there is at most a constant expected number of loops. Moreover, since [14] proves that G(n,d/n) almost surely has no connected component of size larger than c\log n for some constant c, the average size of the largest connected component in G(n,d/n) is only o(n). Combining this with (13), the average total number of users in components with loops is at most o(n)\mathbb{E}(Loop)=o(n). Therefore, the probability that user u_{i} lies in a component with a loop is:

\lim_{n\to\infty}Prob(C(u_{i})\geq 1)=\frac{o(n)}{n}=0, (14)

where C(u_{i}) denotes the number of loops in the connected component of user u_{i}. Based on (14), we now have:

\lim_{n\to\infty}\mathbb{E}_{G\sim G(n,\frac{d}{n})}[x_{i}(G)]=\lim_{n\to\infty}\mathbb{E}_{G\sim G(n,\frac{d}{n})}[x_{i}(G)|C(u_{i})=0]. (15)

Step 2: In this step, it remains to prove that the conditional expectation \mathbb{E}_{G\sim G(n,d/n)}[x_{i}(G)|C(u_{i})=0] is well approximated by \mathbb{E}_{T\sim T(d),W}[x_{root}(T,W)]. Given that the connected component of user u_{i} is a tree (i.e., C(u_{i})=0), we start by showing that the users in the component connect with each other in a similar way as in the random tree T(d).

We perform a breadth-first search (BFS) starting from u_{i} to explore all its connected users, searching all its direct neighbors before moving on to the two-hop neighbors. Note that the number of neighbors we explore from user u_{i} follows the binomial distribution B(n-1,d/n). In fact, for any user in the BFS tree of u_{i}, the number of new neighbors we explore directly from this user follows the binomial distribution B(n-m,d/n), where m is the number of users that have already been explored and cannot be explored again (otherwise a loop would occur). Meanwhile, in T(d), each node gives birth to children according to the same Poisson distribution Poi(d), no matter how large the tree currently is.

Next, we prove that the difference between branching under B(n-m,d/n) and under Poi(d) is negligible. Indeed, for any t,m\leq c\log n, the ratio of the probability of obtaining t under B(n-m,d/n) to the probability of obtaining t under Poi(d) is bounded by:

1-\frac{1}{\sqrt{n}}\leq\frac{{n-m\choose t}(\frac{d}{n})^{t}(1-\frac{d}{n})^{n-m-t}}{e^{-d}d^{t}/t!}\leq 1+\frac{1}{\sqrt{n}}, (16)

when nn is sufficiently large.

We define \Theta as the set of all possible graph structures that the random tree T(d) can take, and p_{1}(\theta) as the corresponding probability of any structure \theta\in\Theta. As the set \Theta includes all possible structures of the BFS tree of u_{i}, we define p_{2}(\theta) similarly for the BFS tree. Then, based on (16), for any \theta\in\Theta with user size s(\theta)&lt;c\log n, we have:

(1-\frac{1}{\sqrt{n}})^{s(\theta)}\leq\frac{p_{2}(\theta)}{p_{1}(\theta)}\leq(1+\frac{1}{\sqrt{n}})^{s(\theta)}, (17)

which holds because both probabilities p_{1}(\theta) and p_{2}(\theta) are products of the s(\theta) users' individual probabilities of giving birth to the number of children specified by \theta.

We now note that the difference between \mathbb{E}_{T\sim T(d)}[x_{root}(T)] and \mathbb{E}_{G\sim G(n,d/n)}[x_{i}(G)|C(u_{i})=0] is determined by the difference between p_{1}(\theta) and p_{2}(\theta):

\mathbb{E}_{T\sim T(d)}[x_{root}(T)]=\sum_{\theta\in\Theta}p_{1}(\theta)x_{root}(\theta), (18)
\mathbb{E}_{G\sim G(n,d/n)}[x_{i}(G)|C(u_{i})=0]=\sum_{\theta\in\Theta}p_{2}(\theta)x_{root}(\theta). (19)

(19) uses x_{root}(\theta) instead of x_{i}(\theta) because the matching weight x_{i}(\theta) of node u_{i} equals the matching weight x_{root}(\theta) of the root node when the BFS tree of u_{i} has the same structure \theta as the rooted tree.

By substituting (17) into (18) and (19), we have:

\lim_{n\to\infty}|\mathbb{E}_{T\sim T(d)}[x_{root}(T)]-\mathbb{E}_{G\sim G(n,d/n)}[x_{i}(G)|C(u_{i})=0]|
=\lim_{n\to\infty}|\sum_{\theta\in\Theta}x_{root}(\theta)(p_{1}(\theta)-p_{2}(\theta))|
\leq\lim_{n\to\infty}\frac{v_{K}}{2}\sum_{\{\theta\in\Theta:s(\theta)\geq c\log n\}}(p_{1}(\theta)+p_{2}(\theta))
+\frac{v_{K}}{2}\sum_{\{\theta\in\Theta:s(\theta)&lt;c\log n\}}p_{1}(\theta)((1+\frac{1}{\sqrt{n}})^{s(\theta)}-1)
\overset{\text{(a)}}{=}\lim_{n\to\infty}\frac{v_{K}}{2}\frac{1}{\sqrt{n}}\sum_{\{\theta\in\Theta:s(\theta)&lt;c\log n\}}p_{1}(\theta)s(\theta)
\overset{\text{(b)}}{\leq}\lim_{n\to\infty}\frac{v_{K}}{2}\frac{1}{\sqrt{n}}\frac{1}{1-d}=0, (20)

where we split the analysis into two cases depending on whether the graph structure $\theta$ has size larger than $c\log n$ or not. As mentioned earlier, [14] proves that $G(n,d/n)$ almost surely has no connected component of size larger than $c\log n$ for the constant $c$, so $\sum_{\{\theta\in\Theta:\, s(\theta)\geq c\log n\}} p_2(\theta)=0$ in the limit; moreover, by the first moment method, we also have $\sum_{\{\theta\in\Theta:\, s(\theta)\geq c\log n\}} p_1(\theta)=0$ in the limit. Together these yield equality (a). Inequality (b) follows because the average size of the random tree $T(d)$ is $\sum_{\theta\in\Theta} p_1(\theta)\, s(\theta)=\sum_{t=0}^{\infty} d^t=\frac{1}{1-d}$, where $d^t$ is the average size of the $t$-th generation, as each node gives birth to $d$ children on average.
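As a quick numerical sanity check of the mean-size formula $\frac{1}{1-d}$ used in step (b), one can simulate the branching process $T(d)$ with Poisson($d$) offspring. The sketch below is our own illustration (function names and parameter values are assumptions, not part of the proof):

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method: count uniform draws whose running product stays above e^{-lam}.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tree_size(d, rng, cap=10**6):
    # Total progeny of a Galton-Watson tree T(d) with Poisson(d) offspring,
    # grown generation by generation (cap guards against the supercritical case).
    size, frontier = 1, 1
    while frontier > 0 and size < cap:
        births = sum(sample_poisson(d, rng) for _ in range(frontier))
        size += births
        frontier = births
    return size

if __name__ == "__main__":
    d, trials = 0.5, 50_000
    rng = random.Random(0)
    avg = sum(tree_size(d, rng) for _ in range(trials)) / trials
    print(avg, 1 / (1 - d))  # empirical mean vs. 1/(1-d) = 2
```

For subcritical $d<1$ the empirical average converges to $\frac{1}{1-d}$ as the number of trials grows.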

Based on (15) and (20), we finally prove (7).

Appendix G Proof of Proposition 7

To compute the proposal probability $y_k$, we further define $y_k^c$ as the probability that the root node receives a proposal from a child connected to it by an edge of weight $v_k$, given that this child gives birth to $c$ grandchild nodes (which happens with probability $\binom{n-1}{c}(d/n)^c(1-d/n)^{n-1-c}\to\frac{d^c}{c!}e^{-d}$ as $n\to\infty$). If the edge between the root node and the child has the maximum weight (i.e., $k=K$), the child sends a proposal to the root node only when each of the $i$ grandchildren whose edges to the child also have the maximum weight among the $c$ grandchild nodes (which happens with probability $\binom{c}{i}(p_K)^i(1-p_K)^{c-i}$) either wants a great-grandchild (which happens with probability $1-y_K$) or has lower priority to be added. Thus, the recursive equation for the proposal probability $y_K^c$ is given as follows:

$$y_K^c = \sum_{i=0}^{c}\binom{c}{i}(p_K)^i(1-p_K)^{c-i}\left(\sum_{j=0}^{i}\binom{i}{j}\, y_K^{\,i-j}(1-y_K)^j\,\frac{1}{i-j+1}\right).$$

Moreover, by considering all possible numbers of grandchildren that the child can give birth to, we can derive the aggregate recursive equation for the proposal probability $y_K$:

\begin{align*}
y_K &= \sum_{c=0}^{\infty}\frac{d^c}{c!}e^{-d}\, y_K^c \\
&= \sum_{c=0}^{\infty}\frac{d^c}{c!}e^{-d}\sum_{i=0}^{c}\binom{c}{i}(p_K)^i(1-p_K)^{c-i}\left(\sum_{j=0}^{i}\frac{\binom{i}{j}\, y_K^{\,i-j}(1-y_K)^j}{i-j+1}\right) \\
&= e^{-p_K d}\sum_{i=0}^{\infty}\frac{(p_K d)^i\left(1-(1-y_K)^{i+1}\right)}{(i+1)!\; y_K}, \qquad (21)
\end{align*}

after a summation by parts. Note that the term $\frac{1}{y_K}(1-(1-y_K)^{i+1})$ on the right-hand side of the equation is decreasing in $y_K$ for $y_K\in(0,1)$. Moreover, as $y_K\to 0$, the RHS of the equation tends to 1, while at $y_K=1$ the RHS equals $\frac{1}{p_K d}(1-e^{-p_K d})<1$ for any $d>0$ and $0<p_K<1$. Therefore, there exists a unique solution $y_K^\star$ to the above equation in the interval $(0,1)$.
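The summation-by-parts step can be made explicit. Substituting $m=i-j$ and using the identity $\binom{i}{m}\frac{1}{m+1}=\frac{1}{i+1}\binom{i+1}{m+1}$, the innermost sum of (21) evaluates in closed form as

\begin{align*}
\sum_{j=0}^{i}\binom{i}{j}\frac{y_K^{\,i-j}(1-y_K)^{j}}{i-j+1}
&= \sum_{m=0}^{i}\binom{i}{m}\frac{y_K^{\,m}(1-y_K)^{i-m}}{m+1}\\
&= \frac{1}{(i+1)\,y_K}\sum_{m=0}^{i}\binom{i+1}{m+1}\,y_K^{\,m+1}(1-y_K)^{i-m}\\
&= \frac{1-(1-y_K)^{i+1}}{(i+1)\,y_K},
\end{align*}

while the two outer sums collapse by Poisson thinning: $\sum_{c\geq i}\frac{d^c}{c!}e^{-d}\binom{c}{i}p_K^i(1-p_K)^{c-i}=\frac{(p_K d)^i}{i!}e^{-p_K d}$.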

Then, after the proposal probability $y_K$ of the maximum weight has been determined, the proposal probability $y_{K-1}$ is the next to compute, as $v_{K-1}$ becomes the maximum weight among the remaining weights. Similarly, for the proposal probability $y_{K-1}$, we can derive the following recursive equation:

$$y_{K-1} = e^{-(p_K + y_K p_K) d}\sum_{i=0}^{\infty}\frac{(p_{K-1}\, d)^i\left(1-(1-y_{K-1})^{i+1}\right)}{(i+1)!\; y_{K-1}}. \qquad (22)$$

Using a similar argument as for $y_K$, we can prove that there exists a unique solution $y_{K-1}^\star$ to the above equation in the interval $(0,1)$ after substituting the solution $y_K^\star$ of (21) into (22).

Eventually, for any $k=1,2,\ldots,K$, we can derive the following recursive equation:

$$y_k = e^{-\left(p_K + \sum_{j=k+1}^{K} y_j p_j\right) d}\sum_{i=0}^{\infty}\frac{(p_k d)^i\left(1-(1-y_k)^{i+1}\right)}{(i+1)!\; y_k},$$

and prove that there exists a unique solution to each such equation, solving backward from $k=K$ down to $k=1$.
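Since each right-hand side above is strictly decreasing in $y_k$ while the left-hand side is increasing, the unique fixed points $y_K^\star,\ldots,y_1^\star$ can be computed numerically by bisection, solving backward from $k=K$. The sketch below is our own illustration (function names, series truncation, and parameter values are assumptions, not from the paper):

```python
import math

def rhs(y, p_k, prefactor, d, terms=60):
    # Truncated series for the right-hand side of the recursive equation;
    # the factor (1 - (1-y)^{i+1}) / y is evaluated directly for y in (0, 1).
    pd = p_k * d
    total = sum(pd**i * (1 - (1 - y)**(i + 1)) / (math.factorial(i + 1) * y)
                for i in range(terms))
    return prefactor * total

def solve_proposal_probs(p, d, tol=1e-10):
    # p[k-1] is the probability p_k of weight v_k, for k = 1..K (p[K-1] is the max).
    K = len(p)
    y = [0.0] * K
    for k in range(K - 1, -1, -1):  # k = K down to 1 in the paper's indexing
        higher = sum(y[j] * p[j] for j in range(k + 1, K))
        prefactor = math.exp(-(p[K - 1] + higher) * d)
        lo, hi = 1e-12, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            # RHS(y) - y is strictly decreasing, so the root lies above mid iff RHS > mid.
            if rhs(mid, p[k], prefactor, d) > mid:
                lo = mid
            else:
                hi = mid
        y[k] = (lo + hi) / 2
    return y

if __name__ == "__main__":
    # Example: K = 4 equally likely weights, average degree d = 2.
    print(solve_proposal_probs([0.25, 0.25, 0.25, 0.25], d=2.0))
```

Each step reuses the solutions $y_{k+1}^\star,\ldots,y_K^\star$ already computed, mirroring the sequential structure of the proof.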

References

  • [1] X. Wang, L. Duan, and R. Zhang, “User-Initiated Data Plan Trading via a Personal Hotspot Market,” in IEEE Transactions on Wireless Communications, vol. 15, no. 11, pp. 7885-7898, Nov. 2016.
  • [2] L. Zheng, C. Joe-Wong, C. W. Tan, S. Ha, and M. Chiang, “Secondary markets for mobile data: Feasibility and benefits of traded data plans,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 1580-1588.
  • [3] Y. Guo, L. Duan, and R. Zhang, “Cooperative Local Caching Under Heterogeneous File Preferences,” in IEEE Transactions on Communications, vol. 65, no. 1, pp. 444-457, Jan 2017.
  • [4] L. Keller, A. Le, B. Cici, H. Seferoglu, C. Fragouli, and A. Markopoulou, “Microcast: Cooperative video streaming on smartphones,” In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services (MobiSys ’12), New York, 2012, pp. 57–70.
  • [5] L. Jiang, H. Tian, Z. Xing, K. Wang, K. Zhang, S. Maharjan, S. Gjessing, and Y. Zhang, “Social-aware energy harvesting device-to-device communications in 5G networks,” in IEEE Wireless Communications, vol. 23, no. 4, pp. 20-27, August 2016.
  • [6] X. Chen, L. Pu, L. Gao, W. Wu, and D. Wu, “Exploiting Massive D2D Collaboration for Energy-Efficient Mobile Edge Computing,” in IEEE Wireless Communications, vol. 24, no. 4, pp. 64-71, Aug. 2017.
  • [7] A. Schrijver, “Combinatorial Optimization: Polyhedra and Efficiency,” Springer Science & Business Media, Vol. 24, 2003.
  • [8] R. Preis, “Linear time 1/2-approximation algorithm for maximum weighted matching in general graphs,” in Annual Symposium on Theoretical Aspects of Computer Science, Springer, 1999, pp. 259-269.
  • [9] J.-H. Hoepman, “Simple distributed weighted matchings,” 2004. [Online]. Available: https://arxiv.org/abs/cs/0410047
  • [10] Z. Lotker, B. Patt-Shamir, and S. Pettie, “Improved distributed approximate matching,” Journal of the ACM (JACM), vol. 62, no. 5, pp. 129-136, Nov 2015.
  • [11] H. B. Lim, Y. M. Teo, P. Mukherjee, V. T. Lam, W. F. Wong, and S. See, “Sensor grid: integration of wireless sensor networks and the grid,” The IEEE Conference on Local Computer Networks 30th Anniversary (LCN’05), Sydney, NSW, 2005, pp. 91-99.
  • [12] W. Sun, H. Yamaguchi, K. Yukimasa, and S. Kusumoto, “GVGrid: A QoS Routing Protocol for Vehicular Ad Hoc Networks,” 2006 14th IEEE International Workshop on Quality of Service, New Haven, CT, 2006, pp. 130-139.
  • [13] S. Janson, T. Luczak, and A. Rucinski, “Random Graphs,” John Wiley & Sons, Sep 2011.
  • [14] P. Erdős and A. Rényi, “On the evolution of random graphs,” Publ. Math. Inst. Hung. Acad. Sci., 1960, pp. 17-61.
  • [15] Z. Tóth and J. Tamás, “Miskolc IIS hybrid IPS: Dataset for hybrid indoor positioning,” 2016 26th International Conference Radioelektronika (RADIOELEKTRONIKA), IEEE, 2016, pp. 408-412.
  • [16] T. S. Rappaport, “Wireless Communications: Principles and Practice,” Vol. 2, New Jersey: Prentice Hall PTR, 1996.