
Evolution of weights on a connected finite graph

Jicheng Ma [email protected], Yunyan Yang (corresponding author) [email protected], School of Mathematics, Renmin University of China, Beijing, 100872, China
Abstract

On a connected finite graph, we propose an evolution of weights that includes Ollivier's Ricci flow as a special case. During the evolution, the speed of change of the weight on each edge is exactly the difference between the Wasserstein distance of two probability measures and a certain graph distance. Here the probability measure may be chosen as an \alpha-lazy one-step random walk, an \alpha-lazy two-step random walk, or a general probability measure. Based on ODE theory, we show that the initial value problem has a unique global solution.

A discrete version of the above evolution is applied to the problem of community detection. Our algorithm is based on such a discrete evolution, where the probability measures are chosen as an \alpha-lazy one-step random walk and an \alpha-lazy two-step random walk respectively. Note that the latter measure has not been used in previous works [23, 16, 2, 20]. Here, as in [20], only one surgery needs to be performed, after the last iteration. Moreover, our algorithm is much simpler than those of [16, 2, 20], which were all based on Lin-Lu-Yau's Ricci curvature. The code is available at https://github.com/mjc191812/Evolution-of-weights-on-a-connected-finite-graph.

keywords:
weighted graph; Ollivier’s Ricci flow; Lin-Lu-Yau’s Ricci flow; community detection
MSC:
[2020] 05C21; 05C85; 35R02; 68Q06
journal: ***

1 Introduction

In 1982, Ricci flow was first introduced by Hamilton [14] to deform a metric on a Riemannian manifold (M,g) according to a differential equation for a one-parameter family of metrics g(t) on a time interval, namely

\partial_{t}g(t)=-2\,{\rm Ric}(g(t)),

where {\rm Ric}(g(t)) is the Ricci curvature of the metric g(t). Generally speaking, Ricci flow smooths the metric, but it may develop singularities. It was used ingeniously by Perelman [26] to solve the Poincaré conjecture, and by Brendle and Schoen [4] to prove the differentiable sphere theorem.

Ricci flow can also be applied to discrete geometry, which includes complex networks and weighted finite graphs. To be more specific, we assume that G=(V,E,\mathbf{w}) is a connected weighted finite graph. Here V=\{z_{1},z_{2},\cdots,z_{n}\} denotes the vertex set, E=\{e_{1},e_{2},\cdots,e_{m}\} represents the edge set, and \mathbf{w}=(w_{e_{1}},w_{e_{2}},\cdots,w_{e_{m}}) stands for the weights on the edges. In 2009, Ollivier [24] suggested a Ricci flow

w_{e}^{\prime}(t)=-\kappa_{e}^{\alpha}(t)w_{e}(t), (1.1)

where, for each e\in E, \kappa_{e}^{\alpha} is its Ollivier's Ricci curvature, and \alpha\in[0,1] is a parameter. Ni et al [23] found that discrete Ricci flow can be applied to detect community structures in networks, similar to classical Ricci flow on manifolds.

Community detection is an important research area in network analysis, aiming to identify clusters or groups of nodes that are more densely connected internally than with the rest of the network. This concept is crucial in various fields, including sociology [28, 31], biology [12, 3], and computer science [29]. Many algorithms [32, 11, 21, 18, 5, 25, 12] have been developed to detect and separate communities. Most of these algorithms focus on identifying dense clusters in a graph, using randomized approaches like label propagation or random walks, optimizing centrality measures such as betweenness centrality, or considering measures like modularity. Ricci curvature, introduced by Forman [10], Ollivier [24] and Lin-Lu-Yau [19] and acknowledged as an essential tool in graph theory, facilitates deeper insights into network structures and provides an innovative approach to community detection. In 2019, based on (1.1) and a surgery process, Ni et al [23] established a very effective method of community detection. In 2021, Bai et al [1] studied a star coupling Ricci curvature based on Ollivier's Ricci curvature. In 2022, this community detection method was extended by Lai et al [16] to a normalized Ricci flow based on [19, 1]. Experiments in [16] showed that their methods were still effective. Very recently, the authors of the current paper proposed in [20] a modified Ricci flow and a quasi-normalized Ricci flow for arbitrary weighted graphs. Notably, the results on global existence and uniqueness of solutions in [20] do not rely on the exit condition suggested by Bai et al [2].

Although [23] and [16] have shown great success in applications, they lack theoretical support. To make up for this shortcoming, concerning Lin-Lu-Yau's Ricci curvature \kappa_{e}, Bai et al [2] recently obtained the existence and uniqueness of global solutions to the Ricci flow

w_{e}^{\prime}(t)=-\kappa_{e}(t)w_{e}(t) (1.2)

and the normalized Ricci flow

w_{e}^{\prime}(t)=-\kappa_{e}(t)w_{e}(t)+w_{e}(t)\sum_{\tau\in E}\kappa_{\tau}(t)w_{\tau}(t), (1.3)

under an exit condition: if w_{e}>\rho_{e}=\inf_{\gamma}\sum_{\tau\in\gamma}w_{\tau}, then the edge e is deleted, where e=xy\in E and the infimum is taken over all paths \gamma connecting x and y. Notice that in all of the above mentioned articles, the authors require surgery while extending solutions.

In [20], we found that if w_{e} is replaced by \rho_{e} on the right hand side of (1.2) and in the analog of (1.3), namely

w_{e}^{\prime}(t)=-\kappa_{e}(t)\rho_{e}(t) (1.4)

and

w_{e}^{\prime}(t)=-\kappa_{e}(t)\rho_{e}(t)+\frac{\sum_{\tau\in E}\kappa_{\tau}(t)w_{\tau}(t)}{\sum_{\tau\in E}w_{\tau}(t)}\rho_{e}(t), (1.5)

then both (1.4) and (1.5) have unique global solutions w_{e}(t) for each e\in E and all t\in[0,+\infty). In this setting, we removed the exit condition of [2] and did not require surgery during the flow. Meanwhile, experiments show that (1.4) and (1.5) have the same impressive performance as (1.2) and (1.3), and achieve better modularity.

Currently, there is no existence result for solutions to Ollivier's Ricci flow (1.1). To ensure global existence, we modify equation (1.1) by replacing the term \kappa_{e}^{\alpha}w_{e} with \kappa_{e}^{\alpha}\rho_{e}, and then prove the existence and uniqueness of global solutions to the modified Ricci flow. In fact, we shall consider a more general evolution of weights on a connected weighted finite graph (V,E,\mathbf{w}), which reads as

w_{e}^{\prime}(t)=W(\mu(x,\cdot),\mu(y,\cdot))-\rho_{e}, (1.6)

where e=xy\in E and W(\mu(x,\cdot),\mu(y,\cdot)) denotes the Wasserstein distance between the two probability measures \mu(x,\cdot) and \mu(y,\cdot). One easily sees that if \mu(x,z)=\mu_{x}^{\alpha}(z) is an \alpha-lazy one-step random walk, then (1.6) reduces to w_{e}^{\prime}=-\kappa_{e}^{\alpha}\rho_{e}. Through a large number of experiments, we conclude that (1.6) is effective in community detection. We examine how key factors, such as \alpha and the number of iterations, affect the results. Tested on real data and compared with existing methods, our approach demonstrates strong performance, especially in terms of modularity.

Before ending this introduction, we mention another interesting curvature flow, the Bakry-Émery curvature flow, studied by Cushing et al; interested readers are referred to [7, 8] for details. The remaining part of this paper is organized as follows: In Section 2, we state our main results for the evolution of weights; in Section 3, we prove the Lipschitz property of the Wasserstein distance; the proofs of Theorems 2.1 and 2.2 are given in Section 4; in Section 5, we construct several examples of converging flows; experiments are presented in Section 6.

2 Notations and main results

Let G=(V,E,\mathbf{w}) be a connected weighted graph, where V=\{z_{1},z_{2},\cdots,z_{n}\} denotes the set of all vertices, E=\{e_{1},e_{2},\cdots,e_{m}\} denotes the set of all edges, and \mathbf{w}=(w_{e_{1}},w_{e_{2}},\cdots,w_{e_{m}}) denotes the vector of weights on the edges. Clearly, each function f:G\rightarrow\mathbb{R} corresponds to a vector (f(z_{1}),f(z_{2}),\cdots,f(z_{n})) in \mathbb{R}^{n}. Nevertheless, when the weights \mathbf{w} change, f can be regarded as a vector-valued function, namely

{\bf f}:\,\mathbb{R}^{m}_{+}\rightarrow\mathbb{R}^{n},\quad\mathbf{w}\mapsto(f(z_{1}),f(z_{2}),\cdots,f(z_{n})).

Here and in the sequel, \mathbb{R}^{m}_{+}=\{\mathbf{y}=(y_{1},y_{2},\cdots,y_{m})\in\mathbb{R}^{m}:y_{i}>0,\,i=1,2,\cdots,m\}. A function f:G\rightarrow\mathbb{R} is said to be locally Lipschitz in \mathbb{R}^{m}_{+} with respect to \mathbf{w} if, for any fixed domain \Omega\subset\subset\mathbb{R}^{m}_{+}, there exists a constant C depending only on \Omega such that for any vertex u,

|f(u)-\widetilde{f}(u)|\leq C|\mathbf{w}-\widetilde{\mathbf{w}}|,\quad\forall\mathbf{w},\widetilde{\mathbf{w}}\in\Omega, (2.1)

where \widetilde{f} represents the function obtained by replacing \mathbf{w} with \widetilde{\mathbf{w}} in the expression of f.

Throughout this paper, we use the distance between two vertices x and z defined by

d(x,z)=\inf_{\gamma}\sum_{\tau\in\gamma}w_{\tau}, (2.2)

where the infimum is taken over all paths \gamma connecting x and z. Obviously, the function f=d(x,\cdot) is considered not only as a vector (d(x,z_{1}),d(x,z_{2}),\cdots,d(x,z_{n})), but also as a vector-valued function \mathbf{w}\mapsto(d(x,z_{1}),d(x,z_{2}),\cdots,d(x,z_{n})); in particular, each d(x,z_{j}) is a function of \mathbf{w}. Given \mathbf{w} and \widetilde{\mathbf{w}} in \mathbb{R}^{m}_{+}, let d and \widetilde{d} be the two distance functions determined by \mathbf{w} and \widetilde{\mathbf{w}} respectively. Then, by ([20], Lemma 2.1), for any two fixed vertices x and z,

|d(x,z)-\widetilde{d}(x,z)|\leq\sqrt{m}\,|\mathbf{w}-\widetilde{\mathbf{w}}|. (2.3)

Thus the distance function d is Lipschitz in \mathbf{w}.
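For illustration, the distance (2.2) can be computed by Dijkstra's algorithm. Below is a minimal stdlib-only Python sketch; the adjacency-dictionary encoding of the weighted graph and the function name are our own conventions, not notation from the paper.

```python
import heapq

def graph_distance(adj, x):
    """Single-source shortest-path distances d(x, .) of (2.2),
    where adj[u] is a dict {v: w_uv} of positive edge weights."""
    dist = {x: 0.0}
    heap = [(0.0, x)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in adj[u].items():
            dv = du + w
            if dv < dist.get(v, float("inf")):
                dist[v] = dv
                heapq.heappush(heap, (dv, v))
    return dist
```

Since all weights are positive, Dijkstra's algorithm realizes the infimum in (2.2) exactly.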

We now define a set of functions

\mathscr{M}=\left\{\mu:V\times V\rightarrow[0,1]:\sum_{z\in V}\mu(x,z)=1,\,\forall x\in V\right\}. (2.4)

Note that for any fixed x\in V, each \mu(x,\cdot) is a probability measure. In practical problems, probability measures involving the weights \mathbf{w} are more meaningful. For example, given any number \alpha\in[0,1], the \alpha-lazy one-step random walk reads as

\mu_{x}^{\alpha}(z)=\left\{\begin{array}[]{lll}\alpha&{\rm if}&z=x\\ (1-\alpha)\frac{w_{xz}}{\sum_{y\sim x}w_{xy}}&{\rm if}&z\sim x\\ 0&&{\rm otherwise}.\end{array}\right. (2.5)

Obviously \mu(x,\cdot)=\mu_{x}^{\alpha}(\cdot) belongs to \mathscr{M}. Denote the one-step neighborhood of x by N_{x}=\{y:y\sim x\}, and the two-step neighborhood of x by N_{2,x}=\{z\not=x:z\sim y\ {\rm for\ some}\ y\in N_{x}\}. Similarly, an \alpha-lazy two-step random walk is represented by

\mu_{2,x}^{\alpha}(z)=\left\{\begin{array}[]{lll}\alpha&{\rm if}&z=x\\ (1-\alpha)\alpha\frac{w_{xz}}{\sum_{y\in N_{x}}w_{xy}}&{\rm if}&z\in N_{x}\\ (1-\alpha)^{2}\sum_{y\in N_{x}}\frac{w_{xy}}{\sum_{u\in N_{x}}w_{xu}}\frac{w_{yz}}{\sum_{v\in N_{2,x}\setminus N_{x}}w_{yv}}&{\rm if}&z\in N_{2,x}\setminus N_{x}\\ 0&{\rm if}&z\not\in\{x\}\cup N_{x}\cup N_{2,x}.\end{array}\right. (2.6)

One can easily check that \mu(x,\cdot)=\mu_{2,x}^{\alpha}(\cdot)\in\mathscr{M} for all x\in V and all \alpha\in[0,1]. Clearly, \mu_{x}^{\alpha} and \mu_{2,x}^{\alpha} are also functions of \mathbf{w}, and both of them are locally Lipschitz in \mathbb{R}^{m}_{+} with respect to \mathbf{w}.
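As an illustration, the measures (2.5) and (2.6) can be generated directly from a weight vector. The Python sketch below uses the adjacency-dictionary encoding as our own convention; for the two-step walk we assume every neighbour y of x has at least one neighbour in N_{2,x} minus N_{x}, so that the inner denominator of (2.6) does not vanish.

```python
def one_step_walk(adj, x, alpha):
    """alpha-lazy one-step random walk mu_x^alpha of (2.5): mass alpha stays
    at x, mass (1-alpha) is spread over neighbours proportionally to weight."""
    total = sum(adj[x].values())
    mu = {x: alpha}
    for z, w in adj[x].items():
        mu[z] = (1.0 - alpha) * w / total
    return mu

def two_step_walk(adj, x, alpha):
    """alpha-lazy two-step random walk mu_{2,x}^alpha of (2.6).
    Assumes each neighbour y of x has a neighbour in N_{2,x} minus N_x."""
    Nx = set(adj[x])
    N2x = {z for y in Nx for z in adj[y] if z != x}
    outer = N2x - Nx  # the set N_{2,x} minus N_x
    deg_x = sum(adj[x].values())
    mu = {x: alpha}
    for z in Nx:
        mu[z] = (1.0 - alpha) * alpha * adj[x][z] / deg_x
    for y in Nx:
        denom = sum(w for v, w in adj[y].items() if v in outer)
        for z, w in adj[y].items():
            if z in outer:
                mu[z] = mu.get(z, 0.0) + \
                    (1.0 - alpha) ** 2 * (adj[x][y] / deg_x) * (w / denom)
    return mu
```

In both cases the returned masses sum to 1, in agreement with the membership in \mathscr{M} checked above.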

Let \mu_{1} and \mu_{2} be two probability measures. A coupling between \mu_{1} and \mu_{2} is defined as a map A:V\times V\rightarrow[0,1] satisfying, for all x,y\in V,

\sum_{u\in V}A(x,u)=\mu_{1}(x),\quad\sum_{u\in V}A(u,y)=\mu_{2}(y).

The Wasserstein distance between \mu_{1} and \mu_{2} reads

W(\mu_{1},\mu_{2})=\inf_{A}\sum_{u,v\in V}A(u,v)d(u,v),

where the infimum is taken over all couplings between μ1\mu_{1} and μ2\mu_{2}.
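Computing W in general requires solving a linear program over couplings. In the special case where G is a tree, however, the optimal coupling must move the net mass imbalance of each subtree across the single edge joining it to the rest of the graph, which yields a closed form: W is the sum over edges of the weight times the absolute subtree imbalance. The following sketch implements this tree formula (a standard fact about optimal transport on trees, not a construction from this paper); the adjacency-dictionary encoding is our own convention.

```python
def wasserstein_on_tree(adj, mu1, mu2, root):
    """W(mu1, mu2) on a weighted *tree*: for each edge, the optimal coupling
    moves the net imbalance mu1(subtree) - mu2(subtree) across that edge, so
    W = sum over edges of w_e * |mu1(subtree) - mu2(subtree)|."""
    cost = 0.0

    def net(u, parent):
        # returns mu1(subtree rooted at u) - mu2(subtree rooted at u)
        nonlocal cost
        imbalance = mu1.get(u, 0.0) - mu2.get(u, 0.0)
        for v, w in adj[u].items():
            if v != parent:
                child = net(v, u)
                cost += w * abs(child)  # mass crossing the edge uv
                imbalance += child
        return imbalance

    net(root, None)
    return cost
```

On the three-point path with unit weights and \alpha=0.2, this returns (1-2\alpha)\cdot 1=0.6 for the pair \mu_{x}^{\alpha},\mu_{y}^{\alpha}, matching the value computed in Example 2 of Section 5.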

Assuming that \mu\in\mathscr{M}, we consider an evolution of the weights \mathbf{w}=(w_{e_{1}},w_{e_{2}},\cdots,w_{e_{m}}) according to

\left\{\begin{array}[]{lll}w_{e_{i}}^{\prime}(t)=W(\mu(x_{i},\cdot),\mu(y_{i},\cdot))-d(x_{i},y_{i})\\ e_{i}=x_{i}y_{i}\in E\\ w_{e_{i}}(0)=w_{0,i},\,\,i=1,2,\cdots,m.\end{array}\right. (2.7)

Our first result reads as follows.

Theorem 2.1.

Let G=(V,E,\mathbf{w}_{0}) be a connected weighted finite graph, where V is the vertex set, E=\{e_{1},e_{2},\cdots,e_{m}\} is the edge set, and \mathbf{w}_{0}=(w_{0,1},w_{0,2},\cdots,w_{0,m})\in\mathbb{R}^{m}_{+} is an arbitrary weight on E. If \mu\in\mathscr{M} satisfies that for each vertex x\in V, \mu(x,\cdot) is locally Lipschitz with respect to \mathbf{w}\in\mathbb{R}^{m}_{+}, then the flow (2.7) has a unique solution \mathbf{w}(t)=(w_{e_{1}}(t),w_{e_{2}}(t),\cdots,w_{e_{m}}(t)) for t\in[0,+\infty).

Analogous to [20], we consider a quasi-normalized flow

\left\{\begin{array}[]{lll}w_{e_{i}}^{\prime}(t)=W\left(\mu(x_{i},\cdot),\mu(y_{i},\cdot)\right)-d(x_{i},y_{i})-\frac{\sum_{j=1}^{m}\left(W(\mu(x_{j},\cdot),\mu(y_{j},\cdot))-d(x_{j},y_{j})\right)}{\sum_{j=1}^{m}w_{e_{j}}}d(x_{i},y_{i})\\ e_{i}=x_{i}y_{i}\in E\\ w_{e_{i}}(0)=w_{0,i},\,\,i=1,2,\cdots,m.\end{array}\right. (2.8)
Theorem 2.2.

Under the same assumptions as in Theorem 2.1, the flow (2.8) has a unique solution \mathbf{w}(t)=(w_{e_{1}}(t),w_{e_{2}}(t),\cdots,w_{e_{m}}(t)) for t\in[0,+\infty).

As mentioned above, the \alpha-lazy random walks \mu_{x}^{\alpha}(y) and \mu_{2,x}^{\alpha}(y) are locally Lipschitz in \mathbf{w}\in\mathbb{R}^{m}_{+}. As a consequence, both \mu(x,y)=\mu_{x}^{\alpha}(y) and \nu(x,y)=\mu_{2,x}^{\alpha}(y) are appropriate candidates for \mu(\cdot,\cdot) satisfying the assumptions of Theorems 2.1 and 2.2. The proofs of Theorems 2.1 and 2.2 both proceed along the following lines: consider the differential system

\left\{\begin{array}[]{ll}\mathbf{w}^{\prime}(t)=\mathbf{f}(\mathbf{w}(t))\\ \mathbf{w}(0)=\mathbf{w}_{0},\end{array}\right.

where {\bf f}(\mathbf{w}(t))=(f_{1}(\mathbf{w}(t)),f_{2}(\mathbf{w}(t)),\cdots,f_{m}(\mathbf{w}(t))) represents the right hand side of the system (2.7) or (2.8). We first prove the local Lipschitz continuity of \mathbf{f}. Together with the ODE theory ([30], Chapter 6), this implies the local existence and uniqueness of solutions. Next, we prove that there exists a constant C such that

-Cw_{i}\leq f_{i}(\mathbf{w})\leq C\sum_{j=1}^{m}w_{j},\quad\forall i=1,2,\cdots,m.

This leads to long time existence of solutions. More details are left to Section 4.

Addressing the convergence of solutions to systems (2.7) or (2.8) poses a significant challenge, and we do not know how to obtain general convergence results except in a few special cases involving particular graphs (see Section 5 below). Even so, the global existence of solutions is sufficient for practical applications, such as community detection problems, which will be discussed in Section 6.

3 A key lemma

In this section, we prove a key lemma to be used later. Using the same notation as in Section 2, we have the following:

Lemma 3.1.

If \mu\in\mathscr{M} and, for each vertex x, the probability measure \mu(x,\cdot) is locally Lipschitz in \mathbf{w}=(w_{e_{1}},w_{e_{2}},\cdots,w_{e_{m}})\in\mathbb{R}^{m}_{+}, then the Wasserstein distance W\left(\mu(x,\cdot),\mu(y,\cdot)\right) is also locally Lipschitz in \mathbf{w}\in\mathbb{R}^{m}_{+}.

Proof.

Let x, y be two vertices and let \mathbf{w}=(w_{e_{1}},\cdots,w_{e_{m}}), \widetilde{\mathbf{w}}=(\widetilde{w}_{e_{1}},\cdots,\widetilde{w}_{e_{m}}) be two weights in \mathbb{R}^{m}_{+}. Assume that the Wasserstein distances W(\mu(x,\cdot),\mu(y,\cdot)) and W(\widetilde{\mu}(x,\cdot),\widetilde{\mu}(y,\cdot)) are determined by \mathbf{w} and \widetilde{\mathbf{w}} respectively, as are \mu and \widetilde{\mu}. With no loss of generality, we assume there exist constants \Lambda>0 and \delta>0 such that for each i=1,2,\cdots,m,

\Lambda^{-1}\leq w_{e_{i}}\leq\Lambda,\,\,\Lambda^{-1}\leq\widetilde{w}_{e_{i}}\leq\Lambda,\,\,|w_{e_{i}}-\widetilde{w}_{e_{i}}|\leq\delta. (3.1)

By the Kantorovich-Rubinstein duality formula,

W\left(\mu(x,\cdot),\mu(y,\cdot)\right)=\sup_{\psi\in{\rm Lip}\,1}\sum_{u\in V}\psi(u)(\mu(x,u)-\mu(y,u)), (3.2)
W(\widetilde{\mu}(x,\cdot),\widetilde{\mu}(y,\cdot))=\sup_{\psi\in\widetilde{{\rm Lip}}\,1}\sum_{u\in V}\psi(u)(\widetilde{\mu}(x,u)-\widetilde{\mu}(y,u)), (3.3)

where

{\rm Lip}\,1=\left\{f:V\rightarrow\mathbb{R}:|f(u)-f(v)|\leq d(u,v),\,\forall u,v\in V\right\},
\widetilde{{\rm Lip}}\,1=\{f:V\rightarrow\mathbb{R}:|f(u)-f(v)|\leq\widetilde{d}(u,v),\,\forall u,v\in V\},

and the distance functions d and \widetilde{d} are determined by \mathbf{w} and \widetilde{\mathbf{w}} respectively. Now we distinguish two cases to proceed.

Case 1. W(\mu(x,\cdot),\mu(y,\cdot))\geq W(\widetilde{\mu}(x,\cdot),\widetilde{\mu}(y,\cdot)).

In view of (3.2), there exists some f\in{\rm Lip}\,1 such that

\sum_{u\in V}f(u)(\mu(x,u)-\mu(y,u))=\sup_{\psi\in{\rm Lip}\,1}\sum_{u\in V}\psi(u)(\mu(x,u)-\mu(y,u)).

By ([20], Lemma 2.1), we have for all vertices u and v,

|d(u,v)-\widetilde{d}(u,v)|\leq\sqrt{m}\,|\mathbf{w}-\widetilde{\mathbf{w}}|.

Since \widetilde{d}(u,v)\geq\Lambda^{-1} by (3.1), it then follows that

\frac{d(u,v)}{\widetilde{d}(u,v)}=1+\frac{d(u,v)-\widetilde{d}(u,v)}{\widetilde{d}(u,v)}\leq 1+\sqrt{m}\Lambda|\mathbf{w}-\widetilde{\mathbf{w}}|.

Set

\widetilde{f}(u)=\frac{f(u)}{1+\sqrt{m}\Lambda|\mathbf{w}-\widetilde{\mathbf{w}}|},\quad\forall u\in V.

Since

|\widetilde{f}(u)-\widetilde{f}(v)|=\frac{|f(u)-f(v)|}{1+\sqrt{m}\Lambda|\mathbf{w}-\widetilde{\mathbf{w}}|}\leq\widetilde{d}(u,v)

for all u,v\in V, we have \widetilde{f}\in\widetilde{{\rm Lip}}\,1. By our assumption that \mu(x,\cdot) is locally Lipschitz in \mathbb{R}^{m}_{+} with respect to \mathbf{w}, there exists a constant C_{0}>0, depending only on \Lambda and \delta, such that for all \mathbf{w} and \widetilde{\mathbf{w}} satisfying (3.1),

|\mu(x,u)-\widetilde{\mu}(x,u)|\leq C_{0}|\mathbf{w}-\widetilde{\mathbf{w}}|,\quad\forall x,u\in V.

As a consequence, we obtain

W(\mu(x,\cdot),\mu(y,\cdot))-W(\widetilde{\mu}(x,\cdot),\widetilde{\mu}(y,\cdot))
=\sum_{u\in V}f(u)(\mu(x,u)-\mu(y,u))-\sup_{\psi\in\widetilde{{\rm Lip}}\,1}\sum_{u\in V}\psi(u)(\widetilde{\mu}(x,u)-\widetilde{\mu}(y,u))
\leq\sum_{u\in V}f(u)(\mu(x,u)-\mu(y,u))-\sum_{u\in V}\widetilde{f}(u)(\widetilde{\mu}(x,u)-\widetilde{\mu}(y,u))
\leq\sum_{u\in V}|f(u)|(|\mu(x,u)-\widetilde{\mu}(x,u)|+|\mu(y,u)-\widetilde{\mu}(y,u)|)+\sum_{u\in V}|f(u)-\widetilde{f}(u)|(\widetilde{\mu}(x,u)+\widetilde{\mu}(y,u))
\leq 2n(C_{0}+\sqrt{m}\Lambda)\|f\|_{L^{\infty}(V)}|\mathbf{w}-\widetilde{\mathbf{w}}|,

where n is the number of vertices of V.

Case 2. W(\mu(x,\cdot),\mu(y,\cdot))<W(\widetilde{\mu}(x,\cdot),\widetilde{\mu}(y,\cdot)).

By swapping the roles of \mathbf{w} and \widetilde{\mathbf{w}}, we have by the same argument as in Case 1,

W(\widetilde{\mu}(x,\cdot),\widetilde{\mu}(y,\cdot))-W(\mu(x,\cdot),\mu(y,\cdot))\leq 2n(C_{0}+\sqrt{m}\Lambda)\|f\|_{L^{\infty}(V)}|\mathbf{w}-\widetilde{\mathbf{w}}|.

Combining Cases 1 and 2, we complete the proof of the lemma. \hfill\Box

4 Proof of Theorems 2.1 and 2.2

In this section, we prove Theorems 2.1 and 2.2 by using the ODE theory.

Proof of Theorem 2.1. We divide the proof into two parts.

Part 1. Short time existence.

Set w_{i}=w_{e_{i}}, i=1,2,\cdots,m, and let \mathbf{w}_{0}=(w_{0,1},w_{0,2},\cdots,w_{0,m})\in\mathbb{R}^{m}_{+} be any initial weight. The evolution of weights (2.7) is then the ordinary differential system

\left\{\begin{array}[]{ll}\mathbf{w}^{\prime}(t)={\bf f}(\mathbf{w}(t))\\ \mathbf{w}(0)=\mathbf{w}_{0},\end{array}\right. (4.1)

where {\bf w}=(w_{1},w_{2},\cdots,w_{m})\in\mathbb{R}^{m}_{+} and \mathbf{f}=(f_{1},f_{2},\cdots,f_{m}) is the map

{\bf f}:\mathbb{R}^{m}_{+}\rightarrow\mathbb{R}^{m},\quad\mathbf{w}\mapsto(f_{1}(\mathbf{w}),\cdots,f_{m}(\mathbf{w})),

where f_{i}(\mathbf{w})=W(\mu(x_{i},\cdot),\mu(y_{i},\cdot))-d(x_{i},y_{i}) and e_{i}=x_{i}y_{i}, i=1,2,\cdots,m. From Lemma 3.1 and ([20], Lemma 2.1), we know that every f_{i}(\mathbf{w}), i=1,2,\cdots,m, is locally Lipschitz in \mathbb{R}^{m}_{+}, and so is {\bf f}({\bf w}). By the ODE theory ([30], Chapter 6), there exists a constant T>0 such that the ordinary differential system (4.1) has a unique solution \mathbf{w}(t) on [0,T].

Part 2. Long time existence.

According to the conclusion of the first part, we may define

T^{\ast}=\sup\{T>0:(4.1)\ {\rm has\ a\ unique\ solution\ on\ }[0,T]\}.

If T^{\ast}<+\infty, then (4.1) has a unique solution {\bf w}(t) on the time interval [0,T^{\ast}); moreover, according to the ODE theory ([30], Chapter 6), we have either

\liminf_{t\rightarrow T^{\ast}}\phi(t)=0 (4.2)

or

\limsup_{t\rightarrow T^{\ast}}\Phi(t)=+\infty, (4.3)

where \phi(t)=\min\{w_{1}(t),w_{2}(t),\cdots,w_{m}(t)\} and \Phi(t)=\max\{w_{1}(t),w_{2}(t),\cdots,w_{m}(t)\}.

Let e=xy\in E be fixed. Since

W(\mu(x,\cdot),\mu(y,\cdot))-\rho_{e}(t)\geq-\rho_{e}(t)\geq-w_{e}(t), (4.4)

this together with (4.1) implies

w_{e}^{\prime}(t)=W(\mu(x,\cdot),\mu(y,\cdot))-\rho_{e}(t)\geq-w_{e}(t).

Thus we have

w_{e}(t)\geq w_{e}(0)e^{-T^{\ast}},\quad\forall t\in[0,T^{\ast}). (4.5)

Noting also that each coupling A between \mu(x,\cdot) and \mu(y,\cdot) satisfies A(u,v)\in[0,1] and \sum_{u,v\in V}A(u,v)=1, and that d(u,v)\leq\sum_{\tau\in E}w_{\tau} for all vertices u,v, we obtain

W(\mu(x,\cdot),\mu(y,\cdot))=\inf_{B}\sum_{u,v\in V}B(u,v)d(u,v)\leq\sum_{u,v\in V}A(u,v)d(u,v)\leq\left(\sum_{u,v\in V}A(u,v)\right)\left(\sum_{\tau\in E}w_{\tau}\right)=\sum_{\tau\in E}w_{\tau}, (4.6)

where, in the first equality, B ranges over all couplings between the probability measures \mu(x,\cdot) and \mu(y,\cdot). In view of (4.1), it follows that

\frac{d}{dt}\sum_{\tau\in E}w_{\tau}(t)=\sum_{\tau=xy\in E}\left(W(\mu(x,\cdot),\mu(y,\cdot))-d(x,y)\right)\leq\sum_{\tau=xy\in E}W(\mu(x,\cdot),\mu(y,\cdot))\leq m\sum_{\tau\in E}w_{\tau}(t).

Integrating this differential inequality gives

\sum_{\tau\in E}w_{\tau}(t)\leq e^{mT^{\ast}}\sum_{\tau\in E}w_{\tau}(0),\quad\forall t\in[0,T^{\ast}). (4.7)

Combining (4.5) and (4.7), we have

\phi(0)e^{-T^{\ast}}\leq\phi(t)\leq\Phi(t)\leq\sum_{\tau\in E}w_{\tau}(t)\leq e^{mT^{\ast}}\sum_{\tau\in E}w_{\tau}(0)

for all t\in[0,T^{\ast}), which contradicts (4.2) and (4.3). Therefore T^{\ast}=+\infty, and thus (4.1) has a unique solution \mathbf{w}(t) on [0,+\infty). \hfill\Box

Proof of Theorem 2.2.

Given \mathbf{w}=(w_{e_{1}},w_{e_{2}},\cdots,w_{e_{m}})\in\mathbb{R}^{m}_{+}, let {\bf g}:\mathbb{R}^{m}_{+}\rightarrow\mathbb{R}^{m} be the vector-valued function {\bf g}(\mathbf{w})=(g_{1}(\mathbf{w}),g_{2}(\mathbf{w}),\cdots,g_{m}(\mathbf{w})), where

g_{i}(\mathbf{w})=W\left(\mu(x_{i},\cdot),\mu(y_{i},\cdot)\right)-\rho_{e_{i}}-\frac{\sum_{j=1}^{m}\left(W(\mu(x_{j},\cdot),\mu(y_{j},\cdot))-\rho_{e_{j}}\right)}{\sum_{j=1}^{m}w_{e_{j}}}\rho_{e_{i}}

and e_{i}=x_{i}y_{i}. From Lemma 3.1 and ([20], Lemma 2.1), we conclude that {\bf g} is locally Lipschitz in \mathbb{R}^{m}_{+}, namely if \Omega\subset\mathbb{R}^{m}_{+} satisfies \overline{\Omega}\subset\mathbb{R}^{m}_{+}, then there exists a constant C depending only on \Omega such that

|{\bf g}(\mathbf{w})-{\bf g}(\widetilde{\mathbf{w}})|\leq C|\mathbf{w}-\widetilde{\mathbf{w}}|,\quad\forall\mathbf{w},\widetilde{\mathbf{w}}\in\Omega.

Hence, according to the ODE theory ([30], Chapter 6), there exists some T>0 such that the flow (2.8) has a unique solution \mathbf{w}(t) on [0,T]. Let

T^{\ast}=\sup\{T>0:(2.8)\ {\rm has\ a\ unique\ solution\ on\ }[0,T]\}.

If T^{\ast}<+\infty, then (2.8) has a unique solution {\bf w}(t)=(w_{e_{1}}(t),w_{e_{2}}(t),\cdots,w_{e_{m}}(t)) for all t\in[0,T^{\ast}). However, the ODE theory ([30], Chapter 6) implies either

\liminf_{t\rightarrow T^{\ast}}\min\{w_{e_{1}}(t),w_{e_{2}}(t),\cdots,w_{e_{m}}(t)\}=0 (4.8)

or

\limsup_{t\rightarrow T^{\ast}}\max\{w_{e_{1}}(t),w_{e_{2}}(t),\cdots,w_{e_{m}}(t)\}=+\infty. (4.9)

In view of (4.4) and (4.6), we obtain for each i,

-(m+1)w_{e_{i}}\leq g_{i}(\mathbf{w})\leq w_{e_{i}}+\sum_{j=1}^{m}w_{e_{j}}.

This together with (2.8) gives for each i,

-(m+1)w_{e_{i}}(t)\leq w_{e_{i}}^{\prime}(t)\leq w_{e_{i}}(t)+\sum_{j=1}^{m}w_{e_{j}}(t) (4.10)

and

\frac{d}{dt}\sum_{i=1}^{m}w_{e_{i}}\leq(m+1)\sum_{i=1}^{m}w_{e_{i}}. (4.11)

From the left side of (4.10), we conclude that w_{e_{i}}(t)\geq w_{e_{i}}(0)e^{-(m+1)T^{\ast}} for all t\in[0,T^{\ast}) and each i, contradicting (4.8). From (4.11), we conclude that

\sum_{\tau\in E}w_{\tau}(t)\leq e^{(m+1)T^{\ast}}\sum_{\tau\in E}w_{\tau}(0)

for all t\in[0,T^{\ast}), contradicting (4.9). Therefore T^{\ast}=+\infty, as desired. \hfill\Box

5 Examples

According to Theorems 2.1 and 2.2, both evolutions of weights (2.7) and (2.8) have unique global solutions. However, in general, we cannot ensure that these solutions converge. In this section, we construct examples of convergent solutions to (2.7) for special graphs and constant initial weights, leaving the case of (2.8) to interested readers.

Example 1. Let G=(V,E,\mathbf{w}_{0}) be the line segment illustrated in Figure 1, where V=\{x,y\}, E=\{e=xy\}, and \mathbf{w}_{0}=w_{0} is the initial weight of the edge e. For any \alpha\in[0,1], the \alpha-lazy one-step random walks are written as

\mu_{x}^{\alpha}(u)=\left\{\begin{array}[]{lll}\alpha&{\rm if}&u=x\\ 1-\alpha&{\rm if}&u=y,\end{array}\right.\quad\mu_{y}^{\alpha}(u)=\left\{\begin{array}[]{lll}1-\alpha&{\rm if}&u=x\\ \alpha&{\rm if}&u=y.\end{array}\right.
Figure 1: A line segment

Write \mu(x,\cdot)=\mu_{x}^{\alpha}(\cdot) and \mu(y,\cdot)=\mu_{y}^{\alpha}(\cdot). For \alpha\in[0,1/2], one calculates the Wasserstein distance between \mu_{x}^{\alpha} and \mu_{y}^{\alpha} as W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})=(1-2\alpha)w_{e}, and the graph distance d(x,y)=w_{e}. As a consequence, the flow (2.7) becomes

w_{e}^{\prime}(t)=W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})-d(x,y)=-2\alpha w_{e}(t).

Thus w_{e}(t)=w_{0}e^{-2\alpha t} for all t\in[0,+\infty). Similarly, one derives that for \alpha\in[1/2,1], (2.7) has the unique solution w_{e}(t)=w_{0}e^{2(\alpha-1)t} for all t\in[0,+\infty). In both cases, along the flow (2.7), w_{e}(t) converges to zero exponentially.
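This explicit solution admits a quick numerical sanity check. The forward-Euler sketch below (the step size h and the function name are our own choices, not from the paper) integrates w' = W(\mu_x^\alpha,\mu_y^\alpha) - d(x,y) on the single-edge graph for \alpha\leq 1/2 and can be compared with w_{0}e^{-2\alpha t}.

```python
import math

def euler_flow_single_edge(w0, alpha, t_end, h=1e-4):
    """Forward-Euler integration of (2.7) on the graph of Example 1.
    For alpha <= 1/2 we have W = (1 - 2*alpha)*w and d(x,y) = w,
    so the exact flow is w' = -2*alpha*w."""
    w = w0
    steps = int(round(t_end / h))
    for _ in range(steps):
        W = (1.0 - 2.0 * alpha) * w  # Wasserstein distance, alpha <= 1/2
        w += h * (W - w)             # w' = W - d(x,y), with d(x,y) = w
    return w
```

For w0 = 1, alpha = 0.25 and t_end = 1, the result agrees with the exact value e^{-1/2} up to the first-order discretization error.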

Example 2. Let G=(V,E,\mathbf{w}_{0}) be the three-point path in Figure 2, where V=\{x,y,z\}, E=\{xy,yz\}, and \mathbf{w}_{0}=(w_{0},w_{0}) is the initial weight. Let \alpha\in[0,1], \mu(x,\cdot)=\mu_{x}^{\alpha}(\cdot), \mu(y,\cdot)=\mu_{y}^{\alpha}(\cdot) and \mu(z,\cdot)=\mu_{z}^{\alpha}(\cdot), where

\mu_{x}^{\alpha}(u)=\left\{\begin{array}[]{lll}\alpha&{\rm if}&u=x\\ 1-\alpha&{\rm if}&u=y,\end{array}\right.\quad\mu_{z}^{\alpha}(u)=\left\{\begin{array}[]{lll}\alpha&{\rm if}&u=z\\ 1-\alpha&{\rm if}&u=y,\end{array}\right.

and

\mu_{y}^{\alpha}(u)=\left\{\begin{array}[]{lll}(1-\alpha)\frac{w_{xy}}{w_{xy}+w_{yz}}&{\rm if}&u=x\\ \alpha&{\rm if}&u=y\\ (1-\alpha)\frac{w_{yz}}{w_{xy}+w_{yz}}&{\rm if}&u=z.\end{array}\right.
Figure 2: A path graph G of length 2

Assuming that w_{xy}=w_{yz}, we have \mu_{y}^{\alpha}(u)=(1-\alpha)/2 for u\in\{x,z\} and \mu_{y}^{\alpha}(y)=\alpha, and the Wasserstein distances W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})=W(\mu_{y}^{\alpha},\mu_{z}^{\alpha})=(1-2\alpha)w_{xy} for \alpha\in[0,1/3], W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})=W(\mu_{y}^{\alpha},\mu_{z}^{\alpha})=\alpha w_{xy} for \alpha\in[1/3,1], while d(x,y)=w_{xy} and d(y,z)=w_{yz}. Obviously, the initial value problem

\left\{\begin{array}{l}w_{xy}^{\prime}(t)=W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})-d(x,y)\\ w_{yz}^{\prime}(t)=W(\mu_{y}^{\alpha},\mu_{z}^{\alpha})-d(y,z)\\ w_{xy}(t)=w_{yz}(t)\\ w_{xy}(0)=w_{0}=w_{yz}(0)\end{array}\right.

has a solution

(wxy(t),wyz(t))={(w0e2αt,w0e2αt)forα[0,1/3](w0e(α1)t,w0e(α1)t)forα[1/3,1].(w_{xy}(t),w_{yz}(t))=\left\{\begin{array}[]{lll}(w_{0}e^{-2\alpha t},w_{0}e^{-2\alpha t})&{\rm for}&\alpha\in[0,1/3]\\[5.16663pt] (w_{0}e^{(\alpha-1)t},w_{0}e^{(\alpha-1)t})&{\rm for}&\alpha\in[1/3,1].\end{array}\right. (5.1)

From Theorem 2.1, we know that (2.7) has a unique global solution. Hence (5.1) is the solution of (2.7) with the initial weights (wxy(0),wyz(0))=(w0,w0)(w_{xy}(0),w_{yz}(0))=(w_{0},w_{0}).
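The Wasserstein distances quoted above can be checked by computer. Since the two-edge path embeds isometrically in the real line ($x$, $y$, $z$ at positions $0$, $w$, $2w$), the one-dimensional Wasserstein distance computed by SciPy reproduces the graph optimal-transport cost; the helper below is our own sketch, not part of the paper's code:

```python
from scipy.stats import wasserstein_distance

def path_wasserstein(alpha, w=1.0):
    """W(mu_x^alpha, mu_y^alpha) on the path x - y - z with both weights w."""
    # mu_x: alpha at x, 1-alpha at y; mu_y: (1-alpha)/2 at x and z, alpha at y.
    positions_x, mass_x = [0.0, w], [alpha, 1 - alpha]
    positions_y, mass_y = [0.0, w, 2 * w], [(1 - alpha) / 2, alpha, (1 - alpha) / 2]
    return wasserstein_distance(positions_x, positions_y, mass_x, mass_y)

# Expected: (1 - 2*alpha)*w for alpha in [0, 1/3], and alpha*w for alpha in [1/3, 1].
```

For instance, `path_wasserstein(0.2)` returns $0.6=(1-2\cdot 0.2)\cdot 1$, matching the first regime.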

Example 3. Let $G=(V,E,\mathbf{w}_{0})$ be the triangle in Figure 3, where $V=\{x,y,z\}$, $E=\{xy,yz,zx\}$, $\mathbf{w}_{0}=(w_{0},w_{0},w_{0})$. Assume that $\mathbf{w}=(w,w,w)$ is a general weight on $G$, under which we have

μxα(u)={α,u=x1α2,u=y,z,μyα(u)={1α2,u=x,zα,u=y.\mu_{x}^{\alpha}(u)=\left\{\begin{array}[]{lll}\alpha,&u=x\\ \frac{1-\alpha}{2},&u=y,\,z,\end{array}\right.\quad\mu_{y}^{\alpha}(u)=\left\{\begin{array}[]{lll}\frac{1-\alpha}{2},&u=x,\,z\\ \alpha,&u=y.\end{array}\right.
Figure 3: A triangle

This leads to

W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})=\left\{\begin{array}{ll}\frac{1-3\alpha}{2}w,&\alpha\in[0,1/3]\\[5.16663pt] \frac{3\alpha-1}{2}w,&\alpha\in[1/3,1].\end{array}\right.

For α[0,1/3]\alpha\in[0,1/3], the initial value problem

{w(t)=3α+12w(t)w(0)=w0\left\{\begin{array}[]{lll}w^{\prime}(t)=-\frac{3\alpha+1}{2}w(t)\\ w(0)=w_{0}\end{array}\right.

has a solution w(t)=w0e3α+12tw(t)=w_{0}e^{-\frac{3\alpha+1}{2}t}. For α[1/3,1]\alpha\in[1/3,1], the initial value problem

{w(t)=3(α1)2w(t)w(0)=w0\left\{\begin{array}[]{lll}w^{\prime}(t)=\frac{3(\alpha-1)}{2}w(t)\\ w(0)=w_{0}\end{array}\right.

has a solution w(t)=w0e3(α1)2tw(t)=w_{0}e^{\frac{3(\alpha-1)}{2}t}. Let μ(x,)=μxα()\mu(x,\cdot)=\mu_{x}^{\alpha}(\cdot), μ(y,)=μyα()\mu(y,\cdot)=\mu_{y}^{\alpha}(\cdot) and μ(z,)=μzα()\mu(z,\cdot)=\mu_{z}^{\alpha}(\cdot). Then the unique global solution of the flow (2.7) is (w(t),w(t),w(t))(w(t),w(t),w(t)).
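As a numerical sanity check of the $\alpha\in[0,1/3]$ case (SciPy is our choice of tooling here), integrating the triangle flow reproduces the closed form $w_{0}e^{-\frac{3\alpha+1}{2}t}$:

```python
from math import exp
from scipy.integrate import solve_ivp

alpha, w0, t_end = 0.2, 1.0, 1.0  # alpha in [0, 1/3]
# Triangle flow in this regime: w'(t) = -(3*alpha + 1)/2 * w(t).
sol = solve_ivp(lambda t, y: -(3 * alpha + 1) / 2 * y,
                (0.0, t_end), [w0], rtol=1e-10, atol=1e-12)
w_numeric = sol.y[0, -1]
w_exact = w0 * exp(-(3 * alpha + 1) / 2 * t_end)  # closed-form value at t_end
```

The same check works in the other regime with the rate $\frac{3(\alpha-1)}{2}$.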

Example 4. Let $G=(V,E,\mathbf{w}_{0})$ be the square in Figure 4, $V=\{x,y,z,u\}$, $E=\{xy,yz,zu,ux\}$, $\mathbf{w}_{0}=(w_{0},w_{0},w_{0},w_{0})$. For weights $\mathbf{w}=(w,w,w,w)$, we have for $\alpha\in[0,1]$,

\mu_{x}^{\alpha}(v)=\left\{\begin{array}{ll}\alpha,&v=x\\ \frac{1-\alpha}{2},&v=y,\,u\\ 0,&v=z,\end{array}\right.\quad\mu_{y}^{\alpha}(v)=\left\{\begin{array}{ll}\alpha,&v=y\\ \frac{1-\alpha}{2},&v=x,\,z\\ 0,&v=u\end{array}\right.
Figure 4: A square

and

W(μxα,μyα)={(12α)w,α[0,1/3]αw,α[1/3,1].W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})=\left\{\begin{array}[]{lll}(1-2\alpha)w,&\alpha\in[0,1/3]\\ \alpha w,&\alpha\in[1/3,1].\end{array}\right.

This together with the differential equation

w(t)=wxy(t)=W(μxα,μyα)d(x,y)w^{\prime}(t)=w_{xy}^{\prime}(t)=W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})-d(x,y)

gives

w(t)={w0e2αt,α[0,1/3]w0e(α1)t,α[1/3,1].w(t)=\left\{\begin{array}[]{lll}w_{0}e^{-2\alpha t},&\alpha\in[0,1/3]\\ w_{0}e^{(\alpha-1)t},&\alpha\in[1/3,1].\end{array}\right. (5.2)

Set μ(v,)=μv()\mu(v,\cdot)=\mu_{v}(\cdot) for all vVv\in V. The evolution problem (2.7) has a unique global solution 𝐰(t)=(w(t),w(t),w(t),w(t))\mathbf{w}(t)=(w(t),w(t),w(t),w(t)) with w(t)w(t) given as in (5.2).

Example 5. Let $G=(V,E,\mathbf{w}_{0})$ be the complete graph $K_{4}$ with 4 vertices in Figure 5, $V=\{x,y,z,u\}$, $E=\{xz,xy,xu,zy,zu,yu\}$, $\mathbf{w}_{0}=(w_{0},\cdots,w_{0})$ with one component for each of the six edges. For weights $\mathbf{w}=(w,\cdots,w)$, we have for $\alpha\in[0,1]$,

μxα(v)={α,v=x1α3,v=y,z,u,μyα(v)={α,v=y1α3,v=x,z,u.\mu_{x}^{\alpha}(v)=\left\{\begin{array}[]{lll}\alpha,&v=x\\ \frac{1-\alpha}{3},&v=y,\,z,\,u,\end{array}\right.\mu_{y}^{\alpha}(v)=\left\{\begin{array}[]{lll}\alpha,&v=y\\ \frac{1-\alpha}{3},&v=x,\,z,\,u.\end{array}\right.
Figure 5: The complete graph $K_{4}$ with 4 vertices

Moreover, we calculate

W(μxα,μyα)d(x,y)={24α3w,α[0,1/4]4(α1)3w,α[1/4,1].W(\mu_{x}^{\alpha},\mu_{y}^{\alpha})-d(x,y)=\left\{\begin{array}[]{lll}\frac{-2-4\alpha}{3}w,&\alpha\in[0,1/4]\\ \frac{4(\alpha-1)}{3}w,&\alpha\in[1/4,1].\end{array}\right.

Setting $\mu(v,\cdot)=\mu_{v}(\cdot)$ for all $v\in V$, we conclude that the evolution problem (2.7) has a unique global solution $\mathbf{w}(t)=(w(t),\cdots,w(t))$, one component for each of the six edges, where

w(t)={w0e2+4α3t,α[0,1/4]w0e4(α1)3t,α[1/4,1].w(t)=\left\{\begin{array}[]{lll}w_{0}e^{-\frac{2+4\alpha}{3}t},&\alpha\in[0,1/4]\\ w_{0}e^{\frac{4(\alpha-1)}{3}t},&\alpha\in[1/4,1].\end{array}\right.

Example 6. Let G=(V,E,𝐰0)G=(V,E,\mathbf{w}_{0}) be a star in Figure 6, V={x,z1,z2,,zn}V=\{x,z_{1},z_{2},\cdots,z_{n}\}, E={xz1,xz2,,xzn}E=\{xz_{1},xz_{2},\cdots,xz_{n}\}, 𝐰0=(w0,w0,,w0)\mathbf{w}_{0}=(w_{0},w_{0},\cdots,w_{0}). For weights 𝐰=(w,w,,w)\mathbf{w}=(w,w,\cdots,w), we have for α[0,1]\alpha\in[0,1],

\mu_{x}^{\alpha}(v)=\left\{\begin{array}{ll}\alpha,&v=x\\ \frac{1-\alpha}{n},&v\in V\setminus\{x\},\end{array}\right.\quad\mu_{z_{i}}^{\alpha}(v)=\left\{\begin{array}{ll}\alpha,&v=z_{i}\\ 1-\alpha,&v=x\\ 0,&v\in V\setminus\{x,z_{i}\}.\end{array}\right.
Figure 6: A star with $n+1$ vertices

We also have

W(μxα,μziα)={(12α)w,α[0,1/(n+1)]n+2α2nw,α[1/(n+1),1],W(\mu_{x}^{\alpha},\mu_{z_{i}}^{\alpha})=\left\{\begin{array}[]{lll}(1-2\alpha)w,&\alpha\in[0,1/(n+1)]\\ \frac{n+2\alpha-2}{n}w,&\alpha\in[1/(n+1),1],\end{array}\right.

and thus

W(μxα,μziα)d(x,zi)={2αw,α[0,1/(n+1)]2(α1)nw,α[1/(n+1),1],W(\mu_{x}^{\alpha},\mu_{z_{i}}^{\alpha})-d(x,z_{i})=\left\{\begin{array}[]{lll}-2\alpha w,&\alpha\in[0,1/(n+1)]\\ \frac{2(\alpha-1)}{n}w,&\alpha\in[1/(n+1),1],\end{array}\right.

$i=1,2,\cdots,n$. Setting $\mu(v,\cdot)=\mu_{v}(\cdot)$ for all $v\in V$, we derive that the flow (2.7) has a unique global solution $\mathbf{w}(t)=(w(t),\cdots,w(t))$ with $n$ components, where

w(t)=\left\{\begin{array}{ll}w_{0}e^{-2\alpha t},&\alpha\in[0,1/(n+1)]\\[5.16663pt] w_{0}e^{\frac{2(\alpha-1)}{n}t},&\alpha\in[1/(n+1),1].\end{array}\right.
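The two regimes for the star, with threshold $\alpha=1/(n+1)$, can be verified by solving the optimal-transport linear program directly; the following sketch (the function name and setup are ours, assuming NumPy and SciPy are available) computes $W(\mu_{x}^{\alpha},\mu_{z_{1}}^{\alpha})$ with `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def star_wasserstein(n, alpha, w=1.0):
    """W(mu_x^alpha, mu_{z_1}^alpha) on a star with n leaves, all weights w."""
    m = n + 1  # vertex 0 is the hub x, vertices 1..n are the leaves z_i
    d = np.full((m, m), 2 * w)   # leaf-to-leaf distance is 2w
    d[0, :] = d[:, 0] = w        # hub-to-leaf distance is w
    np.fill_diagonal(d, 0.0)
    mu_x = np.array([alpha] + [(1 - alpha) / n] * n)
    mu_z = np.array([1 - alpha, alpha] + [0.0] * (n - 1))
    # The transport plan pi is an m x m matrix (flattened row-major);
    # its marginals must equal mu_x (rows) and mu_z (columns).
    A_eq = np.zeros((2 * m, m * m))
    for i in range(m):
        A_eq[i, i * m:(i + 1) * m] = 1.0  # row sums = mu_x
        A_eq[m + i, i::m] = 1.0           # column sums = mu_z
    b_eq = np.concatenate([mu_x, mu_z])
    res = linprog(d.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

# Below the threshold 1/(n+1): W = (1 - 2*alpha)*w;
# above it: W = (n + 2*alpha - 2)/n * w.
```

For $n=4$ the threshold is $1/5$: `star_wasserstein(4, 0.1)` gives $0.8=(1-0.2)w$, while `star_wasserstein(4, 0.5)` gives $0.75=(4+1-2)w/4$.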

6 Experimental results

In this section, we first discretize the systems (2.7) and (2.8), and then apply the corresponding algorithms to community detection. Next, we use standard criteria and real-world datasets to evaluate the accuracy of our method in community detection. We then examine how some key parameters influence the accuracy of the algorithm. Finally, our approach is compared with existing methods.

6.1 Discretization and algorithm design

If we choose $\mu(x,z)$ to be the $\alpha$-lazy one-step random walk $\mu_{x}^{\alpha}(z)$ defined in (2.5), then the system (2.7) is a modification of (1.1), namely

{wei(t)=κeiα(t)ρei(t)ei=xiyi,κeiα(t)=1W(μxiα,μyiα)ρei(t)wei(0)=w0,i,i=1,2,,m.\left\{\begin{array}[]{lll}w_{e_{i}}^{\prime}(t)=-\kappa_{e_{i}}^{\alpha}(t)\rho_{e_{i}}(t)\\[5.16663pt] e_{i}=x_{i}y_{i},\,\kappa_{e_{i}}^{\alpha}(t)=1-\frac{W(\mu_{x_{i}}^{\alpha},\,\mu_{y_{i}}^{\alpha})}{\rho_{e_{i}}(t)}\\[5.16663pt] w_{e_{i}}(0)=w_{0,i},\,\,i=1,2,\cdots,m.\end{array}\right. (6.1)

Meanwhile, the system (2.8) becomes a quasi-normalized version of (6.1), which reads

\left\{\begin{array}{l}w_{e_{i}}^{\prime}(t)=-\kappa_{e_{i}}^{\alpha}(t)\rho_{e_{i}}(t)+\frac{\sum_{j=1}^{m}\kappa_{e_{j}}^{\alpha}(t)\rho_{e_{j}}(t)}{\sum_{j=1}^{m}w_{e_{j}}(t)}\rho_{e_{i}}(t)\\[5.16663pt] e_{i}=x_{i}y_{i},\ \kappa_{e_{i}}^{\alpha}(t)=1-\frac{W(\mu_{x_{i}}^{\alpha},\,\mu_{y_{i}}^{\alpha})}{\rho_{e_{i}}(t)}\\[5.16663pt] w_{e_{i}}(0)=w_{0,i},\ i=1,2,\cdots,m.\end{array}\right. (6.2)

If we choose μ(x,z)\mu(x,z) as an α\alpha-lazy two-step random walk given by (2.6), then we have special cases of (2.7) and (2.8) as follows:

{wei(t)=W(μ2,xiα,μ2,yiα)d(xi,yi)ei=xiyiEwei(0)=w0,i,i=1,2,,m,\left\{\begin{array}[]{lll}w_{e_{i}}^{\prime}(t)=W(\mu_{2,x_{i}}^{\alpha},\mu_{2,y_{i}}^{\alpha})-d(x_{i},y_{i})\\[5.16663pt] e_{i}=x_{i}y_{i}\in E\\[5.16663pt] w_{e_{i}}(0)=w_{0,i},\,\,i=1,2,\cdots,m,\end{array}\right. (6.3)
{wei(t)=W(μ2,xiα,μ2,yiα)d(xi,yi)j=1m(W(μ2,xjα,μ2,yjα)d(xj,yj))j=1mwejd(xi,yi)ei=xiyiEwei(0)=w0,i,i=1,2,,m.\left\{\begin{array}[]{lll}w_{e_{i}}^{\prime}(t)=W\left(\mu_{2,x_{i}}^{\alpha},\mu_{2,y_{i}}^{\alpha}\right)-d(x_{i},y_{i})-\frac{\sum_{j=1}^{m}\left(W(\mu_{2,x_{j}}^{\alpha},\mu_{2,y_{j}}^{\alpha})-d(x_{j},y_{j})\right)}{\sum_{j=1}^{m}w_{e_{j}}}d(x_{i},y_{i})\\[5.16663pt] e_{i}=x_{i}y_{i}\in E\\[5.16663pt] w_{e_{i}}(0)=w_{0,i},\,\,i=1,2,\cdots,m.\end{array}\right. (6.4)

A discrete version of (6.1) can be written as

w_{e_{i}}(t+s)-w_{e_{i}}(t)=-s\,\kappa_{e_{i}}^{\alpha}(t)\rho_{e_{i}}(t),

where $s>0$ is the step size of the discretization. Since the discrete versions of (6.2)-(6.4) are very similar, we omit them here. To balance accuracy against computational cost, we set the step size $s=0.01$.
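One explicit update step can be sketched as follows (the dictionary-based data layout and the helper name `discrete_step` are our own illustration, not taken from the paper's repository):

```python
def discrete_step(weights, curvatures, distances, s=0.01):
    """One explicit-Euler update of (6.1): w_e <- w_e - s * kappa_e^alpha * rho_e.

    `weights`, `curvatures`, `distances` map each edge to its current weight,
    Ricci curvature, and endpoint distance rho_e, respectively.
    """
    return {e: weights[e] - s * curvatures[e] * distances[e] for e in weights}

# Example: a single edge with kappa = 0.5 and rho = 1 shrinks slightly.
updated = discrete_step({"xy": 1.0}, {"xy": 0.5}, {"xy": 1.0})  # {"xy": 0.995}
```

All edges are updated simultaneously from the same snapshot of curvatures and distances, matching the loop body of Algorithm 1 below.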

The pseudo-code in Algorithm 1 below demonstrates the iteration process for the discrete versions of (6.1) and (6.2), and the surgery performed after the final iteration.

Input: an undirected connected finite network G=(V,E)G=\left({V,E}\right) ; parameter α\alpha ; maximum iteration TT ; step size ss .
Output: community detection results of GG
for  i=1,,Ti=1,\cdots,T do
        calculate the distance $d(x,y)$ between any two vertices $x$ and $y$;
       compute the Ricci curvature $\kappa_{e}^{\alpha}$ for every edge;
      update all edge weights simultaneously through the corresponding flow process;
      
end for
for cutoff= wmax,,wminw_{max},\cdots,w_{min} do
       for  eEe\in E do
             if we>𝑐𝑢𝑡𝑜𝑓𝑓w_{e}>{\it cutoff} then
                   remove the edge ee;
             end if
            
       end for
      calculate the accuracy of community detection;
end for
Algorithm 1 Community detection using α\alpha-lazy one-step random walk evolution
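The surgery stage of Algorithm 1, i.e. the cutoff sweep, might be sketched with networkx as below; this is our own tooling choice (the helper name `surgery_sweep` is ours), and here each cutoff is scored by modularity alone:

```python
import networkx as nx
from networkx.algorithms.community import modularity

def surgery_sweep(G, weight="weight"):
    """Sweep the cutoff from the largest to the smallest edge weight, as in the
    final loop of Algorithm 1, and return the best-scoring partition."""
    cutoffs = sorted({wt for _, _, wt in G.edges(data=weight, default=1)},
                     reverse=True)
    best_q, best_parts = float("-inf"), None
    for cutoff in cutoffs:
        H = G.copy()
        H.remove_edges_from([(u, v) for u, v, wt
                             in G.edges(data=weight, default=1) if wt > cutoff])
        # Surviving connected components are the candidate communities.
        parts = [set(c) for c in nx.connected_components(H)]
        q = modularity(G, parts)
        if q > best_q:
            best_q, best_parts = q, parts
    return best_q, best_parts
```

On a toy graph made of two triangles joined by a single heavy bridge edge, the sweep removes the bridge and recovers the two triangles as communities.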

Now we demonstrate a simple application of Algorithm 1. Let $G$ be as in Figure 7, let $w$ denote the edge weight and $\kappa$ the Ollivier Ricci curvature appearing in (6.2) with $\alpha=0.5$. Red and blue mark the edges within the two communities, and green marks the edges connecting them. The initial weight on each edge is set to 1; the weights and curvatures of edges not shown follow by symmetry. The first graph shows the initially computed curvatures and weights, the second graph shows the result after thirty iterations of (6.2), and the bottom graph shows the community detection result after the surgery.

Figure 7: Demonstration of the evolution of weights on a graph (edges with $w>1.1$ are cut in the surgery).

Similar to Algorithm 1, the discrete versions of (6.3) and (6.4) can be applied to design Algorithm 2.

Input: an undirected connected finite network G=(V,E)G=\left({V,E}\right) ; parameter α\alpha ; maximum iteration TT ; step size ss .
Output: community detection results of GG
for  i=1,,Ti=1,\cdots,T do
         calculate the distance $d(x,y)$ between any two vertices $x$ and $y$;
        compute the Wasserstein distance $W(\mu_{2,x_{i}}^{\alpha},\mu_{2,y_{i}}^{\alpha})$ for every edge;
      update all edge weights simultaneously through the corresponding flow process;
      
end for
for cutoff= wmax,,wminw_{max},\cdots,w_{min} do
       for  eEe\in E do
             if we>𝑐𝑢𝑡𝑜𝑓𝑓w_{e}>{\it cutoff} then
                   remove the edge ee;
             end if
            
       end for
      calculate the accuracy of community detection;
end for
Algorithm 2 Community detection using α\alpha-lazy two-step random walk evolution

The main computational cost of our algorithms lies in finding shortest paths in the graph and solving a linear programming problem, whose time complexities are $O(|E|\log|V|)$ and $O(|E|D^{3})$, respectively. Here $D$ denotes the average degree of the network, and $|E|$ and $|V|$ denote the numbers of edges and vertices. Although the networks are sparsely connected, with $D\ll|E|$, $O(|E|D^{3})$ surpasses $O(|E|\log|V|)$ in most scenarios. Consequently, the computational complexity of our approach is $O(|E|D^{3})$.

6.2 Real-world datasets and criteria

For the real-world datasets, we select three community graphs of distinct scales and use three metrics to assess the precision of community detection. We choose the adjusted Rand index (ARI) [15] and normalized mutual information (NMI) [9] as criteria for evaluating clustering accuracy against the ground truth. Furthermore, we choose modularity [5, 22] to measure the robustness of the community structure of a given graph without relying on ground-truth clustering. Basic information for the real-world networks is listed in Table 1.

Table 1: Summary of real-world network characteristics
networks vertices edges #Class AvgDeg density Diameter
Karate 34 78 2 4.59 0.139 5
Football 115 613 12 10.66 0.094 4
Facebook 775 14006 18 36.15 0.047 8

The Karate dataset [33] was collected by Wayne Zachary in the 1970s from the members of a university karate club. The network consists of 34 members and 78 edges, where vertices represent individual members and edges denote the connections between them. This dataset is generally used to recover the two groups into which the karate club split after a conflict between its two leaders.

In the football dataset, referenced as [12], the focus is on the NCAA football Division I games schedule for the 2000 season. It includes 115 teams (vertices) and 613 matches (edges). Each vertex represents a specific team, and the edges indicate the matches played between these teams. The dataset identifies 12 ground-truth communities or conferences. Since teams tend to play more frequently within their own conference, this network clearly exhibits a community structure.

After analyzing the NCAA football league network, we extended our analysis to a larger-scale network, which presents additional challenges, particularly in terms of computational efficiency. The Facebook network dataset [17], derived from the Stanford Network Analysis Project, captures interaction networks within an online social networking platform. The benchmark community ground truth in this dataset is organized around well-defined themes such as interests and affiliations.

Next, we describe the three evaluation criteria employed in our assessment. To be specific, let $\{U_{1},U_{2},\ldots,U_{I}\}$ and $\{W_{1},W_{2},\ldots,W_{J}\}$ be two partitions of the set $S$ of $n$ vertices (nodes). Denote by $m_{ij}=|U_{i}\cap W_{j}|$ the number of vertices in $U_{i}\cap W_{j}$, while $c_{i}$ and $d_{j}$ represent the numbers of vertices in $U_{i}$ and $W_{j}$, respectively. All these quantities are listed in Table 2.

Table 2: Contingency table
U\WU\backslash W W1W_{1} W2W_{2} \cdots WJW_{J} sums
U1U_{1} m11m_{11} m12m_{12} \cdots m1Jm_{1J} c1c_{1}
U2U_{2} m21m_{21} m22m_{22} \cdots m2Jm_{2J} c2c_{2}
\vdots \vdots \vdots \ddots \vdots \vdots
UIU_{I} mI1m_{I1} mI2m_{I2} \cdots mIJm_{IJ} cIc_{I}
sums d1d_{1} d2d_{2} \cdots dJd_{J}

Then the explicit expressions of the above mentioned three criteria are written below.

  • 1.

    Adjusted Rand Index

    ARI=i=1Ij=1J(mij2)(i=1I(ci2)j=1J(dj2)/(n2))12(i=1I(ci2)+j=1J(dj2))(i=1I(ci2)j=1J(dj2)/(n2)),\text{ARI}=\frac{\sum_{i=1}^{I}\sum_{j=1}^{J}\binom{m_{ij}}{2}-\left(\sum_{i=1}^{I}\binom{c_{i}}{2}\sum_{j=1}^{J}\binom{d_{j}}{2}\big{/}\binom{n}{2}\right)}{\frac{1}{2}\left(\sum_{i=1}^{I}\binom{c_{i}}{2}+\sum_{j=1}^{J}\binom{d_{j}}{2}\right)-\left(\sum_{i=1}^{I}\binom{c_{i}}{2}\sum_{j=1}^{J}\binom{d_{j}}{2}\big{/}\binom{n}{2}\right)},

    where (n2)\binom{n}{2} is the number of different pairs from nn vertices, while symbols (mij2)\binom{m_{ij}}{2}, (ci2)\binom{c_{i}}{2} and (dj2)\binom{d_{j}}{2} have the same meaning. The Adjusted Rand Index (ARI) is a statistical measure used in data clustering analysis. It quantifies the similarity between two partitions of a dataset by comparing the assignments of data points to clusters. The ARI value ranges from -1 to 1, where a value of 1 indicates a perfect match between the partitions and a value close to 0 indicates a random assignment of data points to clusters.

  • 2.

    Normalized Mutual Information

    NMI=2i=1Ij=1Jmijlog(mijncidj)i=1Icilog(cin)+j=1Jdjlog(djn).\text{NMI}=\frac{-2\sum_{i=1}^{I}\sum_{j=1}^{J}m_{ij}\log\left(\frac{m_{ij}n}{c_{i}d_{j}}\right)}{\sum_{i=1}^{I}c_{i}\log\left(\frac{c_{i}}{n}\right)+\sum_{j=1}^{J}d_{j}\log\left(\frac{d_{j}}{n}\right)}.

    The Normalized Mutual Information (NMI) is a metric commonly utilized to evaluate the effectiveness of community detection algorithms. It facilitates the comparison of two clusters or communities by producing a value within the range of 0 to 1. A higher NMI value signifies a stronger resemblance between the two partitions or communities.

  • 3.

    Modularity

    Q=k=1M(Ck|E|β(Dk2|E|)2),Q=\sum_{k=1}^{M}\left(\frac{C_{k}}{|E|}-\beta\left(\frac{D_{k}}{2|E|}\right)^{2}\right),

    where MM represents the number of communities, CkC_{k} is the number of connections within the kkth community, DkD_{k} is the total degree of vertices in the kkth community, and β\beta is a resolution parameter, with a default value of 11. The parameter QQ spans from 0.5-0.5 to 11, with values approaching 11 signifying a more robust community structure and superior partition quality. A positive value indicates an excess of edges within groups compared to what would be expected by chance. Modularity assesses how edges connect within specific groups compared to random link distribution among all nodes.

The ARI is a pair-counting measure that compares the number of element pairs correctly grouped together or separated in two partitions. It adjusts for the likelihood that some agreement might occur by chance, thereby offering a more reliable comparison between partitions than raw accuracy scores. Its sensitivity to cluster size differences makes it a particularly useful tool in cases where clusters vary significantly in size.
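For concreteness, the ARI formula above can be implemented directly from two label assignments. The plain-Python sketch below is our own illustration (scikit-learn's `adjusted_rand_score` is an optimized equivalent):

```python
from math import comb

def adjusted_rand_index(labels_u, labels_v):
    """ARI computed from the contingency table (Table 2) of two labelings."""
    n = len(labels_u)
    clusters_u, clusters_v = set(labels_u), set(labels_v)
    # sum over C(m_ij, 2), C(c_i, 2), C(d_j, 2) as in the formula above
    sum_ij = sum(comb(sum(1 for a, b in zip(labels_u, labels_v) if (a, b) == (i, j)), 2)
                 for i in clusters_u for j in clusters_v)
    sum_c = sum(comb(labels_u.count(i), 2) for i in clusters_u)
    sum_d = sum(comb(labels_v.count(j), 2) for j in clusters_v)
    expected = sum_c * sum_d / comb(n, 2)
    return (sum_ij - expected) / ((sum_c + sum_d) / 2 - expected)
```

Identical partitions (up to a relabeling of clusters) give ARI $=1$, while mismatched partitions can give values at or below $0$.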

NMI, in contrast, originates from information theory, where it captures the shared information content between two partitions. By normalizing this shared information with respect to the entropy of each partition, NMI ensures consistency even as the number of communities changes, providing robustness to fluctuations in community structure.
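The NMI formula admits an equally direct implementation; as above, this is an illustrative sketch rather than the paper's code, and the base of the logarithm cancels between numerator and denominator:

```python
from math import log

def normalized_mutual_information(labels_u, labels_v):
    """NMI computed from the contingency counts m_ij, c_i, d_j of Table 2.

    Degenerate single-cluster inputs (zero denominator) are not handled here.
    """
    n = len(labels_u)
    num = 0.0
    for i in set(labels_u):
        for j in set(labels_v):
            m_ij = sum(1 for a, b in zip(labels_u, labels_v) if (a, b) == (i, j))
            if m_ij:
                num += m_ij * log(m_ij * n / (labels_u.count(i) * labels_v.count(j)))
    den = sum(labels_u.count(i) * log(labels_u.count(i) / n) for i in set(labels_u)) \
        + sum(labels_v.count(j) * log(labels_v.count(j) / n) for j in set(labels_v))
    return -2 * num / den
```

Identical partitions (up to relabeling) give NMI $=1$, and independent ones give a value near $0$.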

Modularity Q offers an edge-centric perspective. It measures how well a network is partitioned into communities by comparing the observed fraction of intra-community edges with what would be expected in a random network. A higher Q value implies a more distinct community structure, independent of ground truth labels. This makes modularity particularly valuable in exploratory analyses where community structure is unknown.
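In practice, modularity is computed directly from a graph and a candidate partition. As a small illustration (networkx is our tooling choice, not necessarily what the paper's repository uses), the two-faction ground-truth split of the Karate network from Table 1 gives $Q\approx 0.36$:

```python
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.karate_club_graph()  # the Karate network of Table 1 (34 vertices, 78 edges)
# The ground-truth two-way split is stored in the node attribute "club".
clubs = sorted({G.nodes[v]["club"] for v in G})
partition = [{v for v in G if G.nodes[v]["club"] == c} for c in clubs]
Q = modularity(G, partition)  # uses the default resolution beta = 1
```

A positive $Q$ of this size confirms that the recorded factions form a clearly non-random community structure.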

To ensure clarity and coherence in the discussion, the following terminology will be used throughout:

  • 1.

    one_evol designates the modified Ollivier’s Ricci flow (6.1), incorporating an α\alpha-lazy one-step random walk.

  • 2.

The quasi-normalized form of this $\alpha$-lazy one-step random walk (6.2) will be referred to as qn1_evol.

  • 3.

    When the system (6.3) utilizes an α\alpha-lazy two-step random walk, it will be labeled two_evol.

  • 4.

    The quasi-normalized variant of two-step system (6.4) will be called qn2_evol.

6.3 Analysis of key parameters

In Algorithms 1 and 2, parameters α\alpha and the maximum iteration TT significantly influence the final results. In this section, we shall examine the impact of these two parameters on the experimental outcomes, detailed in Sections 6.3.1 and 6.3.2, respectively.

6.3.1 Impact of α\alpha on experimental results

Note that all of the aforementioned systems rely on the selection of $\alpha$. Extensive experiments conducted by C. C. Ni et al. [23] have shown that, when applying community detection based on (1.1) with the $\alpha$-lazy one-step random walk $\mu_{x}^{\alpha}(z)$, $\alpha=0.5$ yields the best results. Next, we study the impact of $\alpha$ on the $\alpha$-lazy two-step random walk systems (6.3) and (6.4). The value of $\alpha$ is restricted to $[0,1)$. The experimental results are shown in Figures 8 and 9.

Figure 8: two_evol with different $\alpha$ settings on the football network
Figure 9: qn2_evol with different $\alpha$ settings on the football network

In Figure 8, the graph shows that ARI remains stable around 0.9 across different $\alpha$ values, indicating that the clustering quality produced by the two_evol algorithm does not change significantly with $\alpha$; the clustering performance is highly consistent. Similarly, the NMI values are almost constant at around 0.9, showing that the clustering similarity remains high regardless of the $\alpha$ value. The modularity line shows minimal fluctuation, maintaining a value close to 0.9 across all $\alpha$ values, so the quality of community detection within the network is robust to parameter variations. The cost time, by contrast, varies markedly with $\alpha$: it peaks sharply at around 1000 ms when $\alpha=0.5$, and rapidly decreases to nearly 100 ms as $\alpha$ approaches 1.

In Figure 9, ARI remains very stable, consistently near 0.9, indicating minimal impact of α\alpha on the clustering quality in the qn2_evol algorithm. NMI also stays steady at approximately 0.9, similar to the first graph, further reinforcing the stability of the algorithm with respect to clustering similarity. Modularity shows little variation, remaining close to 0.9 across different α\alpha values, suggesting that the quality of community detection in qn2_evol is largely unaffected by changes in α\alpha. Compared to two_evol, the cost time for qn2_evol is notably lower, hovering around 400 ms for most α\alpha values. Similar to two_evol, the computation time significantly decreases to nearly zero as α\alpha approaches 1, although the overall computational cost for qn2_evol remains much lower across the entire range of α\alpha values. This suggests that qn2_evol is generally more efficient than two_evol, especially in terms of computational cost, making it potentially more suitable for applications where efficiency is a priority.

The ARI, NMI, and modularity curves in both graphs remain consistently high and stable (around 0.9), indicating that both algorithms maintain strong clustering performance across all α\alpha values. This stability suggests that the clustering quality and the effectiveness of network partitioning are not highly sensitive to changes in the α\alpha parameter. Therefore, the choice of α\alpha does not substantially affect the algorithms’ ability to produce accurate and consistent clustering.

Cost time varies significantly with $\alpha$, showing a sharp peak for two_evol around $\alpha=0.5$, while qn2_evol displays lower and more stable computational costs. For both algorithms, the computational cost drops sharply as $\alpha$ approaches 1, implying that the algorithms become much more efficient at this value. However, qn2_evol consistently exhibits lower computational complexity than two_evol, suggesting that it is the more efficient algorithm overall.

The sharp drop in computation time as $\alpha$ approaches 1 suggests that this value of $\alpha$ leverages some form of internal simplification in both algorithms, drastically reducing the time complexity: when $\alpha$ approaches 1, the expression in (2.6) becomes relatively straightforward. This phenomenon provides a guideline for selecting $\alpha$ values that optimize computational efficiency without sacrificing clustering performance.

In the subsequent sections, we fix $\alpha=0.5$ as our experimental parameter setting to maintain consistency between the $\alpha$-lazy one-step and two-step random walk algorithms.

6.3.2 Effect of iterations on experimental results

The impact of iterations on experimental results is illustrated in Figures 10 through 13 below. Our analysis indicates that normalization has a negligible impact on the final community detection results: Figures 10 and 11 display the same pattern, while Figures 12 and 13 are nearly identical. Based on this observation, we focus our comparison on the qn1_evol and qn2_evol algorithms.

Figure 10: one_evol over iterations on (a) karate, (b) football, (c) facebook
Figure 11: qn1_evol over iterations on (a) karate, (b) football, (c) facebook

In Figure 11 (a), modularity starts below 0.8, increases over the first few iterations, and stabilizes around 0.85. ARI and NMI fluctuate in the initial iterations, with ARI settling around 0.48 and NMI stabilizing at around 0.4. Panel (b) indicates that all three metrics (modularity, ARI, NMI) are stable throughout the iterations, starting at higher values (modularity $\approx$ 0.9, ARI $\approx$ 0.85, NMI $\approx$ 0.8) and maintaining these levels. Panel (c) shows that modularity starts slightly above 0.8 and stabilizes around 0.85, while ARI and NMI remain stable throughout (ARI $\approx$ 0.85, NMI $\approx$ 0.75).

Figure 12: two_evol over iterations on (a) karate, (b) football, (c) facebook
Figure 13: qn2_evol over iterations on (a) karate, (b) football, (c) facebook

In Figure 13 (a), modularity starts around 0.8, then increases and stabilizes around 0.85 after some oscillation. ARI and NMI exhibit minor fluctuations at the beginning, with ARI settling around 0.6 and NMI around 0.45. Panel (b) shows all metrics are stable, with modularity $\approx$ 0.9, ARI $\approx$ 0.85, and NMI $\approx$ 0.8 from the beginning to the end of the iterations. Panel (c) shows that modularity starts at about 0.9 and gradually decreases to stabilize around 0.85, while ARI and NMI remain stable throughout (ARI $\approx$ 0.85, NMI $\approx$ 0.75).

When evaluating qn1_evol on the small karate network in Figure 11 (a), the modularity score exhibited rapid growth in the initial iterations, reaching a stable value of approximately 0.85 after five iterations. This suggests that the algorithm can efficiently optimize modularity, demonstrating a capacity for quick convergence. Such convergence implies that the algorithm is effective in detecting community structures within the network. However, the adjusted Rand index (ARI) metric displays noticeable volatility during the early stages, with an initial increase followed by a sharp decline, eventually stabilizing at a lower value near 0.4. This indicates that the algorithm undergoes substantial changes early on, leading to an initial improvement in performance but later succumbing to overfitting, which results in diminished accuracy in the long term. In practical scenarios, this issue could be mitigated by fine-tuning the number of iterations. Similarly, the normalized mutual information (NMI) measure follows a comparable pattern to ARI, experiencing minor fluctuations at the start but generally stabilizing around 0.5.

For the qn2_evol algorithm on the same karate network in Figure 13 (a), the modularity indicator shows a slower increase compared to qn1_evol, but it eventually stabilizes at a similar final value of 0.85. This more gradual rise can be attributed to the algorithm’s conservative optimization strategy, likely incorporating regularization to prevent excessive fluctuations caused by rapid adjustments. While the slower growth may seem less efficient, the steadier performance implies greater robustness in optimizing modularity. The ARI score for this algorithm remains consistently around 0.4, with little variation, suggesting a stable, though less adaptable, approach. Similarly, the NMI score exhibits minimal change from the outset, hovering around 0.5, indicating limitations in the algorithm’s ability to maintain cluster consistency in smaller networks.

In the evaluation of medium-sized networks, such as the football network depicted in Figures 11 (b) and 13 (b), as well as large networks, exemplified by Facebook in Figures 11 (c) and 13 (c), both qn1_evol and qn2_evol demonstrate comparable performance across the three primary metrics. In both cases, the algorithms converge on optimal results after relatively few iterations, underscoring their efficiency in community detection. Despite minor fluctuations in certain indicators as the number of iterations increases, both algorithms demonstrate strong overall consistency, highlighting their stability and robustness in handling networks of varying sizes.

In summary, tests on networks of different sizes show that the qn1_evol and qn2_evol algorithms behave similarly for community detection. Both reach their results quickly and require few iterations, which makes them efficient and stable. Although the indicators fluctuate slightly during the process, they maintain overall consistency, demonstrating their reliability. The qn1_evol algorithm optimizes modularity faster on small networks but can overfit; in contrast, qn2_evol takes a more careful approach, providing better stability. These findings suggest that both algorithms perform well in community detection across various network sizes, especially on medium to large networks.

6.4 α\alpha-lazy one-step random walk

In this subsection, the experimental results are shown in charts comprising three parts: (a) histograms of Ricci curvatures and edge weights before the discrete flow; (b) histograms of Ricci curvatures and edge weights after the discrete flow; (c) evaluation metrics after surgery. The results of the $\alpha$-lazy one-step random walk on three real-world datasets are presented in Figures 14 to 16 below.

Figure 14: one_evol on karate

In Figure 14 (a), the initial curvature distribution across edges spans from approximately -70 to 0, with most values clustering near 0 and a few showing highly negative curvatures, around -70. These significantly negative curvatures may highlight regions within the graph where structural imbalance is prominent. Additionally, most edge weights are near zero, while a smaller subset approaches values of 3 to 3.5, signaling varying connectivity strengths across the graph.

After five iterations, Figure 14 (b) illustrates a markedly narrower curvature distribution, now ranging between -25 and 0. The Ricci flow process effectively mitigates extreme negative curvatures, bringing more edges close to zero curvature, which suggests an improved structural balance. The edge weight distribution, though still skewed toward low values, demonstrates a slight shift toward higher weights, indicating that iterative weight adjustments may contribute to stabilizing graph connectivity.

Figure 14 (c) tracks the behavior of the three evaluation metrics (modularity, ARI, and NMI) as the edge weight cutoff decreases. The metrics are consistent at higher cutoffs, rise to an optimal point, and then drop off as the cutoff approaches zero. This suggests that excessive removal of low-weight edges can disrupt community structure, emphasizing the importance of balanced pruning in preserving the graph's integrity.
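The surgery-and-evaluate loop behind these plots can be sketched in a few lines of Python. This is a hedged illustration, not the paper's released implementation: we assume the surgery removes every edge whose evolved weight exceeds the cutoff, reads communities off as connected components of what remains, and scores the partition with Newman's modularity on the original unweighted graph.

```python
from collections import defaultdict, deque

def surgery_communities(weights, cutoff):
    """Remove edges whose evolved weight exceeds `cutoff`; return the
    connected components of the remaining graph as communities.
    (Sketch; the actual surgery in the paper may differ in detail.)"""
    # weights: dict {(u, v): w} describing an undirected weighted graph
    adj = defaultdict(set)
    nodes = set()
    for (u, v), w in weights.items():
        nodes.update((u, v))
        if w <= cutoff:                      # keep only light edges
            adj[u].add(v)
            adj[v].add(u)
    seen, comms = set(), []
    for s in nodes:                          # BFS over each component
        if s in seen:
            continue
        comp, queue = set(), deque([s])
        while queue:
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(adj[v] - comp)
        seen |= comp
        comms.append(comp)
    return comms

def modularity(edges, comms):
    """Newman modularity Q of a partition, on the unweighted graph."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    Q = 0.0
    for c in comms:
        e_c = sum(1 for u, v in edges if u in c and v in c)
        d_c = sum(deg[v] for v in c)
        Q += e_c / m - (d_c / (2 * m)) ** 2
    return Q
```

Sweeping `cutoff` over the range of evolved weights and recording Q (and, against ground truth, ARI and NMI) at each value reproduces curves of the kind shown in panel (c).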

Figure 15: one_evol on football

In Figure 15 (a), initial Ricci curvature values range widely from approximately -140 to 0, with most edges exhibiting curvatures near 0 and a smaller portion reaching significantly negative values around -140. This distribution suggests areas of substantial geometric imbalance within the graph, as these highly negative curvatures indicate regions of structural tension or weak connectivity. The distribution of edge weights is also skewed, with most weights close to zero, and a few reaching approximately 5, which implies a sparse, weakly connected network before applying the discrete flow.

After the Ricci flow, as illustrated in Figure 15 (b), the curvature values become more compressed, spanning a reduced range from -35 to 0. This reduction in extreme negative values and more even curvature distribution signal that the flow has effectively smoothed the graph’s geometry, diminishing pronounced imbalances. Though edge weights remain predominantly low, there is a minor shift towards higher weights, indicating a subtle increase in connectivity strength and enhanced structural balance across the graph.

In Figure 15 (c), modularity rises sharply as the edge weight cutoff decreases from 5 to around 3, then stabilizes at an elevated level, suggesting that removing weaker edges sharpens community clarity. Below a cutoff of about 3, further pruning yields diminishing modularity gains. The adjusted Rand index (ARI) shows a similar trend, increasing with edge removal. Normalized mutual information (NMI) likewise rises as weak edges are eliminated, stabilizes near 0.9, and then declines when edge removal becomes too aggressive. This suggests that excessive pruning may compromise the integrity of the community structure, emphasizing the need for balanced edge retention to preserve overall connectivity.

Figure 16: one_evol on facebook

In Figure 16 (a), the initial Ricci curvature distribution is notably narrow, with most values clustering near zero, suggesting a largely uniform structural balance in the Facebook graph. A limited number of edges have slightly negative curvatures, yet the overall variation is minimal before the flow application. The edge weight distribution is skewed toward lower values, with the majority of weights near zero and only a few higher-weight edges reaching up to 8 or 9. This distribution implies that, although the graph has a few strong connections, most edges represent weak connections prior to the flow.

After Ricci flow, as depicted in Figure 16 (b), the range of Ricci curvatures expands considerably, now extending from -50 to 0. This broader distribution introduces more negative curvature values, suggesting that the Ricci flow enhances geometric diversity and potentially strengthens structural balance within the graph. Edge weights remain largely consistent with their initial spread; low-weight edges are still prevalent, while a small portion maintains higher weights. This pattern implies that, with limited iterations, the Ricci flow mainly adjusts curvature while leaving the edge weight distribution largely unchanged, reflecting a refined structural equilibrium.

Figure 16 (c) illustrates that modularity increases as weak edges are pruned, signifying enhanced community clarity. However, further pruning beyond certain thresholds yields diminishing returns, with modularity eventually stabilizing. The adjusted Rand index (ARI) also rises as weaker edges are removed but drops sharply at very low cutoffs, indicating a sensitivity of community structure to aggressive edge removal. Normalized mutual information (NMI) follows a similar trend, rising with edge pruning and stabilizing over a range of cutoffs, suggesting that moderate pruning strengthens community integrity. However, excessive removal ultimately weakens the structure, underscoring the need for balanced edge retention to maintain cohesive community structures.

6.5 α-lazy two-step random walk

Based on equations (6.3), (6.4), and Algorithm 2, we focus only on the distance d(x_{i},y_{i}) and the Wasserstein distance W(\mu_{2,x_{i}}^{\alpha},\mu_{2,y_{i}}^{\alpha}), where e_{i}=x_{i}y_{i} belongs to E. To maintain consistency with previous discussions, throughout this subsection we continue to use the Ricci curvature notation

\kappa_{e_{i}}^{\alpha}=1-\frac{W(\mu_{2,x_{i}}^{\alpha},\mu_{2,y_{i}}^{\alpha})}{d(x_{i},y_{i})}.

It is important to note that introducing Ricci curvature is not essential in this context. The results of the α-lazy two-step random walk on three real-world datasets are presented in Figures 17 through 19. Since the results for the three datasets exhibit similar trends, we discuss only the results of two_evol on football. Figure 18 presents the histograms of Ricci curvatures and edge weights before and after applying the discrete flow, as well as the changes in modularity, ARI, and NMI with respect to the edge weight cutoff. Readers interested in further analysis will find similar patterns in Figures 17 and 19, suggesting a consistent effect of two_evol across different networks.
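For concreteness, the two-step measure can be sketched as follows. This is an illustrative assumption about the construction: we take \mu_{2,x}^{\alpha} to place mass α at x and distribute the remaining 1-α according to two steps of the simple random walk started at x; the exact normalization defined earlier in the paper may differ.

```python
from collections import defaultdict

def two_step_measure(adj, x, alpha):
    """alpha-lazy two-step random-walk measure mu_{2,x}^alpha (sketch):
    stay at x with probability alpha, otherwise take two uniform
    random-walk steps from x.  `adj` maps each vertex to its
    neighbor set."""
    mu = defaultdict(float)
    mu[x] += alpha
    for y in adj[x]:                # first step, uniform over neighbors
        for z in adj[y]:            # second step, uniform over neighbors
            mu[z] += (1 - alpha) / (len(adj[x]) * len(adj[y]))
    return dict(mu)
```

Feeding two such measures, together with the graph distance as transport cost, into an optimal-transport solver yields W(\mu_{2,x_i}^{\alpha},\mu_{2,y_i}^{\alpha}) and hence \kappa_{e_i}^{\alpha} via the formula above.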

Figure 17: two_evol on karate
Figure 18: two_evol on football
Figure 19: two_evol on facebook

Before applying the discrete flow, both the distribution of Ricci curvatures and the edge weights exhibit notable imbalances in Figure 18 (a). The Ricci curvatures are broadly distributed, with most values concentrated in the negative range between -100 and 0, indicating significant negative curvature across the network. This suggests that the community structures in the initial network are sharply divided, with uneven geometric characteristics. Similarly, the edge weights are concentrated near zero, pointing to weak associations across much of the network. Only a few edges have higher weights, reflecting stronger connections in certain local areas; overall, the network is characterized by sparse and weakly connected components.

After the application of the discrete Ricci flow, the network structure undergoes a significant transformation. According to Figure 18 (b), the curvature distribution becomes more balanced, with values mostly ranging between -35 and 0, indicating a homogenization of the network's geometric properties. This suggests that the flow has smoothed out the community boundaries, leading to a more cohesive and uniform structure. In parallel, the edge weight distribution remains centered around lower values, but there is an increase in the number of edges with higher weights. This shift indicates that the discrete Ricci flow not only balances the network but also strengthens the associations within it, particularly in areas that previously had weak connections.

In terms of the evaluation metrics in Figure 18 (c), modularity, the adjusted Rand index (ARI), and normalized mutual information (NMI) all display clear improvements following the application of the discrete Ricci flow, as illustrated in the corresponding plots. As the edge weight cutoff decreases, modularity rapidly increases before stabilizing, indicating a more defined and pronounced community structure. Similarly, ARI shows a sharp increase before leveling off, reflecting improved clustering consistency: the detected communities align more closely with the true community divisions. Finally, NMI follows a comparable trajectory, with its values rising and stabilizing at a high level, highlighting the enhanced information-theoretic similarity between the detected and actual community structures. These results collectively demonstrate that the discrete Ricci flow not only optimizes the network's geometric structure but also improves community detection, as evidenced by the refined metrics.

Initially, when the cutoff is large, only a few edges are removed and there is minimal community structure, so all these indicators are near zero. As the cutoff decreases, the indicators gradually increase, reach a peak, and then stabilize. When the cutoff approaches w_{min}, most of the edges are deleted, leading to a rapid drop in these indicators, which essentially reach zero.
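The ARI values reported throughout come directly from the pair-counting formula of Hubert and Arabie [15], which is simple enough to state as a self-contained sketch:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two labelings of the same node set,
    via the pair-counting formula of Hubert and Arabie."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))   # contingency table cells
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    total = comb(n, 2)
    expected = sum_a * sum_b / total           # chance-agreement term
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:                  # degenerate partitions
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

ARI is 1 for identical partitions (up to relabeling), is near 0 for independent ones, and can be negative for partitions that agree less than chance.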

The application of the discrete Ricci flow leads to a significant improvement in the network's geometric structure, as evidenced by the more uniform distribution of Ricci curvatures and the better-balanced edge weight distribution. The evaluation metrics demonstrate that community detection improves substantially as the edge weights are adjusted, with the metrics stabilizing at high values. This confirms the effectiveness of the discrete Ricci flow in enhancing both the performance and consistency of community detection algorithms.

6.6 Comparison with other methods

We compare our community detection methods with three classical algorithms: the Girvan Newman algorithm based on edge betweenness [12], the greedy modularity algorithm based on modularity maximization [5, 27], and the label propagation algorithm based on stochastic label updates [6]. We also compare against five models based on Ricci curvature: unnormalized discrete Ollivier's Ricci flow (DORF) [23], normalized discrete Ollivier's Ricci flow (NDORF), normalized discrete Lin-Lu-Yau's Ricci flow (NDSRF) [16], and two modifications of discrete Lin-Lu-Yau's Ricci flow (Rho and RhoN) [20]. The results in Table 3 demonstrate the effectiveness of our evolutionary algorithms (the last four methods) in detecting communities in real-world scenarios.
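To give a flavor of the simplest baseline, label propagation [6] can be sketched in a few lines. This is a deterministic, synchronous simplification (ties broken by smallest label) for illustration only; the experiments use the stochastic variant cited above.

```python
from collections import Counter

def label_propagation(adj, max_rounds=100):
    """Deterministic synchronous label propagation (sketch): every
    vertex repeatedly adopts the most common label among its
    neighbors, ties broken by the smallest label, until a fixed
    point is reached.  `adj` maps each vertex to its neighbor set."""
    labels = {v: v for v in adj}               # start with unique labels
    for _ in range(max_rounds):
        new = {}
        for v in adj:
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            new[v] = min(l for l, c in counts.items() if c == best)
        if new == labels:                      # converged
            break
        labels = new
    return labels
```

Vertices sharing a final label form one community; on well-separated graphs the labels flood each dense region and stop at the sparse boundaries.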

Table 3: Performance of various algorithms on real-world networks

Methods \ Networks |    Karate club     |      Football      |      Facebook
                   |  ARI   NMI   Q     |  ARI   NMI   Q     |  ARI   NMI   Q
Girvan Newman      |  0.77  0.73  0.48  |  0.14  0.36  0.50  |  0.03  0.16  0.01
Greedy Modularity  |  0.57  0.56  0.58  |  0.47  0.70  0.82  |  0.49  0.68  0.55
Label Propagation  |  0.38  0.36  0.54  |  0.75  0.87  0.90  |  0.39  0.65  0.51
DORF               |  0.59  0.57  0.69  |  0.93  0.94  0.91  |  0.67  0.73  0.68
NDORF              |  0.59  0.57  0.69  |  0.93  0.94  0.91  |  0.68  0.73  0.68
NDSRF              |  0.59  0.57  0.68  |  0.93  0.94  0.91  |  0.68  0.73  0.68
Rho                |  0.77  0.68  0.82  |  0.89  0.92  0.90  |  0.64  0.72  0.63
RhoN               |  0.77  0.68  0.84  |  0.89  0.93  0.92  |  0.69  0.72  0.95
one_evol           |  0.59  0.57  0.87  |  0.90  0.93  0.91  |  0.70  0.72  0.95
qn1_evol           |  0.59  0.57  0.87  |  0.90  0.93  0.91  |  0.70  0.72  0.95
two_evol           |  0.38  0.49  0.85  |  0.90  0.93  0.91  |  0.70  0.72  0.95
qn2_evol           |  0.38  0.49  0.85  |  0.90  0.93  0.91  |  0.70  0.72  0.95
Figure 20: ARI Performance across Algorithms
Figure 21: NMI Performance across Algorithms
Figure 22: Q Performance across Algorithms

The comparative performance of various algorithms on real-world networks, as shown in Table 3 and Figures 20 through 22, underscores the distinct advantages of the evolutionary algorithms, particularly on large-scale networks. Algorithms such as one_evol, qn1_evol, two_evol, and qn2_evol consistently achieve superior modularity, as seen in Figure 22, reaching a peak value of 0.95 on the Facebook network. This performance highlights the evolutionary algorithms' efficacy in optimizing community structures, especially where well-defined, connected communities are essential.

Traditional algorithms like Girvan Newman, though effective on smaller networks such as Karate Club (ARI = 0.77, NMI = 0.73), show limited scalability. For instance, on the Facebook network its ARI drops to 0.03 and modularity Q falls to 0.01, underscoring its difficulty in managing the complexity of larger social networks. Similarly, although DORF, NDORF, and NDSRF perform robustly on medium networks like Football (ARI = 0.93, NMI = 0.94), they fall short of the modularity achieved by our evolutionary methods on the larger Facebook network.

Evolutionary algorithms excel due to their ability to balance multiple performance metrics, consistently achieving the highest modularity values without sacrificing accuracy on smaller networks. On the Facebook and Football networks, they maintain strong ARI and NMI values alongside top modularity scores, indicating both accurate community detection and quality optimization of partitions. Despite their less optimal performance on smaller networks like Karate Club, evolutionary algorithms remain resilient across varying network sizes, demonstrating adaptability and superior optimization for large-scale, real-world network analysis.

In summary, traditional methods such as Girvan Newman and those based on Ricci curvature, like the DORF series, Rho, and RhoN, have their strengths in specific contexts. However, evolutionary algorithms stand out as a versatile and effective option, particularly for complex networks. Their consistent performance in achieving the highest modularity scores across various networks, along with competitive accuracy, makes them a preferred choice for network analysis when the primary goal is to optimize community structure.

7 Concluding remarks

In this study, we propose an evolution system for the edge weights on a connected finite graph, driven by the difference between two distances: the Wasserstein distance between two probability measures and the graph distance between the two vertices of the edge. Moreover, the corresponding initial value problem is proven to have a unique global solution. Discrete versions of this evolution system provide more effective algorithms for community detection than many existing algorithms. Note that our systems do not require Ricci curvature on the graph. Extensive experiments on real-world datasets demonstrate that our algorithm is both easy to implement and robust across a range of parameter choices. This robustness underscores the practical applicability of our approach in accurately identifying community structures with minimal tuning. Future work may extend this method to larger, more complex networks, further enhancing its utility across diverse applications in network science.

Acknowledgements

This research was partly supported by Public Computing Cloud, Renmin University of China.

Data availability All data used are freely available. The code for our algorithms can be found at https://github.com/mjc191812/Evolution-of-weights-on-a-connected-finite-graph.

Declarations

Conflict of interest The authors declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.

Ethics approval The research does not involve humans and/or animals. The authors declare that there are no ethics issues to be approved or disclosed.

References

  • [1] S. Bai, A. Huang, L. Lu, S. T. Yau, On the sum of Ricci curvatures for weighted graphs, Pure Appl. Math. Q. 17 (2021) 1599-1617.
  • [2] S. Bai, Y. Lin, L. Lu, Z. Wang, S. Yau, Ollivier Ricci-flow on weighted graphs, arXiv: 2010.01802v8, 2024.
  • [3] S. S. Bhowmick, B. S. Seah, Clustering and summarizing protein-protein interaction networks: a survey, IEEE Trans. Knowl. Data Eng. 28 (2015) 638-658.
  • [4] S. Brendle, R. Schoen, Manifolds with 1/41/4-pinched curvature are space forms, J. Amer. Math. Soc. 22 (2009) 287-307.
  • [5] A. Clauset, M. Newman, C. Moore, Finding community structure in very large networks, Phys. Rev. E 70 (2004) 066111.
  • [6] G. Cordasco, L. Gargano, Community detection via semi-synchronous label propagation algorithms, 2010 IEEE International Workshop on: Business Applications of Social Network Analysis (BASNA), 1-8.
  • [7] D. Cushing, S. Kamtue, S. Liu, F. Münch, N. Peyerimhoff, H. Snodgrass, Bakry-Émery curvature sharpness and curvature flow in finite weighted graphs, I. Theory, arXiv: 2204.10064, 2022.
  • [8] D. Cushing, S. Kamtue, S. Liu, F. Münch, N. Peyerimhoff, H. Snodgrass, Bakry-Émery curvature sharpness and curvature flow in finite weighted graphs, II. Implementation, arXiv: 2212.12401, 2022.
  • [9] L. Danon, J. Duch, A. Díaz-Guilera, A. Arenas, Comparing community structure identification, J. Stat. Mech. Theory Exp. 2005 (2005) P09008.
  • [10] R. Forman, Bochner’s method for cell complexes and combinatorial Ricci curvature, Discrete Comput. Geom. 29 (2003) 323-374.
  • [11] S. Fortunato, Community detection in graphs, Phys. Rep. 486 (2010) 75-174.
  • [12] M. Girvan, M. E. J. Newman, Community structure in social and biological networks, Proc. Natl. Acad. Sci. 99 (2002) 7821-7826.
  • [13] L. Guillaume, Fast unfolding of communities in large networks, J. Stat. Mech. Theory Exp. 2008 (2008) P1008.
  • [14] R. Hamilton, Three-manifolds with positive Ricci curvature, J. Differ. Geom. 17 (1982) 255-306.
  • [15] L. Hubert, P. Arabie, Comparing partitions, J. Classif. 2 (1985) 193-218.
  • [16] X. Lai, S. Bai, Y. Lin, Normalized discrete Ricci flow used in community detection, Phys. A 597 (2022) 127251.
  • [17] J. Leskovec, SNAP datasets: Stanford large network dataset collection, http://snap.stanford.edu/data, 2014.
  • [18] J. Leskovec, K. J. Lang, M. Mahoney, Empirical comparison of algorithms for network community detection, Proc. 19th Int. Conf. World Wide Web 2010 (2010) 631-640.
  • [19] Y. Lin, L. Lu, S. T. Yau, Ricci curvature of graphs, Tohoku Math. J. 63 (2011) 605-627.
  • [20] J. Ma, Y. Yang, A modified Ricci flow on arbitrary weighted graph, arXiv: 2408.09435v1, 2024.
  • [21] M. Newman, Modularity and community structure in networks, Proc. Natl. Acad. Sci. 103 (2006) 8577-8582.
  • [22] M. Newman, Networks: an introduction, Oxford Univ. Press, 2010.
  • [23] C. C. Ni, Y. Y. Lin, F. Luo, J. Gao, Community detection on networks with Ricci flow, Sci. Rep. 9 (2019) 9984.
  • [24] Y. Ollivier, Ricci curvature of Markov chains on metric spaces, J. Funct. Anal. 256 (2009) 810-864.
  • [25] L. Peel, D. Larremore, A. Clauset, The ground truth about metadata and community detection in networks, Sci. Adv. 3 (2016) 08.
  • [26] G. Perelman, The entropy formula for the Ricci flow and its geometric applications, arXiv: math/0211159, 2002.
  • [27] J. Reichardt, S. Bornholdt, Statistical mechanics of community detection, Phys. Rev. E 74 (2006) 016110.
  • [28] O. Serrat, Knowledge solutions: Tools, methods, and approaches to drive organizational performance, Springer, 2017.
  • [29] S. Tauro, C. Palmer, G. Siganos, et al., A simple conceptual model for the internet topology, GLOBECOM’01 IEEE Global Telecommun. Conf. 3 (2001) 1667-1671.
  • [30] G. Wang, Z. Zhou, S. Zhu, S. Wang, Ordinary differential equations, (in Chinese), Higher Education Press, 2006.
  • [31] S. Wasserman, Social network analysis: Methods and applications, Cambridge Univ. Press, 1994.
  • [32] Z. Yang, R. Algesheimer, C. Tessone, A comparative analysis of community detection algorithms on artificial networks, Sci. Rep. 6 (2016) 30750.
  • [33] W. Zachary, An information flow model for conflict and fission in small groups, J. Anthropol. Res. 33 (1977) 452-473.