
Deep Reinforcement Learning for Voltage Control and Renewable Accommodation Using Spatial-Temporal Graph Information

Jinhao Li, Ruichang Zhang, Hao Wang,    
Zhi Liu,   Hongyang Lai, Yanru Zhang
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 62001085, the General Program of Shenzhen Science and Technology Research and Development Fund under Grant JCYJ20220530164813029, and the Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA) under Grant DE230100046. J. Li is with the School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China, and also with the Department of Data Science and AI, Faculty of Information Technology, Monash University, Melbourne, VIC 3800, Australia (e-mail: [email protected]). R. Zhang is with the Department of Computer Science, University of Manchester (e-mail: [email protected]). H. Wang is with the Department of Data Science and AI, Faculty of Information Technology, Monash University, Melbourne, VIC 3800, Australia, and is also affiliated with the Monash Energy Institute (e-mail: [email protected]). Z. Liu is with the Department of Computer and Network Engineering, University of Electro-Communications, Tokyo, Japan (e-mail: [email protected]). H. Lai is with the School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China (e-mail: [email protected]). Y. Zhang is with the University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China, and is also affiliated with the Shenzhen Institute for Advanced Study of UESTC (e-mail: [email protected]). Y. Zhang is the corresponding author.
Abstract

Renewable energy resources (RERs) have been increasingly integrated into distribution networks (DNs) for decarbonization. However, the variable nature of RERs introduces uncertainties to DNs, frequently resulting in voltage fluctuations that threaten system security and hamper the further adoption of RERs. To incentivize greater RER penetration, we propose a deep reinforcement learning (DRL)-based strategy to dynamically balance the trade-off between voltage fluctuation control and renewable accommodation. To further extract multi-time-scale spatial-temporal (ST) graphical information of a DN, our strategy draws on a multi-grained attention-based spatial-temporal graph convolution network (MG-ASTGCN), consisting of an ST attention mechanism and ST convolution to explore node correlations from spatial and temporal perspectives. The continuous decision-making process of balancing this trade-off is modeled as a Markov decision process optimized by the deep deterministic policy gradient (DDPG) algorithm with the help of the derived ST information. We validate our strategy on the modified IEEE 33, 69, and 118-bus radial distribution systems, where it significantly outperforms optimization-based benchmarks. Simulations also reveal that the developed MG-ASTGCN substantially accelerates the convergence of DDPG and improves its performance in stabilizing node voltages in an RER-rich DN. Moreover, our method improves the DN's robustness in the presence of generator failures.

Index Terms:
Renewable energy resources (RERs), voltage control, renewable accommodation, deep reinforcement learning (DRL), attention mechanism, graph convolution.

I Introduction

There has been exponential growth of distributed renewable energy resources (RERs), e.g., wind and solar energy, in distribution networks (DNs) for mitigating global climate change and providing affordable electricity to customers [1]. From 2007 to 2021, the global installed capacity of solar photovoltaic (PV) increased from 8 GW to 940 GW, as did that of wind power (from 94 GW to 837 GW) [2]. Despite the various benefits brought by RERs, e.g., decarbonization and power supply cost reduction, their increasing adoption introduces considerable uncertainties to DNs due to their intermittent nature [3]. Uncontrollable factors of RERs, such as solar irradiation and wind speed, produce stochastic and non-dispatchable RER generation, which makes generation forecasts extremely challenging, leads to frequent voltage fluctuations, threatens system security, and may cause economic losses [4].

Voltage control in an RER-rich DN has been widely discussed in the literature and can be broadly classified into three categories [5]. 1) Distributed-optimization-based methods: the optimal voltage control strategy is often derived from a non-convex optimal power flow problem that is relaxed into a centralized convex problem through semi-definite programming or second-order cone programming and then solved by distributed algorithms. Consensus algorithms [6, 7, 8, 9] and the alternating direction method of multipliers [10, 11, 12, 13] are among the most prevalent solutions for such distributed optimization problems. However, both approaches suffer from heavy computational costs, particularly in large-scale DNs. 2) Decentralized methods: based on network partitions, an optimization agent supported by conventional numerical algorithms [14, 15, 16] or heuristic methods (e.g., genetic algorithms [17], particle swarm optimization [18], Harris hawks optimization [19], and grey wolf optimization [20]) can be applied to each divided zone to achieve overall voltage control in a decentralized manner. Nevertheless, numerical algorithms struggle to converge in the face of high RER uptake, while heuristic algorithms rely heavily on accurate prior knowledge of a specific DN and easily become trapped in local optima. 3) Learning-based methods: unlike the aforementioned optimization-based methods, deep reinforcement learning (DRL)-based methods have drawn increasing attention for voltage control due to their model-free nature [21, 22, 23, 24, 25, 26, 27], enabling the strategy to learn entirely from historical experiences without any prior knowledge of either the uncertainty of renewables or the DN. Moreover, benefiting from its interactive learning manner, DRL is also well suited to capturing the uncertain dynamics of an RER-rich DN and thus to better controlling voltage fluctuations. However, learning a stable and well-performing DRL-based control strategy in complex physical systems, such as the voltage control problem in DNs, is notoriously difficult due to slow learning convergence. Furthermore, ensuring the increasing accommodation of renewables in DNs while mitigating voltage fluctuations has been inadequately discussed in the literature.

To bridge this research gap, we propose a DRL-based strategy to balance the trade-off between voltage fluctuation control and renewable accommodation, leveraging spatial-temporal (ST) graphical information of the DN. Specifically, given that correlations among node pairs in the DN are mutually influenced in the spatial view while each node's features are inherently dependent in the temporal view, we develop a novel multi-grained attention-based spatial-temporal graph convolution network (MG-ASTGCN) to explore ST correlations through an ST attention mechanism and to extract ST features through ST convolution over multiple time scales. The derived ST information captures time-varying patterns of the DN's power flow, which the DRL can exploit to accelerate its learning process, improve its performance, and effectively interpret the graphical correlations among nodes in the DN. The main contributions of our work are summarized as follows.

  • ST Information Extraction: We develop a novel MG-ASTGCN to fully extract ST information from an RER-rich DN graph. Specifically, the attention mechanism and graph convolution inside the MG-ASTGCN are employed to explore ST correlations and extract ST features, respectively. Moreover, since the power flow exhibits periodic patterns over multiple time scales, we construct multi-grained power flow time series to better capture temporal ST information.

  • DRL for Balancing Voltage Control and Renewable Accommodation: We propose a DRL-based strategy leveraging the derived ST graphical information to dynamically balance the trade-off between voltage fluctuation control and renewable accommodation. The consecutive control process is modeled as a Markov decision process (MDP) which is optimized via the cutting-edge off-policy DRL algorithm, namely the deep deterministic policy gradient (DDPG).

  • Numerical Simulations and Implications: We validate our DRL-based method on modified IEEE 33, 69, and 118-bus radial distribution systems. Simulations demonstrate the effectiveness of our approach, which significantly outperforms the benchmark algorithms, i.e., Harris hawks optimization (HHO), grey wolf optimization (GWO), the interior-point (IP)-based method, and the linear/quadratic programming (LQP)-based method. Moreover, our DRL-based strategy improves network stability in the presence of generator failures, especially for large-scale DNs.

The key insights drawn from simulation results are summarized as follows.

  • DDPG converges faster with the assistance of MG-ASTGCN: The developed MG-ASTGCN substantially accelerates the convergence speed of the DDPG, compared to other graphical correlation extraction methods, which demonstrates the effectiveness of our MG-ASTGCN in capturing the underlying ST graphical information of the DN.

  • Effectiveness of the ST attention mechanism inside the MG-ASTGCN: The spatial attention mechanism in the MG-ASTGCN captures mutual correlations among node pairs in the DN, while the temporal attention exploits correlations of each node's features across multiple time steps. Simulations suggest that node pairs with more generator integration tend to have stronger spatial correlations, while each node's features are highly self-correlated with its recent historical time steps in the temporal view.

  • Overemphasizing voltage fluctuation control or renewable accommodation undermines DDPG's performance: Our DRL-based strategy encodes all objectives and operational constraints of the derived optimization problem into reward functions as feedback from the DN. Modeling results show that if the reward term for voltage fluctuation control or renewable accommodation is overemphasized, the DDPG's performance degrades correspondingly, resulting in sub-optimal operation of the DN and highlighting that striking an effective balance between these objectives is essential for optimal control of the DN.

The remainder of this paper is organized as follows. Section II formulates an optimization problem balancing the trade-off between voltage fluctuation control and renewable accommodation, followed by Section III, where our DRL-based strategy, consisting of the MG-ASTGCN and DDPG, is proposed. Experimental results are presented in Section IV. Section V concludes.

Figure 1: The system model and the presented work.

II System Model

We consider a radial DN highly integrated with RERs. We use wind and solar PV generators as the RERs, since they are the most representative RERs with the largest installed capacities worldwide [2]. The high uptake of RERs introduces considerable uncertainties into the DN and continuously leads to voltage fluctuations, hindering RERs' further adoption and often causing curtailments. We formulate an optimization problem for mitigating voltage fluctuations while accommodating renewable integration and minimizing generation costs in Sections II-A through II-C. An overview of the presented work is illustrated in Fig. 1.

II-A Voltage Fluctuation Control

The increasing presence of RERs frequently triggers voltage fluctuations, and even voltage violations, at load nodes in the DN, which may severely degrade the performance of electronic equipment and pose potential security risks to electricity consumers. To mitigate voltage fluctuations, we consider voltage fluctuation control with an L2-norm-based metric $J^{\text{vol}}_{t}$ describing voltage stability, which can be defined as

J^{\text{vol}}_{t}=\left[\sum_{n=1}^{N^{\text{L}}}\left(1-\left|\mathbb{V}_{t,n}^{\text{L}}\right|\right)^{2}\right]^{\frac{1}{2}}, (1)

where $t$ represents the current time step, $N^{\text{L}}$ is the number of load nodes, and $\mathbb{V}_{t,n}^{\text{L}}$ is the phasor form of the load voltage expressed as $\mathbb{V}_{t,n}^{\text{L}}=v_{t,n}^{\text{L}}\angle\delta_{t,n}^{\text{L}}$, where $v_{t,n}^{\text{L}}$ is the voltage magnitude and $\delta_{t,n}^{\text{L}}$ is the phase angle.
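For concreteness, the sketch below evaluates the metric of Eq. (1) from a vector of complex load-node voltages; the array shape and the per-unit convention are illustrative assumptions rather than part of the paper's implementation.

```python
import numpy as np

def voltage_fluctuation_metric(v_load: np.ndarray) -> float:
    """L2-norm deviation of load-node voltage magnitudes from 1.0 p.u., as in Eq. (1).

    v_load: complex phasors of the N_L load-node voltages at time t (per unit).
    """
    deviation = 1.0 - np.abs(v_load)              # (1 - |V_{t,n}^L|) for every load node
    return float(np.sqrt(np.sum(deviation ** 2)))

# Example: three load nodes at roughly 0.98, 1.01, and 1.00 p.u.
v = np.array([0.98 * np.exp(1j * 0.02), 1.01 + 0.0j, 1.00 + 0.0j])
print(voltage_fluctuation_metric(v))
```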

II-B Renewable Accommodation

Accommodating the increasing penetration of RERs is of great importance for an orderly energy transition in the power grid and functions as a main pillar of net-zero emissions. The metric reflecting the effectiveness of renewable accommodation, denoted by $J_{t}^{\text{RER}}$, can be formulated as

J_{t}^{\text{RER}}=\sum_{j=1}^{N^{\text{W}}}\frac{p_{t,j}^{\text{W,act}}}{\bar{p}_{t,j}^{\text{W}}}+\sum_{k=1}^{N^{\text{S}}}\frac{p_{t,k}^{\text{S,act}}}{\bar{p}_{t,k}^{\text{S}}}, (2)

where $N^{\text{W}}$ and $N^{\text{S}}$ are the numbers of wind and solar PV generators, $p_{t,j}^{\text{W,act}}$ and $p_{t,k}^{\text{S,act}}$ are the actual outputs of wind and solar PV generation, and $\bar{p}_{t,j}^{\text{W}}$ and $\bar{p}_{t,k}^{\text{S}}$ are the corresponding maximum power outputs at the current time step, which are usually obtained via onsite monitoring devices.

Moreover, the inherent variability of RERs can lead to mismatch costs between scheduled power generation and actual power output [28], which can be further divided into a reserve cost (when the available power is overestimated) and a penalty cost (when it is underestimated). The reserve costs for wind and solar PV power can be formulated as

C^{\text{W,r}}_{t,j} =\mathbb{E}\left[c^{\text{W,r}}_{j}\left(\bar{p}_{t,j}^{\text{W}}-p^{\text{W,ava}}_{t,j}\right)\right] =c^{\text{W,r}}_{j}\int_{\underaccent{\bar}{p}_{t,j}^{\text{W}}}^{\bar{p}_{t,j}^{\text{W}}}\left(\bar{p}_{t,j}^{\text{W}}-p_{t,j}^{\text{W,ava}}\right)f^{\text{W}}\left(p_{t,j}^{\text{W,ava}}\right)d\left(p_{t,j}^{\text{W,ava}}\right), (3)

C^{\text{S,r}}_{t,k} =\mathbb{E}\left[c^{\text{S,r}}_{k}\left(\bar{p}_{t,k}^{\text{S}}-p^{\text{S,ava}}_{t,k}\right)\right] =c^{\text{S,r}}_{k}\int_{\underaccent{\bar}{p}_{t,k}^{\text{S}}}^{\bar{p}_{t,k}^{\text{S}}}\left(\bar{p}_{t,k}^{\text{S}}-p_{t,k}^{\text{S,ava}}\right)f^{\text{S}}\left(p_{t,k}^{\text{S,ava}}\right)d\left(p_{t,k}^{\text{S,ava}}\right), (4)

where $j$ and $k$ are the indices of wind and solar PV generators, $c^{\text{W,r}}_{j}$ and $c^{\text{S,r}}_{k}$ are constant coefficients, $p^{\text{W,ava}}_{t,j}$ and $p^{\text{S,ava}}_{t,k}$ are the available power outputs of wind and solar PV generation, $\underaccent{\bar}{p}_{t,j}^{\text{W}}$ and $\underaccent{\bar}{p}_{t,k}^{\text{S}}$ are the minimum power outputs of the corresponding generators, and $f^{\text{W}}(\cdot)$ and $f^{\text{S}}(\cdot)$ are the probability density functions of wind and solar PV generation, where we assume the wind speed and solar irradiation follow the Weibull and lognormal distributions, respectively [28].
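As an illustration of how the reserve-cost expectation in Eq. (3) can be evaluated, the sketch below computes the integral numerically. Note that, as a simplifying assumption, it places a Weibull density directly on the available wind power, whereas the paper assumes a Weibull distribution on wind speed (which induces the power density $f^{\text{W}}$); all limits and coefficients are hypothetical.

```python
import numpy as np
from scipy import integrate, stats

def wind_reserve_cost(c_r: float, p_min: float, p_max: float, pdf) -> float:
    """Numerically evaluate the reserve-cost integral of Eq. (3):
    c_r times the integral from p_min to p_max of (p_max - p) f_W(p) dp."""
    integrand = lambda p: (p_max - p) * pdf(p)
    value, _ = integrate.quad(integrand, p_min, p_max)
    return c_r * value

# Hypothetical example: available wind power on [0, 2] MW with a Weibull-shaped density.
wind_power = stats.weibull_min(c=2.0, scale=1.2)
print(wind_reserve_cost(c_r=1.5, p_min=0.0, p_max=2.0, pdf=wind_power.pdf))
```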

Similarly, the penalty costs of wind and solar PV generators can be formulated as

C^{\text{W,p}}_{t,j} =\mathbb{E}\left[c^{\text{W,p}}_{j}\left(p^{\text{W,ava}}_{t,j}-\bar{p}_{t,j}^{\text{W}}\right)\right], (5)
C^{\text{S,p}}_{t,k} =\mathbb{E}\left[c^{\text{S,p}}_{k}\left(p^{\text{S,ava}}_{t,k}-\bar{p}_{t,k}^{\text{S}}\right)\right], (6)

where $c^{\text{W,p}}_{j}$ and $c^{\text{S,p}}_{k}$ are constant coefficients.

Additionally, fuel-based energy resources still play an irreplaceable role in maintaining the DN's stability, especially in the face of RER generation shortages. The generation cost of a thermoelectric generator can be formulated as

C_{t,i}^{\text{T}}=a_{i}\left(p_{t,i}^{\text{T}}\right)^{2}+b_{i}p_{t,i}^{\text{T}}+c_{i}, (7)

where $i$ is the index of the thermoelectric generator, $a_{i}$, $b_{i}$, and $c_{i}$ are constant coefficients, and $p_{t,i}^{\text{T}}$ is the actual power output of the thermoelectric generator.

Combining the generation costs of thermoelectric and RER generators, the overall generation cost in the DN at one time step can be formulated as

J_{t}^{\text{gen}}=\sum_{i=1}^{N^{\text{T}}}C_{t,i}^{\text{T}}+\sum_{j=1}^{N^{\text{W}}}\left(C^{\text{W,r}}_{t,j}+C^{\text{W,p}}_{t,j}\right)+\sum_{k=1}^{N^{\text{S}}}\left(C^{\text{S,r}}_{t,k}+C^{\text{S,p}}_{t,k}\right), (8)

where $N^{\text{T}}$ is the number of thermoelectric generators.

II-C Optimization Formulation

We formulate the objectives of our control strategy as the weighted summation of metrics regarding voltage fluctuation mitigation, renewable accommodation maximization, and generation cost minimization, which can be expressed as

\min\sum_{t=1}^{T}\left(\frac{w^{\text{vol}}}{N^{\text{L}}}J_{t}^{\text{vol}}-\frac{w^{\text{RER}}}{N^{\text{W}}+N^{\text{S}}}J_{t}^{\text{RER}}+\frac{w^{\text{gen}}}{N^{\text{T}}+N^{\text{W}}+N^{\text{S}}}J_{t}^{\text{gen}}\right), (9)

where $T$ is the time horizon, and $w^{\text{vol}}$, $w^{\text{RER}}$, and $w^{\text{gen}}$ are the weights of the corresponding objectives, i.e., $J_{t}^{\text{vol}}$, $J_{t}^{\text{RER}}$, and $J_{t}^{\text{gen}}$. The objective defined in Eq. (9) is subject to physical constraints that ensure safe operation of the DN. In particular, the equality constraints, namely the active and reactive power balance equations of the DN, are formulated as

\sum_{i=1}^{N^{\text{T}}}p_{t,i}^{\text{T}}+\sum_{j=1}^{N^{\text{W}}}p_{t,j}^{\text{W,act}}+\sum_{k=1}^{N^{\text{S}}}p_{t,k}^{\text{S,act}}+p_{t}^{\text{G}}=\sum_{n=1}^{N^{\text{L}}}p_{t,n}^{\text{L}}+L_{t}^{P}, (10)
\sum_{i=1}^{N^{\text{T}}}q_{t,i}^{\text{T}}+\sum_{j=1}^{N^{\text{W}}}q_{t,j}^{\text{W,act}}+\sum_{k=1}^{N^{\text{S}}}q_{t,k}^{\text{S,act}}+q_{t}^{\text{G}}=\sum_{n=1}^{N^{\text{L}}}q_{t,n}^{\text{L}}+L_{t}^{Q}, (11)

with the active/reactive power losses $L_{t}^{P}$/$L_{t}^{Q}$ on the branches defined as

L_{t}^{P}=\sum_{m,n}^{N^{\text{B}}}g_{m,n}\left[v_{t,m}^{2}+v_{t,n}^{2}-2v_{t,m}v_{t,n}\cos\left(\delta_{t,mn}\right)\right], (12)
L_{t}^{Q}=\sum_{m,n}^{N^{\text{B}}}b_{m,n}\left[v_{t,m}^{2}+v_{t,n}^{2}-2v_{t,m}v_{t,n}\cos\left(\delta_{t,mn}\right)\right], (13)

where $g_{m,n}$ and $b_{m,n}$ are the conductance and susceptance of the branch, and $p_{t}^{\text{G}}$ and $q_{t}^{\text{G}}$ are the active/reactive power from the power grid.

Moreover, inequality constraints that limit both load and generator voltages can be defined as

\underaccent{\bar}{v}_{n}^{\text{L}}\leq v_{t,n}^{\text{L}}\leq\bar{v}_{n}^{\text{L}}, \quad n=1,\cdots,N^{\text{L}}, (14)
\underaccent{\bar}{v}_{i}^{\text{T}}\leq v_{t,i}^{\text{T}}\leq\bar{v}_{i}^{\text{T}}, \quad i=1,\cdots,N^{\text{T}}, (15)
\underaccent{\bar}{v}_{j}^{\text{W}}\leq v_{t,j}^{\text{W}}\leq\bar{v}_{j}^{\text{W}}, \quad j=1,\cdots,N^{\text{W}}, (16)
\underaccent{\bar}{v}_{k}^{\text{S}}\leq v_{t,k}^{\text{S}}\leq\bar{v}_{k}^{\text{S}}, \quad k=1,\cdots,N^{\text{S}}, (17)

where $\underaccent{\bar}{v}_{n}^{\text{L}}$, $\underaccent{\bar}{v}_{i}^{\text{T}}$, $\underaccent{\bar}{v}_{j}^{\text{W}}$, $\underaccent{\bar}{v}_{k}^{\text{S}}$ and $\bar{v}_{n}^{\text{L}}$, $\bar{v}_{i}^{\text{T}}$, $\bar{v}_{j}^{\text{W}}$, $\bar{v}_{k}^{\text{S}}$ are the minimum and maximum voltages of the load nodes and generators, respectively.

The active and reactive power output limits of generators can be defined as

\underaccent{\bar}{p}_{i}^{\text{T}}\leq p_{t,i}^{\text{T}}\leq\bar{p}_{i}^{\text{T}}, \quad i=1,\cdots,N^{\text{T}}, (18)
-p_{i}^{\text{T,Down}}\leq p_{t+1,i}^{\text{T}}-p_{t,i}^{\text{T}}\leq p_{i}^{\text{T,Up}}, \quad i=1,\cdots,N^{\text{T}}, (19)
\underaccent{\bar}{p}_{j}^{\text{W}}\leq p_{t,j}^{\text{W}}\leq\bar{p}_{j}^{\text{W}}, \quad j=1,\cdots,N^{\text{W}}, (20)
\underaccent{\bar}{p}_{k}^{\text{S}}\leq p_{t,k}^{\text{S}}\leq\bar{p}_{k}^{\text{S}}, \quad k=1,\cdots,N^{\text{S}}, (21)
\underaccent{\bar}{q}_{i}^{\text{T}}\leq q_{t,i}^{\text{T}}\leq\bar{q}_{i}^{\text{T}}, \quad i=1,\cdots,N^{\text{T}}, (22)
\underaccent{\bar}{q}_{j}^{\text{W}}\leq q_{t,j}^{\text{W}}\leq\bar{q}_{j}^{\text{W}}, \quad j=1,\cdots,N^{\text{W}}, (23)
\underaccent{\bar}{q}_{k}^{\text{S}}\leq q_{t,k}^{\text{S}}\leq\bar{q}_{k}^{\text{S}}, \quad k=1,\cdots,N^{\text{S}}, (24)

where $\underaccent{\bar}{p}_{i}^{\text{T}}$, $\underaccent{\bar}{p}_{j}^{\text{W}}$, $\underaccent{\bar}{p}_{k}^{\text{S}}$ and $\bar{p}_{i}^{\text{T}}$, $\bar{p}_{j}^{\text{W}}$, $\bar{p}_{k}^{\text{S}}$ are the minimum and maximum active power outputs, while $\underaccent{\bar}{q}_{i}^{\text{T}}$, $\underaccent{\bar}{q}_{j}^{\text{W}}$, $\underaccent{\bar}{q}_{k}^{\text{S}}$ and $\bar{q}_{i}^{\text{T}}$, $\bar{q}_{j}^{\text{W}}$, $\bar{q}_{k}^{\text{S}}$ are the minimum and maximum reactive power outputs. Constraint (19) describes the active power ramp-rate limits of the thermoelectric generators, where $p_{i}^{\text{T,Down}}$ and $p_{i}^{\text{T,Up}}$ are the ramp-down and ramp-up limits of the $i$-th thermoelectric generator, set to $25\%$ of its rated power by default [29, 30]. Moreover, for the shut-down constraint, a thermoelectric generator must first reduce its power output to its lower limit $\underaccent{\bar}{p}_{i}^{\text{T}}$ before being shut down. Once shut down, the thermoelectric generator is not allowed to restart within 4 time steps by default. For the start-up constraint, an unavailable thermoelectric generator must adjust its power output to its lower limit before reconnecting to the DN. Furthermore, once operating in the DN, the generator is not allowed to be shut down within 4 time steps by default.

Also, the power flow constraint on the branch can be expressed as

\left|\mathbb{S}_{t,b}^{\text{B}}\right|\leq\bar{s}_{b}^{\text{B}}, \quad b=1,\cdots,N^{\text{B}}, (25)

where $b$ is the index of the branch, $\mathbb{S}_{t,b}^{\text{B}}=p_{t,b}^{\text{B}}+jq_{t,b}^{\text{B}}$ is the complex form of the apparent power on the branch, and $\bar{s}_{b}^{\text{B}}$ represents the maximum power flow on the branch.
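As a small illustration of how the inequality constraints of Eqs. (14) and (25) can be checked in simulation, a minimal sketch is given below; the array shapes and variable names are assumptions.

```python
import numpy as np

def constraints_satisfied(v_load, p_branch, q_branch, v_min, v_max, s_max) -> bool:
    """Check the load-voltage bounds of Eq. (14) and the branch apparent-power
    limits of Eq. (25) for one time step.

    v_load:             load-node voltage magnitudes (p.u.)
    p_branch, q_branch: active/reactive branch power flows
    v_min, v_max:       per-node voltage limits
    s_max:              per-branch apparent-power limits
    """
    voltage_ok = np.all((v_load >= v_min) & (v_load <= v_max))
    s_branch = np.sqrt(p_branch ** 2 + q_branch ** 2)   # |S_{t,b}^B|
    flow_ok = np.all(s_branch <= s_max)
    return bool(voltage_ok and flow_ok)
```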

III Methodology

To solve the optimization problem defined by Eqs. (9)-(25), we first develop the MG-ASTGCN to extract ST graphical information of the DN in Section III-A, with the aim of providing prior graphical information of the DN to the subsequent DDPG. The consecutive control problem is modeled as an MDP in Section III-B, where we introduce the DDPG to learn the optimal strategy for controlling voltage fluctuations and accommodating renewable generation.

III-A ST Information Extraction via MG-ASTGCN

III-A1 MG-ASTGCN Preliminaries

A DN can be modeled as an undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\bm{A})$, as illustrated in Fig. 2, where $\mathcal{V}$, $\mathcal{E}$, and $\bm{A}$ represent the node set, the edge set, and the adjacency matrix, respectively. Each load node $v_{n}$ generates a feature vector $\bm{x}_{t,n}$ gathering local information at each time step, e.g., load voltage ($\mathbb{V}_{t,n}^{\text{L}}=v_{t,n}^{\text{L}}\angle\delta_{t,n}^{\text{L}}$), load power ($p_{t,n}^{\text{L}},q_{t,n}^{\text{L}}$), and connected branch power ($\mathbb{S}_{t,b}^{\text{B}}=p_{t,b}^{\text{B}}+jq_{t,b}^{\text{B}}$), which can be formulated as

\bm{x}_{t,n}=\left[v_{t,n}^{\text{L}},\delta_{t,n}^{\text{L}},p_{t,n}^{\text{L}},q_{t,n}^{\text{L}},p_{t,b1}^{\text{B}},q_{t,b1}^{\text{B}},\cdots\right]^{T}\in\mathbb{R}^{F_{n}\times 1}, (26)

where $F_{n}$ is the dimension of the node features. The feature matrix of graph $\mathcal{G}$ can be aggregated as

\bm{X}_{t}=\left[\bm{x}_{t,1},\bm{x}_{t,2},\cdots,\bm{x}_{t,N^{\text{L}}}\right]\in\mathbb{R}^{F^{\prime}\times N^{\text{L}}}, (27)

where $F^{\prime}$ is the highest node feature dimension, defined as $F^{\prime}=\max_{n}F_{n}$.

Figure 2: The undirected graph of a DN.

Considering the periodic patterns in the power flow, especially daily and weekly ones [2], we develop a multi-grained vector constructor to better capture the temporal correlations of the DN, where the recent, daily, and weekly graph segments can be formulated as

\bm{\mathcal{X}}^{\text{r}} =\left[\bm{X}_{t-T^{\text{r}}+1},\cdots,\bm{X}_{t-1},\bm{X}_{t}\right]\in\mathbb{R}^{F^{\prime}\times N^{\text{L}}\times T^{\text{r}}}, (28)
\bm{\mathcal{X}}^{\text{d}} =\left[\bm{X}_{t-T^{\text{d}}\times N^{\text{d}}},\cdots,\bm{X}_{t-N^{\text{d}}},\bm{X}_{t}\right]\in\mathbb{R}^{F^{\prime}\times N^{\text{L}}\times T^{\text{d}}}, (29)
\bm{\mathcal{X}}^{\text{w}} =\left[\bm{X}_{t-7\times T^{\text{w}}\times N^{\text{d}}},\cdots,\bm{X}_{t-7\times N^{\text{d}}},\bm{X}_{t}\right]\in\mathbb{R}^{F^{\prime}\times N^{\text{L}}\times T^{\text{w}}}, (30)

where $T^{\text{r}}$, $T^{\text{d}}$, and $T^{\text{w}}$ indicate the lengths of the recent, daily, and weekly segments, respectively, and $N^{\text{d}}$ represents the number of times the optimization problem is solved per day.
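To make the segment construction of Eqs. (28)-(30) concrete, the sketch below slices the three segments from a stored history of graph feature matrices; the array layout (time axis first) and the example shapes are assumptions for illustration, and the time axis can be permuted afterwards to match the $F^{\prime}\times N^{\text{L}}\times T$ layout used in the paper.

```python
import numpy as np

def multi_grained_segments(history: np.ndarray, t: int,
                           T_r: int = 32, T_d: int = 16, T_w: int = 4,
                           N_d: int = 24):
    """Slice the recent/daily/weekly segments of Eqs. (28)-(30).

    history: array of shape (T_total, F, N_L) holding the feature matrix X_t per step.
    N_d:     number of optimization steps per day (24 at hourly resolution).
    """
    recent = history[t - T_r + 1 : t + 1]                           # X_{t-T^r+1}, ..., X_t
    daily_idx = [t - i * N_d for i in range(T_d - 1, -1, -1)]       # same hour on previous days
    weekly_idx = [t - i * 7 * N_d for i in range(T_w - 1, -1, -1)]  # same hour/weekday in previous weeks
    return recent, history[daily_idx], history[weekly_idx]

# Usage with one year of hourly data, 5 node features, 33 load nodes (hypothetical shapes)
hist = np.random.rand(8760, 5, 33)
x_r, x_d, x_w = multi_grained_segments(hist, t=5000)
print(x_r.shape, x_d.shape, x_w.shape)   # (32, 5, 33) (16, 5, 33) (4, 5, 33)
```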

The framework of the proposed MG-ASTGCN is illustrated in Fig. 3, consisting of a graph conversion operation, the multi-grained vector constructor, and the core ASTGCN, which takes the multi-grained segments as inputs and employs stacked ST components to extract ST information. The structure of one ST component is depicted in Fig. 4, including the ST attention mechanism and ST convolution, which are presented in detail in Sections III-A2 and III-A3, respectively.

III-A2 Spatial-Temporal Attention Mechanism

The key idea of the ST attention mechanism is to pay more attention to valuable graphical information in both the spatial and temporal perspectives, assisting the subsequent ST convolution in extracting more useful features for the DRL.

Spatial Attention: The mutual influences between neighboring node pairs in the DN vary dynamically due to changes in the power flow. To explore such mutual influences, an attention mechanism in the spatial dimension is developed to capture the dynamic correlations [31], which can be formulated as (we omit the notation of the multi-grained segments in the rest of this section for brevity)

\bm{S}=\bm{V}_{s}\odot\sigma\left[\left(\bm{\mathcal{X}}\bm{W}_{T}\right)^{T}\bm{W}_{FT}\left(\bm{W}_{F}\bm{\mathcal{X}}\right)^{T}+\bm{b}_{s}\right], (31)
\bm{S}\leftarrow\text{Softmax}\left(\bm{S}\right), (32)

where $\odot$ represents element-wise multiplication, $\sigma(\cdot)$ is the sigmoid activation function, $\bm{V}_{s}$, $\bm{W}_{T}$, $\bm{W}_{FT}$, $\bm{W}_{F}$, and $\bm{b}_{s}$ are all learnable parameters, and $\bm{S}$ is the spatial attention matrix, whose element $s_{i,j}$, namely the attention weight, semantically describes the correlation strength between the $i$-th and $j$-th nodes. The derived spatial attention matrix is adopted in the spatial graph convolution to adjust the spatial correlation strengths of node pairs, as shown in Fig. 4.
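The sketch below illustrates one way to implement the spatial attention of Eqs. (31)-(32) in PyTorch; the tensor layout (batch, N, F, T) and the parameter shapes follow the standard ASTGCN formulation and are assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Minimal sketch of the spatial attention in Eqs. (31)-(32)."""

    def __init__(self, num_nodes: int, num_features: int, num_steps: int):
        super().__init__()
        self.W_T = nn.Parameter(torch.randn(num_steps))                  # temporal projection
        self.W_FT = nn.Parameter(torch.randn(num_features, num_steps))   # feature-time coupling
        self.W_F = nn.Parameter(torch.randn(num_features))               # feature projection
        self.V_s = nn.Parameter(torch.randn(num_nodes, num_nodes))
        self.b_s = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, F, T) graph segment
        lhs = torch.matmul(torch.matmul(x, self.W_T), self.W_FT)         # (batch, N, T)
        rhs = torch.matmul(self.W_F, x).transpose(-1, -2)                # (batch, T, N)
        s = self.V_s * torch.sigmoid(torch.matmul(lhs, rhs) + self.b_s)  # (batch, N, N)
        return torch.softmax(s, dim=-1)                                  # row-wise softmax, Eq. (32)

# Example: a batch of 8 recent segments with 33 nodes, 5 features, 32 time steps
att = SpatialAttention(num_nodes=33, num_features=5, num_steps=32)
S = att(torch.randn(8, 33, 5, 32))
print(S.shape)   # torch.Size([8, 33, 33])
```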

Figure 3: The framework of the proposed MG-ASTGCN.
Figure 4: The structure of one ST component.

Temporal Attention: Similar to the spatial attention, the temporal attention mechanism [32] aims to track the temporal correlations of the changing node features, which can be formulated as

\bm{E}=\bm{V}_{e}\odot\sigma\left[\left(\bm{U}_{N}\bm{\mathcal{X}}_{\text{tatt}}\right)^{T}\bm{U}_{FN}\left(\bm{U}_{F}\bm{\mathcal{X}}\right)+\bm{b}_{e}\right], (33)
\bm{E}\leftarrow\text{Softmax}\left(\bm{E}\right), (34)

where $\bm{V}_{e}$, $\bm{U}_{N}$, $\bm{U}_{FN}$, $\bm{U}_{F}$, and $\bm{b}_{e}$ are learnable parameters, $\bm{\mathcal{X}}_{\text{tatt}}$ is the transposed form of the input segment $\bm{\mathcal{X}}$, and $\bm{E}$ is referred to as the temporal attention matrix, whose attention weight $e_{i,j}$ represents the temporal dependency between the two graph feature matrices $\bm{X}_{t-i}$ and $\bm{X}_{t-j}$. The temporal attention matrix is used to add temporal correlation information to the original input segment $\bm{\mathcal{X}}$, as shown in Fig. 4, which can be expressed as $\tilde{\bm{\mathcal{X}}}=\bm{\mathcal{X}}\bm{E}$.

III-A3 Spatial-Temporal Convolution

The ST convolution consists of spatial graph convolution and temporal convolution, aiming to compress the three-dimensional input segments and extract ST features that can be integrated with the DRL algorithm.

Spatial Graph Convolution: Graph convolution is a convolution operation implemented by replacing the classical convolution operator with linear operators that are diagonalized in the Fourier domain [33], which can be expressed as

\text{ReLU}\left(g_{\theta}*_{G}\tilde{\bm{\mathcal{X}}}\right) =\text{ReLU}\left[g_{\theta}\left(\bm{L}\right)\tilde{\bm{\mathcal{X}}}\right] =\text{ReLU}\left[\bm{\Lambda}^{T}\left(\bm{\Lambda}\tilde{\bm{\mathcal{X}}}\odot\bm{\Lambda}g_{\theta}\right)\right], (35)

where $*_{G}$ represents the graph convolution operator, $g_{\theta}$ is a convolution filter, $\bm{L}$ is the Laplacian matrix of the derived undirected graph $\mathcal{G}$, $\bm{\Lambda}$ is obtained from the eigenvalue decomposition of $\bm{L}$, and the rectified linear unit (ReLU) is adopted as the activation function. Since performing an eigenvalue decomposition of the Laplacian matrix is computationally expensive, Chebyshev polynomials are often used to approximate it in practice [34]. With the addition of the spatial correlation information from the spatial attention matrix $\bm{S}$, as shown in Fig. 4, the spatial graph convolution can be rewritten as

\text{ReLU}\left(g_{\theta}*_{G}\tilde{\bm{\mathcal{X}}}\right)\approx\sum_{k=0}^{K-1}\theta_{k}\left[T_{k}(\tilde{\bm{L}})\odot\bm{S}\right]\tilde{\bm{\mathcal{X}}}, (36)

where $\tilde{\bm{L}}$ is the normalized Laplacian matrix, $\theta_{k}$ is the coefficient of the Chebyshev polynomials, $K$ is the highest order of the Chebyshev polynomials, and $T_{k}$ is the $k$-th order Chebyshev polynomial.
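A simplified sketch of the attention-modulated Chebyshev graph convolution in Eq. (36) is given below; for readability each $\theta_{k}$ is treated as a scalar (practical implementations typically use per-feature weight matrices), and the tensor layout (N, F, T) and the random stand-in for $\tilde{\bm{L}}$ are assumptions.

```python
import torch

def cheb_polynomials(L_tilde: torch.Tensor, K: int):
    """Chebyshev polynomials T_0, ..., T_{K-1} of the scaled Laplacian."""
    n = L_tilde.shape[0]
    polys = [torch.eye(n), L_tilde.clone()]
    for _ in range(2, K):
        polys.append(2.0 * L_tilde @ polys[-1] - polys[-2])
    return polys[:K]

def spatial_graph_conv(x: torch.Tensor, S: torch.Tensor, cheb, theta) -> torch.Tensor:
    """Sketch of Eq. (36): sum_k theta_k [T_k(L~) (element-wise) S] X~.

    x:     (N, F, T) node features of one segment
    S:     (N, N) spatial attention matrix from Eqs. (31)-(32)
    cheb:  list of K Chebyshev polynomial matrices, each (N, N)
    theta: (K,) Chebyshev coefficients (scalars here for simplicity)
    """
    out = torch.zeros_like(x)
    for k, T_k in enumerate(cheb):
        kernel = T_k * S                                    # attention-modulated polynomial
        out = out + theta[k] * torch.einsum("nm,mft->nft", kernel, x)
    return torch.relu(out)

# Example on a random 33-node graph segment
N, F, T, K = 33, 5, 32, 3
L_tilde = torch.randn(N, N); L_tilde = 0.5 * (L_tilde + L_tilde.T)   # symmetric stand-in for L~
out = spatial_graph_conv(torch.randn(N, F, T), torch.softmax(torch.randn(N, N), dim=-1),
                         cheb_polynomials(L_tilde, K), theta=torch.ones(K))
print(out.shape)   # torch.Size([33, 5, 32])
```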

Temporal Convolution and Feature Compression: The temporal convolution operation takes the result of the spatial graph convolution as input and performs convolution along the temporal dimension, i.e., over $T^{\text{r}}$, $T^{\text{d}}$, and $T^{\text{w}}$, which can be formulated as

\bm{\mathcal{X}}^{\text{next}}=\text{ReLU}\left\{h_{\theta}*\left[\text{ReLU}\left(g_{\theta}*_{G}\tilde{\bm{\mathcal{X}}}\right)\right]\right\}, (37)

where $h_{\theta}$ is the temporal convolution filter and $\bm{\mathcal{X}}^{\text{next}}$ is the input to the following ST component.

To pass the extracted ST information to the DRL algorithm, the multi-time-scale outputs are fused and then fed into a fully-connected neural network layer (FCNNL) [35] for feature compression, as shown in Fig. 3, which can be formulated as

\bm{y}=\text{ReLU}\left\{\textbf{FCNNL}\left[\textbf{concat}\left(\bm{\mathcal{X}}^{\text{r}},\bm{\mathcal{X}}^{\text{d}},\bm{\mathcal{X}}^{\text{w}}\right)\right]\right\}, (38)

where concat represents the concatenating operation for the multi-time-scale outputs.
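The fusion step of Eq. (38) can be sketched as below; the flattened input dimension and the output size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Sketch of Eq. (38): concatenate the recent/daily/weekly ST features and
    compress them with a fully-connected layer followed by ReLU."""

    def __init__(self, in_dim: int, out_dim: int = 64):
        super().__init__()
        self.fc = nn.Linear(3 * in_dim, out_dim)   # plays the role of the FCNNL in Fig. 3

    def forward(self, x_r, x_d, x_w):
        # Each input: (batch, in_dim) flattened ST features of one time scale
        fused = torch.cat([x_r, x_d, x_w], dim=-1)
        return torch.relu(self.fc(fused))

# Example: fuse three 128-dimensional feature vectors into a 64-dimensional y_t
fusion = FeatureFusion(in_dim=128)
y = fusion(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128))
print(y.shape)   # torch.Size([4, 64])
```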

In summary, ST graphical information can be fully exploited by our MG-ASTGCN. The spatial attention mechanism explores spatial correlations of neighboring node pairs, while the temporal attention focuses on mining self-correlations of node features in the temporal view. Furthermore, based on the ST correlations provided by the preceding ST attention, the ST convolution extracts ST features of the underlying DN. The detailed process of the recent graph segment $\bm{\mathcal{X}}^{\text{r}}$ passing through one ST component is illustrated in Fig. 5.

Figure 5: The detailed process of one recent graph segment passing through one whole ST component.

III-B DRL-based Control Strategy

III-B1 MDP Modeling

Balancing the trade-off between voltage control and renewable accommodation in the DN can be considered as a consecutive decision-making process, which can be further modeled as an MDP [36] consisting of four critical parts: the state space $\mathcal{S}$, action space $\mathcal{A}$, probability space $\mathcal{P}$, and reward space $\mathcal{R}$.

State Space $\mathcal{S}$: The state of the $n$-th load node, denoted by $\bm{s}_{t,n}$, is its feature vector $\bm{x}_{t,n}$ defined in Eq. (26). Moreover, the extracted ST features $\bm{y}_{t}$ are aggregated with the DN's state. Thus, the state of the DN can be expressed as

\bm{s}_{t}=\left[\bm{s}_{t,1},\cdots,\bm{s}_{t,N^{\text{L}}},\bm{y}_{t}\right]. (39)

Action Space $\mathcal{A}$: For each power generator, only its active power $p_{t,i}$ and voltage magnitude $v_{t,i}$ can be manipulated. Thus, the action of the $i$-th power generator can be expressed as $\bm{a}_{t,i}=\left[p_{t,i},v_{t,i}\right]$. The actions of all generators in the DN can be defined as

\bm{a}_{t}=\left[\bm{a}_{t,1}^{\text{T}},\cdots,\bm{a}_{t,N^{\text{T}}}^{\text{T}},\bm{a}_{t,1}^{\text{W}},\cdots,\bm{a}_{t,N^{\text{W}}}^{\text{W}},\bm{a}_{t,1}^{\text{S}},\cdots,\bm{a}_{t,N^{\text{S}}}^{\text{S}}\right], (40)

where $\bm{a}_{t,i}^{\text{T}}$, $\bm{a}_{t,j}^{\text{W}}$, and $\bm{a}_{t,k}^{\text{S}}$ are the actions of the thermoelectric, wind, and solar PV generators. Note that our proposed strategy is also applicable to generators operating in the P/Q control mode by changing the generator's action space to active power $p_{t,i}$ and reactive power $q_{t,i}$.


Probability Space $\mathcal{P}$: $\mathcal{P}$ is the set of probabilities of transitioning to the next state $\bm{s}_{t+1}$ from the current state $\bm{s}_{t}$ after taking a deterministic action $\bm{a}_{t}$.

Reward Space $\mathcal{R}$: A reward $r_{t}$ is obtained after taking action $\bm{a}_{t}$ at state $\bm{s}_{t}$, indicating the effectiveness of the selected action. The goal of DRL is to learn an optimal action strategy $\pi(\bm{a}_{t}|\bm{s}_{t})$ that maximizes the expected cumulative reward. Hence, it is essential to encode the objective of the optimization problem into a reward function to facilitate DRL training. The reward function of the MDP can be formulated as

r_{t} =w^{\text{vol}}\left\{\sum_{n=1}^{N^{\text{L}}}\exp\left[-\left(1-\left|\mathbb{V}_{t,n}^{\text{L}}\right|\right)^{2}\right]\right\}^{\frac{1}{2}} +w^{\text{RER}}\left[\sum_{j=1}^{N^{\text{W}}}\exp\left(\frac{p_{t,j}^{\text{W,act}}}{\bar{p}_{t,j}^{\text{W}}}\right)+\sum_{k=1}^{N^{\text{S}}}\exp\left(\frac{p_{t,k}^{\text{S,act}}}{\bar{p}_{t,k}^{\text{S}}}\right)\right] +w^{\text{gen}}\left\{\sum_{i=1}^{N^{\text{T}}}\exp\left(-C_{t,i}^{\text{T}}\right)+\sum_{j=1}^{N^{\text{W}}}\exp\left[-\left(C^{\text{W,r}}_{t,j}+C^{\text{W,p}}_{t,j}\right)\right]+\sum_{k=1}^{N^{\text{S}}}\exp\left[-\left(C^{\text{S,r}}_{t,k}+C^{\text{S,p}}_{t,k}\right)\right]\right\}, (41)

where the three reward terms correspond to the objective functions of the optimization problem, i.e., voltage control $J^{\text{vol}}$, renewable accommodation $J^{\text{RER}}$, and generation cost minimization $J^{\text{gen}}$.

We then introduce how we handle the optimization constraints defined in Eqs. (10)-(25) and encode them as rewards in the DRL training process. For the power balance constraints in Eqs. (10) and (11), if these two constraints cannot be satisfied during training, a constant penalty term ($-10$ in our simulation by default) is added to the reward for the constraint violation. Moreover, the current training episode is terminated, the DN environment is reset, and the input MDP state for the next episode is randomly initialized. For the load voltage constraint, the generator reactive power constraints, and the branch flow constraint presented in Eqs. (14), (22)-(24), and (25), respectively, the same constant penalty term ($-10$ by default) is added to the reward as violation feedback if these constraints are violated. The generator voltage and active power constraints defined in Eqs. (15)-(17) and (18)-(21), respectively, cannot be violated. This is because the actions of each generator are its power and voltage, defined as $\bm{a}_{t,i}=[p_{t,i},v_{t,i}]$, so the lower and upper limits of these constraints are encoded as the bounds of the MDP's action space. As a result, the action values of each generator always satisfy these constraints.
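To summarize how the reward and the violation penalty are combined, a minimal sketch is given below; the function signature, array shapes, and variable names are assumptions, and the cost inputs are assumed to be computed elsewhere via Eqs. (3)-(7).

```python
import numpy as np

PENALTY = -10.0   # constant penalty for a constraint violation (Section III-B1)

def step_reward(v_load, p_wind_act, p_wind_max, p_solar_act, p_solar_max,
                c_thermo, c_wind_mismatch, c_solar_mismatch,
                w_vol=1.0, w_rer=1.0, w_gen=0.01, violated=False):
    """Sketch of the reward in Eq. (41) plus the violation penalty.

    v_load:           complex load-node voltages (p.u.)
    *_act / *_max:    actual and maximum RER outputs at time t
    c_thermo:         per-generator thermoelectric costs C^T_{t,i}
    c_*_mismatch:     per-generator reserve-plus-penalty costs (C^r + C^p)
    violated:         True if an encoded constraint (e.g., power balance) is broken
    """
    r_vol = w_vol * np.sqrt(np.sum(np.exp(-(1.0 - np.abs(v_load)) ** 2)))
    r_rer = w_rer * (np.sum(np.exp(np.asarray(p_wind_act) / np.asarray(p_wind_max)))
                     + np.sum(np.exp(np.asarray(p_solar_act) / np.asarray(p_solar_max))))
    r_gen = w_gen * (np.sum(np.exp(-np.asarray(c_thermo)))
                     + np.sum(np.exp(-np.asarray(c_wind_mismatch)))
                     + np.sum(np.exp(-np.asarray(c_solar_mismatch))))
    reward = r_vol + r_rer + r_gen
    return reward + PENALTY if violated else reward
```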

III-B2 Solving MDP by DDPG

The objective of DRL is to maximize the expected cumulative reward, denoted by $\bar{R}_{\theta}$, which can be formulated as

\bar{R}_{\theta} =\mathbb{E}_{\bm{a}_{t}\sim\pi,r_{t},\bm{s}_{t+1}\sim\mathbb{P}}\left[R(\tau)\right] =\sum_{\tau}R(\tau)P(\tau\mid\theta), (42)

where $\tau$ represents the trajectory of the MDP transitions, recording all 4-tuple transitions $\{\bm{s}_{t},\bm{a}_{t},r_{t},\bm{s}_{t+1}\}$ from the beginning of $\tau$ to its end, $R(\tau)$ represents the cumulative reward of the trajectory, $\theta$ represents the parameters of the action strategy $\pi$, and $P\left(\tau|\theta\right)$ is the occurrence probability of the trajectory $\tau$.

We then introduce DDPG [37] to maximize $\bar{R}_{\theta}$. DDPG is a representative actor-critic DRL algorithm for optimizing the derived MDP. The major difference between DDPG and many other DRL algorithms is that its action policy $\pi(\bm{a}_{t}|\bm{s}_{t})$ deterministically outputs action values instead of a probability distribution over actions, which substantially decreases the computation cost and makes it much easier to implement. Specifically, the policy gradient method is applied in DDPG to update the action policy, which can be formulated as

\theta\leftarrow\theta+\eta_{\theta}\nabla\bar{R}_{\theta}, (43)

with the gradient of the DRL objective defined as

\nabla\bar{R}_{\theta}=\frac{1}{N_{\tau}}\sum^{N_{\tau}}\sum^{T_{\tau}}\mathcal{A}_{\theta}\left(\bm{s},\bm{a}\right)\nabla\log p\left(\bm{a}|\bm{s},\theta\right), (44)

where $\eta_{\theta}$ is the learning rate, $N_{\tau}$ is the number of trajectories, $T_{\tau}$ is the length of each trajectory, and $\mathcal{A}_{\theta}\left(\bm{s},\bm{a}\right)$ represents the advantage function assessing the effectiveness of the state-action pair compared to a certain baseline, which can be formulated as

\mathcal{A}_{\theta}\left(\bm{s}_{t},\bm{a}_{t}\right) =G_{t}-b =\sum_{t^{\prime}=t}^{T_{\tau}}\gamma^{t^{\prime}-t}r_{t^{\prime}}-\mathbb{E}_{s\sim\mathbb{P}}\left[R(\bm{s})\right], (45)

where $\gamma$ is a discount factor, $b$ represents the baseline reward considering all possible actions, and $\mathbb{E}_{s\sim\mathbb{P}}\left[R(\bm{s})\right]$ is the expected cumulative reward considering all possible states at the current time step. Due to the uncertainty of the MDP transitions, both $G_{t}$ and $b$ are random variables. To accurately estimate the advantage function, DDPG introduces a critic network $Q_{\phi}(\bm{s}_{t},\bm{a}_{t})$, formulated as

Q_{\phi}(\bm{s}_{t},\bm{a}_{t})=\mathbb{E}_{r_{t},\bm{s}_{t+1}\sim\mathbb{P}}\left\{r_{t}+\gamma Q_{\phi}\left[\bm{s}_{t+1},\pi_{\theta}\left(\bm{s}_{t+1}\right)\right]\right\}, (46)

where $\phi$ represents the parameters of the critic network. With the critic network, the advantage function in Eq. (45) can be rewritten as

\mathcal{A}_{\theta}\left(\bm{s}_{t},\bm{a}_{t}\right)=Q_{\phi}\left(\bm{s}_{t},\bm{a}_{t}\right)-\sum_{\bm{a}_{t}}Q_{\phi}\left(\bm{s}_{t},\bm{a}_{t}\right). (47)
Algorithm 1 The DRL-based Control Strategy
  Initialize the action policy $\pi_{\theta}(\bm{a}|\bm{s})$ and the critic network $Q_{\phi}(\bm{s},\bm{a})$
  Initialize the target networks $\pi_{\theta^{\prime}}$ and $Q_{\phi^{\prime}}$
  Initialize the replay buffer $\mathcal{B}$
  for $n=1,2,\cdots,N_{\tau}$ do
     Initialize a Gaussian noise $\mathcal{N}$ for action exploration
     for $t=1,2,\cdots,T_{\tau}$ do
        Receive the output of the MG-ASTGCN $\bm{y}_{t}$
        Prepare the input state $\bm{s}_{t}$
        Get action $\bm{a}_{t}=\pi_{\theta}(\bm{s}_{t})+\mathcal{N}(0,0.1)$
        Get reward $r_{t}$ and transit to the next state $\bm{s}_{t+1}$
        Store the transition $\{\bm{s}_{t},\bm{a}_{t},r_{t},\bm{s}_{t+1}\}$ in $\mathcal{B}$
        Randomly sample a batch of $N$ transitions from $\mathcal{B}$
        Calculate $q_{t}$ using the target networks
        Update the critic network $Q_{\phi}$: $\phi\leftarrow\phi-\eta_{\phi}\nabla_{\phi}L(Q_{\phi})$
        Update the action policy $\pi_{\theta}$: $\theta\leftarrow\theta+\eta_{\theta}\nabla_{\theta}\bar{R}_{\theta}$
        Update the target networks via soft updates:
           $\theta^{\prime}\leftarrow\rho\theta+(1-\rho)\theta^{\prime}$
           $\phi^{\prime}\leftarrow\rho\phi+(1-\rho)\phi^{\prime}$
     end for
  end for

Given that the value of the critic network depends on the environment rather than on our action policy $\pi$, the critic network $Q_{\phi}$ can be learned in an off-policy manner, taking advantage of transitions generated by a different action policy denoted by $\pi_{\theta^{\prime}}$. The critic network can be updated by minimizing the mean squared error formulated as

L\left(Q_{\phi}\right)=\mathbb{E}_{\bm{s}_{t},r_{t}\sim\mathbb{P},\bm{a}_{t}\sim\pi_{\theta}(\bm{s}_{t})}\left\{\left[Q_{\phi}\left(\bm{s}_{t},\bm{a}_{t}\right)-q_{t}\right]^{2}\right\}, (48)

with the target value $q_{t}$, estimated via the target action and critic networks $\pi_{\theta^{\prime}}$ and $Q_{\phi^{\prime}}$, defined as

q_{t}=r_{t}+\gamma Q_{\phi^{\prime}}\left[\bm{s}_{t+1},\pi_{\theta^{\prime}}\left(\bm{s}_{t+1}\right)\right]. (49)

Similarly, the critic network is updated via gradient descent, formulated as

\phi\leftarrow\phi-\eta_{\phi}\nabla_{\phi}L(Q_{\phi}), (50)

where $\eta_{\phi}$ is the learning rate for updating the critic network.

The two target networks are updated in an exponential moving average manner using the parameters of their corresponding action and critic networks, which can be formulated as

\theta^{\prime} \leftarrow\rho\theta+(1-\rho)\theta^{\prime}, (51)
\phi^{\prime} \leftarrow\rho\phi+(1-\rho)\phi^{\prime}, (52)

where $\rho$ is the smoothing parameter.
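The full update loop of Algorithm 1 can be condensed into the sketch below; the actor/critic modules, their optimizers, and the batch format are assumptions (any torch.nn modules with the shown call signatures would do), and only the update logic of Eqs. (48)-(52) is illustrated.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, rho=0.01):
    """One DDPG update step following Eqs. (48)-(52)."""
    s, a, r, s_next = batch   # tensors of shape (B, dim_s), (B, dim_a), (B, 1), (B, dim_s)

    # Target value q_t = r_t + gamma * Q'(s_{t+1}, pi'(s_{t+1}))          (Eq. (49))
    with torch.no_grad():
        q_target = r + gamma * target_critic(s_next, target_actor(s_next))

    # Critic loss L(Q_phi) and gradient step                              (Eqs. (48), (50))
    critic_loss = F.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient step: ascend Q(s, pi(s))              (Eq. (43))
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft (exponential moving average) target updates                    (Eqs. (51)-(52))
    with torch.no_grad():
        for p, p_t in zip(actor.parameters(), target_actor.parameters()):
            p_t.mul_(1.0 - rho).add_(rho * p)
        for p, p_t in zip(critic.parameters(), target_critic.parameters()):
            p_t.mul_(1.0 - rho).add_(rho * p)
```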

In summary, DDPG follows a policy-gradient approach to optimize the derived MDP, where a critic network is introduced to assess the learned action policy. The workflow and algorithmic procedure of DDPG are presented in Fig. 6 and Algorithm 1, respectively.

Figure 6: The workflow of DDPG.
TABLE I: System characteristics of IEEE 33, 69, and 118-bus RDSs.
RDS Characteristics | 33-Bus | 69-Bus | 118-Bus
Total Buses ($N^{\text{L}}$) | 33 | 69 | 118
Thermoelectric Generators ($N^{\text{T}}$) | 2 | 3 | 4
Wind Turbines ($N^{\text{W}}$) | 5 | 10 | 15
Solar PV Generators ($N^{\text{S}}$) | 5 | 10 | 15
Baseline Voltage (kV) | 12.66 | 12.66 | 11
Baseline Apparent Power (MVA) | 100 | 100 | 100
Total Load Active Power (MW) | 3.715 | 3.800 | 22.710
Total Load Reactive Power (MVAR) | 2.300 | 2.690 | 17.041
TABLE II: The initialized parameters.
$a_{i}$ | 0.0175 | $b_{i}$ | 1.75
$c_{i}$ | 0 | $c_{j}^{\text{W,r}}, c_{k}^{\text{S,r}}$ | 1.5
$c_{j}^{\text{W,p}}, c_{k}^{\text{S,p}}$ | 3 | $w^{\text{vol}}$ | 1
$w^{\text{RER}}$ | 1 | $w^{\text{gen}}$ | 0.01
$T^{\text{r}}$ | 32 | $T^{\text{d}}$ | 16
$T^{\text{w}}$ | 4 | $N^{\text{d}}$ | 24
$\gamma$ | 0.99 | $\rho$ | 0.01
$\eta_{\theta}$ | 0.0003 | $\eta_{\phi}$ | 0.0003
Figure 7: The power output profiles of one wind generator and one solar generator throughout one day in a part of the IEEE 118-bus test system. (a) Power Output Profiles; (b) Part of 118-Bus System.

IV Experiments and Results

IV-A Experimental Settings

IV-A1 Evaluation Scenario

The proposed DRL-based strategy is tested on the modified IEEE 33-bus, 69-bus, and 118-bus radial distribution systems (RDSs) [38], with their detailed statistics provided in Table I. The time interval between two consecutive operational time steps is one hour, aligning with previous studies on voltage control in DNs [22, 39, 25, 40]. The simulation lasts for one year to ensure sufficient DRL training. One Nvidia TITAN RTX graphics processing unit is used for DRL training. The batch size of the DDPG algorithm is set to 256. The random noise employed in the DDPG is a Gaussian noise $\mathcal{N}(\mu,\sigma)$ with its mean $\mu$ and standard deviation $\sigma$ set to 0 and 0.1, respectively. The number of hidden layers for the actor and critic networks is 3, each consisting of 400 neurons. We adopt the Adam optimizer to update the actor ($\pi_{\theta}$) and critic ($Q_{\phi}$) networks. The number of ST components inside our MG-ASTGCN is 3. The initialized parameters of our DRL-based control strategy are provided in Table II.

Note that we use the same reserve and penalty cost coefficients for the wind and solar generators as provided by [41], i.e., $c_{j}^{\text{W,r}}=c_{k}^{\text{S,r}}=1.5$ and $c_{j}^{\text{W,p}}=c_{k}^{\text{S,p}}=3$. For the cost coefficients of the thermoelectric generators, we use the coefficients provided by [28], i.e., $a_{i}=0.0175$, $b_{i}=1.75$, and $c_{i}=0$. Additionally, we provide the power output profiles of one wind generator and one solar generator throughout one day in a part of the IEEE 118-bus test system in Fig. 7.

IV-A2 Algorithm Performance Metric

In our experiments, the reward function defined in Eq. (41) is adopted to measure the performance of both the DDPG and the benchmark algorithms. Specifically, we introduce three metrics, namely the SCORE, the voltage fluctuation rate denoted by $\alpha^{\text{vol}}$, and the RER accommodation rate denoted by $\alpha^{\text{RER}}$, to examine the effectiveness of the learned control strategy, voltage control, and renewable accommodation, respectively, which are defined as

\text{SCORE} =\frac{1}{N^{\text{eval}}}\sum_{n=1}^{N^{\text{eval}}}\sum_{t=1}^{T^{\text{end}}}r_{t}, (53)
\alpha^{\text{vol}} =\frac{1}{N^{\text{eval}}T^{\text{end}}}\sum_{n=1}^{N^{\text{eval}}}\sum_{t=1}^{T^{\text{end}}}J_{t}^{\text{vol}}\times 100\%, (54)
\alpha^{\text{RER}} =\frac{1}{N^{\text{eval}}T^{\text{end}}}\frac{1}{N^{\text{W}}+N^{\text{S}}}\sum_{n=1}^{N^{\text{eval}}}\sum_{t=1}^{T^{\text{end}}}J_{t}^{\text{RER}}\times 100\%, (55)

where $N^{\text{eval}}$ is the number of evaluation episodes and $T^{\text{end}}$ is the length of each episode. Both $N^{\text{eval}}$ and $T^{\text{end}}$ are set to 100.
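The three metrics can be computed from logged evaluation data as in the sketch below; the array shapes (per-step values stacked over $N^{\text{eval}}$ episodes of length $T^{\text{end}}$) are assumptions.

```python
import numpy as np

def evaluation_metrics(rewards, j_vol, j_rer, num_rer_generators):
    """Compute SCORE, the voltage fluctuation rate, and the RER accommodation
    rate of Eqs. (53)-(55) from per-step logs of r_t, J_t^vol, and J_t^RER.

    rewards, j_vol, j_rer: arrays of shape (N_eval, T_end).
    """
    rewards, j_vol, j_rer = map(np.asarray, (rewards, j_vol, j_rer))
    n_eval, t_end = rewards.shape
    score = rewards.sum() / n_eval
    alpha_vol = j_vol.sum() / (n_eval * t_end) * 100.0
    alpha_rer = j_rer.sum() / (n_eval * t_end * num_rer_generators) * 100.0
    return score, alpha_vol, alpha_rer

# Example with random logs for 100 episodes of 100 steps and 10 RER generators
logs = [np.random.rand(100, 100) for _ in range(3)]
print(evaluation_metrics(*logs, num_rer_generators=10))
```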

TABLE III: The evaluation results of four benchmarks, i.e., HHO, GWO, IP, and LQP, and our DRL-based method.
 |  | HHO | GWO | IP | LQP | Ours
33-Bus | Time Cost per Step (secs) | 2.4 | 3.4 | 2.1 | 3.2 | 0.9
 | SCORE | 3531 | 3014 | 3681 | 3778 | 4015
 | Voltage Fluctuation Rate | 1.52% | 2.36% | 0.95% | 0.58% | 0.22%
 | RER Accommodation Rate | 78.9% | 77.1% | 86.2% | 84.4% | 94.2%
69-Bus | Time Cost per Step (secs) | 6.1 | 8.1 | 5.5 | 9.2 | 1.3
 | SCORE | 6105 | 5822 | 6716 | 6704 | 7757
 | Voltage Fluctuation Rate | 1.98% | 2.59% | 0.69% | 0.85% | 0.19%
 | RER Accommodation Rate | 80.5% | 82.8% | 84.5% | 89.2% | 93.4%
118-Bus | Time Cost per Step (secs) | 12.7 | 15.8 | 9.2 | 17.0 | 1.6
 | SCORE | 8232 | 7481 | 11614 | 12171 | 14384
 | Voltage Fluctuation Rate | 2.48% | 3.04% | 0.58% | 0.52% | 0.20%
 | RER Accommodation Rate | 82.9% | 77.6% | 90.3% | 86.4% | 94.5%
Figure 8: The evaluation rewards of HHO, GWO, IP, LQP, and the DRL-based strategy on the three RDSs in the first 100 time steps. (a) 33-bus; (b) 69-bus; (c) 118-bus.

IV-B Experimental Results

IV-B1 Optimization-based Benchmark Comparisons

Two representative optimization-based algorithms, HHO [19] and GWO [20], are adopted for comparison with our DRL-based strategy. Benefiting from their meta-heuristic nature, both the HHO and GWO algorithms can approach the optimal control strategy without modifying or relaxing the optimization formulation, and they have demonstrated good convergence and robustness in previous studies [41, 42, 43]. In addition, we introduce two relaxation-based optimization methods, namely the interior-point (IP)-based method [44] and the linear/quadratic programming (LQP)-based method [45], as benchmarks. The evaluation results of these methods on the IEEE 33, 69, and 118-bus RDSs are illustrated in Fig. 8, with the associated statistics presented in Table III. The results reveal that our proposed DRL-based strategy outperforms all benchmarks by significant margins in terms of shorter evaluation time, higher SCOREs, lower voltage fluctuation rates, and greater renewable accommodation. The reason for these performance gaps is twofold:

  • The optimization-based benchmarks do not involve offline training and must search for a solution from scratch at each time step, resulting in longer computation times during evaluation. Meanwhile, since they tend to get stuck in local optima, their output actions are less likely to be optimal, resulting in smaller rewards.

  • The large amount of data collected by the DRL-based strategy during training contributes to a more effective and efficient search of the action space. Therefore, the DRL-based strategy reacts much faster at the beginning of evaluation and gradually obtains higher rewards, especially in large-scale DNs.

TABLE IV: The voltage fluctuation rates and renewable accommodation rates using different weight coefficients of the objective functions.
 | $w^{\text{vol}}$ | $w^{\text{RER}}$ | $w^{\text{gen}}$ | Voltage Fluctuation | Renewable Accommodation
33-Bus | 1 | 1 | 0.01 | 0.22% | 94.2%
 | 1 | 1 | 0.005 | 0.89% | 92.1%
 | 1 | 1 | 0.05 | 2.05% | 88.4%
 | 0.5 | 0.5 | 0.01 | 1.98% | 84.9%
 | 5 | 5 | 0.01 | 0.41% | 85.2%
69-Bus | 1 | 1 | 0.01 | 0.19% | 93.4%
 | 1 | 1 | 0.005 | 1.14% | 88.5%
 | 1 | 1 | 0.05 | 0.67% | 85.2%
 | 0.5 | 0.5 | 0.01 | 2.45% | 89.2%
 | 5 | 5 | 0.01 | 0.39% | 81.9%
118-Bus | 1 | 1 | 0.01 | 0.20% | 94.5%
 | 1 | 1 | 0.005 | 0.41% | 85.2%
 | 1 | 1 | 0.05 | 0.89% | 90.1%
 | 0.5 | 0.5 | 0.01 | 2.23% | 87.6%
 | 5 | 5 | 0.01 | 0.48% | 82.9%
TABLE V: The average training time cost per episode of ACER, A2C, PPO, SAC, and DDPG.
 | 33-Bus | 69-Bus | 118-Bus
A2C | 31 secs | 78 secs | 105 secs
ACER | 39 secs | 95 secs | 136 secs
PPO | 36 secs | 80 secs | 106 secs
SAC | 38 secs | 86 secs | 126 secs
DDPG | 30 secs | 74 secs | 102 secs

Additionally, we conducted a parameter search over the weight coefficients of the objective functions, i.e., $w^{\text{vol}}$, $w^{\text{RER}}$, and $w^{\text{gen}}$, to determine their values. We trained and evaluated our DRL-based strategy with different weight combinations, and the associated voltage fluctuation rates and renewable accommodation rates are presented in Table IV. We always set the same weight for voltage control and renewable accommodation because we assume the two objectives are of equal significance in our simulation. The parameter search results support our current weight setting, i.e., $w^{\text{vol}}=1$, $w^{\text{RER}}=1$, and $w^{\text{gen}}=0.01$, since it achieves the best performance in terms of a lower voltage fluctuation rate and a higher renewable accommodation rate.

Figure 9: The average episode rewards over every 100 training episodes for the ACER, A2C, PPO, SAC, and DDPG algorithms. (a) 33-Bus; (b) 69-Bus; (c) 118-Bus.
Figure 10: The average episode rewards over every 100 training episodes for MLP-DRL, Conv-DRL, JS-DRL, CS-DRL, and STA-DRL. (a) 33-bus; (b) 69-bus; (c) 118-bus.
Figure 11: The evaluation rewards of MLP-DRL, Conv-DRL, JS-DRL, CS-DRL, and STA-DRL on the three RDSs in the first 100 time steps. (a) 33-bus; (b) 69-bus; (c) 118-bus.

IV-B2 DRL-based Baseline Comparisons

We also compared our approach with other state-of-the-art DRL algorithms, including PPO [46] and SAC [47], as well as two additional baselines: actor-critic with experience replay (ACER) [48] and advantage actor-critic (A2C) [49]. The training reward curves of these DRL algorithms are depicted in Fig. 9. The results show that the SAC algorithm performs slightly better than our DDPG algorithm, while the performances of the DDPG and PPO algorithms are quite close. The reason we still choose the DDPG algorithm is twofold: a) DDPG has a lower training time cost than the PPO and SAC algorithms, as presented in Table V; specifically, the DDPG algorithm has the lowest average training time cost per episode (30, 74, and 102 seconds for the three respective test systems), outperforming the PPO and SAC algorithms by significant margins; and b) DDPG is relatively easy to implement, with fewer hyper-parameters to tune.

IV-B3 Effectiveness of ST Attention and ST Graphical Information of the DN

To evaluate the effectiveness of the ST attention mechanism and its impact on the downstream DDPG, we substitute it with two alternative correlation extraction techniques, cosine similarity (CS) and Jaccard similarity (JS); the corresponding DRL strategies are termed “CS-DRL” and “JS-DRL”, respectively, while our method with ST attention is termed “STA-DRL”. We also design a baseline without the ST attention, i.e., keeping only the ST convolution module of our proposed MG-ASTGCN, termed “Conv-DRL”. Furthermore, to fully verify the value of the extracted graphical information, we additionally introduce a multilayer-perceptron-based (MLP-based) network structure for the DDPG algorithm, termed “MLP-DRL”, in which every layer is a fully-connected layer and which therefore cannot extract the ST graphical information of the DN. The training and evaluation results of these methods are depicted in Fig. 10 and Fig. 11, respectively. Note that each data point in Fig. 10 is obtained by averaging the rewards of 100 episodes; for each episode, the maximum number of time steps is 128, and the episode reward is the sum of the per-step rewards.
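As a rough illustration of the CS and JS baselines, the snippet below computes a pairwise cosine-similarity matrix over node feature vectors and a Jaccard-similarity matrix over node neighborhoods; the feature construction and the exact similarity inputs used in our experiments may differ, so the shapes and names here are assumptions.

```python
import numpy as np

def cosine_similarity_matrix(node_features):
    """node_features: (N, F) array with one feature vector per node."""
    x = np.asarray(node_features, dtype=float)
    x_unit = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-9)
    return x_unit @ x_unit.T                     # (N, N) pairwise cosine similarity

def jaccard_similarity_matrix(adjacency):
    """adjacency: (N, N) 0/1 adjacency matrix of the distribution network graph."""
    a = (np.asarray(adjacency) > 0).astype(int)
    n = a.shape[0]
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.sum(a[i] & a[j])          # neighbors shared by nodes i and j
            union = np.sum(a[i] | a[j])          # neighbors of either node
            sim[i, j] = sim[j, i] = inter / union if union else 0.0
    return sim
```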

Based on the simulation results, we summarize several noteworthy observations regarding the effectiveness of the ST attention and the extracted graphical information as follows.

  • The outstanding training performance of our STA-DRL method indicates that the adoption of ST attention can dramatically increase DDPG’s convergence speed, leading to a more effective search in the underlying action space based on the extracted ST information. As a result, our STA-DRL method significantly outperforms all baselines during evaluation as shown in Fig. 11.

  • The alternative correlation extraction methods, i.e., CS-DRL and JS-DRL, are less effective than the ST attention, since they only capture degree-related correlations among nodes while ignoring node features and their temporal dependencies. Interestingly, MLP-DRL even surpasses CS-DRL, JS-DRL, and Conv-DRL, suggesting that poorly extracted graphical information can be counterproductive and that an effective and suitable extraction method, e.g., our proposed MG-ASTGCN, is crucial for deriving well-performing strategies for voltage control and renewable accommodation.

Additionally, we observe two interesting phenomena from spatial and temporal attention matrices, which are illustrated in Fig. 12 and Fig. 13, respectively.

  • In Fig. 12, the spatial attention mechanism tends to assign larger attention weights to node pairs with more generator integration. For instance, although Bus 2 and Bus 114 are not adjacent, they are connected through Bus 1 (connected to three generators) and Bus 100 (connected to two generators). It is therefore reasonable that the correlation strength between Bus 2 and Bus 114 is more significant than that of other nonadjacent node pairs (a minimal sketch of how such attention weights can be formed is given after this list).

  • In the temporal attention for the recent graph segment, the correlation strengths between current and historical node features drop sharply beyond roughly the 10-th historical time step, which may indicate that the latest 10 feature vectors of a node carry the most significant temporal correlation information.
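For intuition on how such node-pair attention weights arise, the sketch below uses a generic scaled dot-product formulation; the actual spatial attention in MG-ASTGCN is parameterized differently, so this is only an assumption-laden illustration, and `w_query`/`w_key` stand in for hypothetical learned projections.

```python
import numpy as np

def spatial_attention_scores(node_features, w_query, w_key):
    """node_features: (N, F); w_query, w_key: (F, d) projection matrices.
    Returns an (N, N) row-stochastic matrix whose entry (i, j) is the attention
    weight node i assigns to node j (e.g., Bus 2 attending to Bus 114)."""
    q = node_features @ w_query                   # queries, shape (N, d)
    k = node_features @ w_key                     # keys, shape (N, d)
    scores = q @ k.T / np.sqrt(k.shape[1])        # scaled dot-product scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability for softmax
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)
```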

Figure 12: The partial spatial attention matrix and its corresponding sub-graph in the IEEE 118-bus RDS.
Figure 13: The partial temporal attention matrix of Bus 1's recent segment in the IEEE 118-bus RDS.

IV-B4 Stability Test of the DRL-based Strategy

We define the average response time, i.e., the number of time steps the DN takes to recover node voltages to their normal levels, to assess the stability of the DN in the event of generator faults. Fig. 14a illustrates the response times of HHO, GWO, and the proposed DRL-based strategy under different numbers of faulted generators: the DRL's response time grows roughly linearly, in contrast to the exponential growth of the two heuristic algorithms. In addition, Fig. 14b presents a more detailed case study with one faulted generator in the IEEE 69-bus RDS, indicating that the stability of the DN can be significantly improved by our DRL-based strategy.
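A minimal sketch of how the response time could be computed from a logged post-fault voltage trajectory is given below; the normal operating band (0.95-1.05 p.u.) and the array layout are assumptions for illustration.

```python
import numpy as np

def response_time(v_trajectory, v_min=0.95, v_max=1.05):
    """Number of time steps until all node voltages return to the normal band
    and stay there for the remainder of the recorded trajectory.

    v_trajectory: (T, N) array of node voltages (p.u.) recorded after a fault.
    Returns the recovery step index, or T if voltages never settle in the band.
    """
    v = np.asarray(v_trajectory, dtype=float)
    in_band = np.all((v >= v_min) & (v <= v_max), axis=1)   # (T,) per-step flag
    for t in range(len(in_band)):
        if in_band[t:].all():        # in the band at step t and at every later step
            return t
    return len(in_band)
```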

Figure 14: (a) Response time under different numbers of faulted generators in the IEEE 69-bus RDS; (b) case study of a generator fault: one generator failed in the IEEE 69-bus RDS.

IV-B5 Trade-Off Between Voltage Fluctuation Control and Renewable Accommodation

The weights $w^{\text{vol}}$, $w^{\text{RER}}$, and $w^{\text{gen}}$ in our designed reward function represent the relative importance of the objectives when learning the optimal control strategy. We trained and evaluated our DRL-based strategy with different voltage control weights $w^{\text{vol}}$ to investigate its capability in voltage fluctuation control; the results are presented in Table VI, and Bus 15's voltage fluctuation profile in the IEEE 69-bus RDS is illustrated in Fig. 15. Surprisingly, tighter voltage control (obtained by increasing $w^{\text{vol}}$) leads to performance degradation of our DRL-based strategy, i.e., lower SCOREs, as shown in Table VI. This suggests that overemphasizing voltage fluctuation control may undermine the strategy's overall performance.

Furthermore, we found that while voltage control performance improves (i.e., $\alpha^{\text{vol}}$ decreases) as the weight coefficient $w^{\text{vol}}$ increases, the renewable accommodation rate $\alpha^{\text{RER}}$ conversely decreases, as shown in Table VII. Since the DDPG's performance depends on both voltage control and renewable accommodation, the decreasing renewable accommodation rate can explain the degradation of the DDPG's performance. Similarly, overemphasizing renewable accommodation also undermines DDPG's performance due to the increasing voltage fluctuation rates, as shown in Table VIII. Based on the evaluation results in Tables VII and VIII, we may conclude that controlling voltage fluctuations and accommodating renewables constitute a trade-off, so striking an effective balance between the two appears to be the best solution. According to the parameter search results in Table IV, our current weight setting, i.e., $w^{\text{vol}}=1$, $w^{\text{RER}}=1$, and $w^{\text{gen}}=0.01$, appears to achieve the best balance between voltage control and renewable accommodation. In practice, if stricter voltage control is required, the weight coefficient $w^{\text{vol}}$ can be increased to mitigate voltage fluctuations more aggressively, though the renewable accommodation rate will inevitably decrease; similarly, more renewable generation can be integrated into the DN by increasing the weight coefficient $w^{\text{RER}}$.
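For reference, the two evaluation metrics could be computed from logged trajectories roughly as sketched below; these are assumed definitions for illustration and may differ in detail from the formulas given earlier in the paper.

```python
import numpy as np

def voltage_fluctuation_rate(v_log, v_ref=1.0):
    """alpha_vol: mean relative voltage deviation over all steps and nodes
    (e.g., a value of 0.002 corresponds to the 0.2% entries in Table VI)."""
    v = np.asarray(v_log, dtype=float)            # shape (T, N), voltages in p.u.
    return float(np.mean(np.abs(v - v_ref) / v_ref))

def renewable_accommodation_rate(p_dispatched, p_available):
    """alpha_RER: share of available renewable energy actually accommodated."""
    return float(np.sum(p_dispatched) / max(np.sum(p_available), 1e-9))
```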

Figure 15: Voltage fluctuation of Bus 15 in the IEEE 69-bus RDS.
TABLE VI: The SCOREs and voltage fluctuation rates with different voltage fluctuation control weights.
$w^{\text{vol}}$ | SCORE (33-Bus) | $\alpha^{\text{vol}}$ (33-Bus) | SCORE (69-Bus) | $\alpha^{\text{vol}}$ (69-Bus) | SCORE (118-Bus) | $\alpha^{\text{vol}}$ (118-Bus)
1 | 4818 | 0.22% | 8533 | 0.19% | 15822 | 0.20%
2 | 4775 | 0.20% | 8300 | 0.18% | 15352 | 0.18%
3 | 4760 | 0.18% | 8394 | 0.15% | 15720 | 0.16%
4 | 4814 | 0.16% | 8481 | 0.14% | 15731 | 0.15%
5 | 4824 | 0.15% | 8513 | 0.15% | 15842 | 0.14%
TABLE VII: The voltage fluctuation rates and renewable accommodation rates with different voltage fluctuation control weights.
$w^{\text{vol}}$ | $\alpha^{\text{RER}}$ (33-Bus) | $\alpha^{\text{vol}}$ (33-Bus) | $\alpha^{\text{RER}}$ (69-Bus) | $\alpha^{\text{vol}}$ (69-Bus) | $\alpha^{\text{RER}}$ (118-Bus) | $\alpha^{\text{vol}}$ (118-Bus)
1 | 94.2% | 0.22% | 93.4% | 0.19% | 94.5% | 0.20%
2 | 90.8% | 0.22% | 91.4% | 0.15% | 91.0% | 0.23%
3 | 88.2% | 0.21% | 89.9% | 0.17% | 89.5% | 0.18%
4 | 86.6% | 0.15% | 86.1% | 0.14% | 87.2% | 0.15%
5 | 84.3% | 0.14% | 85.7% | 0.15% | 86.1% | 0.16%
TABLE VIII: The voltage fluctuation rates and renewable accommodation rates with different renewable accommodation weights.
$w^{\text{RER}}$ | $\alpha^{\text{RER}}$ (33-Bus) | $\alpha^{\text{vol}}$ (33-Bus) | $\alpha^{\text{RER}}$ (69-Bus) | $\alpha^{\text{vol}}$ (69-Bus) | $\alpha^{\text{RER}}$ (118-Bus) | $\alpha^{\text{vol}}$ (118-Bus)
1 | 94.2% | 0.22% | 93.4% | 0.19% | 94.5% | 0.20%
2 | 93.7% | 0.84% | 93.0% | 0.93% | 94.2% | 1.26%
3 | 93.7% | 0.88% | 93.5% | 1.27% | 95.1% | 1.75%
4 | 94.0% | 1.39% | 93.6% | 1.89% | 95.6% | 2.04%
5 | 94.5% | 1.52% | 94.4% | 1.92% | 95.8% | 2.31%

V Conclusion and Future Works

In this paper, we proposed a DRL-based strategy to balance the trade-off between voltage fluctuation control and renewable accommodation in the DN, with the aim of promoting the further adoption of RERs. We first derived an optimization formulation considering voltage control and efficient renewable accommodation, along with generation cost minimization. A novel MG-ASTGCN was then proposed to fully explore the ST correlations among node pairs through the ST attention mechanism and to extract ST information of the DN through ST convolution. The extracted multi-time-scale ST information is finally delivered to the DDPG algorithm to learn the optimal control strategy. Several conclusions can be drawn from the experimental results: 1) extracting ST correlations in the DN plays an essential role in improving DDPG's performance and convergence speed, whereas the strategy's performance degrades significantly without the ST attention mechanism or the ST graphical information; 2) compared with optimization-based benchmarks, the proposed DRL-based strategy achieves better performance with less computation time, and it also improves the network's stability in terms of shorter response times in the event of generator failures; 3) node pairs in the DN with more generator integration tend to have stronger spatial correlations, and, in the temporal view, the most recent node feature vectors tend to contain the most valuable temporal correlation information; 4) overemphasizing either voltage fluctuation control or renewable accommodation may result in sub-optimal operation.

In future work, we will study the design of suitable incentive mechanisms to accommodate more RERs in smart grids.

References

  • [1] M. Meinshausen, J. Lewis, C. McGlade, J. Gütschow, Z. Nicholls, R. Burdon, L. Cozzi, and B. Hackmann, “Realization of Paris Agreement pledges may limit warming just below 2 °C,” Nature, vol. 604, no. 7905, pp. 304–309, Apr. 2022.
  • [2] L. Ranalder, H. Busch, T. Hansen, M. Brommer, T. Couture, D. Gibb, F. Guerra, J. Nana, Y. Reddy, J. Sawin, K. Seyboth, and F. Sverrisson, Renewables in Cities 2021 Global Status Report.   REN21 Secretariat, Mar. 2021.
  • [3] IPCC, Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, P. Shukla, J. Skea, R. Slade, A. A. Khourdajie, R. van Diemen, D. McCollum, M. Pathak, S. Some, P. Vyas, R. Fradera, M. Belkacemi, A. Hasija, G. Lisboa, S. Luz, and J. Malley, Eds.   Cambridge, UK and New York, NY, USA: Cambridge University Press, 2022.
  • [4] S. R. Sinsel, R. L. Riemke, and V. H. Hoffmann, “Challenges and solution technologies for the integration of variable renewable energy sources—a review,” Renewable Energy, vol. 145, pp. 2271–2285, 2020.
  • [5] K. E. Antoniadou-Plytaria, I. N. Kouveliotis-Lysikatos, P. S. Georgilakis, and N. D. Hatziargyriou, “Distributed and decentralized voltage control of smart distribution networks: Models, methods, and future research,” IEEE Transactions on Smart Grid, vol. 8, no. 6, pp. 2999–3008, 2017.
  • [6] S. Bolognani, G. Cavraro, R. Carli, and S. Zampieri, “Distributed reactive power feedback control for voltage regulation and loss minimization,” IEEE Transactions on Automatic Control, vol. 60, 03 2014.
  • [7] B. Zhang, A. Y. Lam, A. D. Domínguez-García, and D. Tse, “An optimal and distributed method for voltage regulation in power distribution systems,” IEEE Transactions on Power Systems, vol. 30, no. 4, pp. 1714–1726, 2015.
  • [8] A. Maknouninejad and Z. Qu, “Realizing unified microgrid voltage profile and loss minimization: A cooperative distributed optimization and control approach,” IEEE Transactions on Smart Grid, vol. 5, no. 4, pp. 1621–1630, 2014.
  • [9] K. Utkarsh, A. Trivedi, D. Srinivasan, and T. Reindl, “A consensus-based distributed computational intelligence technique for real-time optimal control in smart distribution grids,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 1, pp. 1–1, 12 2016.
  • [10] E. Dall’Anese, H. Zhu, and G. B. Giannakis, “Distributed optimal power flow for smart microgrids,” IEEE Transactions on Smart Grid, vol. 4, no. 3, pp. 1464–1475, 2013.
  • [11] W. Zheng, W. Wu, B. Zhang, H. Sun, and Y. Liu, “A fully distributed reactive power optimization and control method for active distribution networks,” IEEE Transactions on Smart Grid, vol. 7, no. 2, pp. 1021–1033, 2016.
  • [12] B. A. Robbins, H. Zhu, and A. D. Domínguez-García, “Optimal tap setting of voltage regulation transformers in unbalanced distribution systems,” IEEE Transactions on Power Systems, vol. 31, no. 1, pp. 256–267, 2016.
  • [13] B. A. Robbins and A. D. Domínguez-García, “Optimal reactive power dispatch for voltage regulation in unbalanced distribution systems,” IEEE Transactions on Power Systems, vol. 31, no. 4, pp. 2903–2913, 2016.
  • [14] L. Yu, D. Czarkowski, and F. de Leon, “Optimal distributed voltage regulation for secondary networks with dgs,” IEEE Transactions on Smart Grid, vol. 3, no. 2, pp. 959–967, 2012.
  • [15] A. R. Di Fazio, G. Fusco, and M. Russo, “Decentralized control of distributed generation for voltage profile optimization in smart feeders,” IEEE Transactions on Smart Grid, vol. 4, no. 3, pp. 1586–1596, 2013.
  • [16] E. Dall’Anese, S. V. Dhople, and G. B. Giannakis, “Photovoltaic inverter controllers seeking ac optimal power flow solutions,” IEEE Transactions on Power Systems, vol. 31, no. 4, pp. 2809–2823, 2016.
  • [17] A. Abessi, V. Vahidinasab, and M. S. Ghazizadeh, “Centralized support distributed voltage control by using end-users as reactive power support,” IEEE Transactions on Smart Grid, vol. 7, no. 1, pp. 178–188, 2016.
  • [18] M. Nayeripour, H. Sobhani, E. Waffenschmidt, and S. Hasanvand, “Coordinated online voltage management of distributed generation using network partitioning,” Electric Power Systems Research, vol. 141, pp. 202–209, 12 2016.
  • [19] K. Mahmoud, M. M. Hussein, M. Abdel-Nasser, and M. Lehtonen, “Optimal voltage control in distribution systems with intermittent pv using multiobjective grey-wolf-lévy optimizer,” IEEE Systems Journal, vol. 14, no. 1, pp. 760–770, 2020.
  • [20] A. Routray, R. K. Singh, and R. Mahanty, “Harmonic reduction in hybrid cascaded multilevel inverter using modified grey wolf optimization,” IEEE Transactions on Industry Applications, vol. 56, no. 2, pp. 1827–1838, 2020.
  • [21] W. Wang, N. Yu, Y. Gao, and J. Shi, “Safe off-policy deep reinforcement learning algorithm for volt-var control in power distribution systems,” IEEE Transactions on Smart Grid, vol. 11, no. 4, pp. 3008–3018, 2020.
  • [22] D. Cao, W. Hu, J. Zhao, Q. Huang, Z. Chen, and F. Blaabjerg, “A multi-agent deep reinforcement learning based voltage regulation using coordinated pv inverters,” IEEE Transactions on Power Systems, vol. 35, no. 5, pp. 4120–4123, 2020.
  • [23] Q. Yang, G. Wang, A. Sadeghi, G. B. Giannakis, and J. Sun, “Two-timescale voltage control in distribution grids using deep reinforcement learning,” IEEE Transactions on Smart Grid, vol. 11, no. 3, pp. 2313–2323, 2020.
  • [24] Y. Zhang, X. Wang, J. Wang, and Y. Zhang, “Deep reinforcement learning based volt-var optimization in smart distribution systems,” IEEE Transactions on Smart Grid, vol. 12, no. 1, pp. 361–371, 2021.
  • [25] D. Cao, J. Zhao, W. Hu, F. Ding, Q. Huang, Z. Chen, and F. Blaabjerg, “Data-driven multi-agent deep reinforcement learning for distribution system decentralized voltage control with high penetration of pvs,” IEEE Transactions on Smart Grid, vol. 12, no. 5, pp. 4137–4150, 2021.
  • [26] X. Sun and J. Qiu, “Two-stage volt/var control in active distribution networks with multi-agent deep reinforcement learning method,” IEEE Transactions on Smart Grid, vol. 12, no. 4, pp. 2903–2912, 2021.
  • [27] H. Liu and W. Wu, “Two-stage deep reinforcement learning for inverter-based volt-var control in active distribution networks,” IEEE Transactions on Smart Grid, vol. 12, no. 3, pp. 2037–2047, 2021.
  • [28] A. Panda and M. Tripathy, “Security constrained optimal power flow solution of wind-thermal generation system using modified bacteria foraging algorithm,” Energy, vol. 93, pp. 816–827, 12 2015.
  • [29] R. Ma, X. Li, Y. Luo, X. Wu, and F. Jiang, “Multi-objective dynamic optimal power flow of wind integrated power systems considering demand response,” CSEE Journal of Power and Energy Systems, vol. 5, no. 4, pp. 466–473, 2019.
  • [30] J. Woo, L. Wu, J.-B. Park, and J. Roh, “Real-time optimal power flow using twin delayed deep deterministic policy gradient algorithm,” IEEE Access, vol. 8, pp. 213 611–213 618, 2020.
  • [31] X. Shi, H. Qi, Y. Shen, G. Wu, and B. Yin, “A spatial–temporal attention approach for traffic prediction,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 8, pp. 4909–4918, 2021.
  • [32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, vol. 30, 2017.
  • [33] S. Zhang, H. Tong, J. Xu, and R. Maciejewski, “Graph convolutional networks: a comprehensive review,” Computational Social Networks, vol. 6, no. 1, p. 11, Nov 2019.
  • [34] M. Simonovsky and N. Komodakis, “Dynamic edge-conditioned filters in convolutional neural networks on graphs,” 2017.
  • [35] I. J. Goodfellow, Y. Bengio, and A. Courville, Deep Learning.   Cambridge, MA, USA: MIT Press, 2016.
  • [36] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction.   Cambridge, MA, USA: A Bradford Book, 2018.
  • [37] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” 2019.
  • [38] K. P. Schneider, B. A. Mather, B. C. Pal, C.-W. Ten, G. J. Shirek, H. Zhu, J. C. Fuller, J. L. R. Pereira, L. F. Ochoa, L. R. de Araujo, R. C. Dugan, S. Matthias, S. Paudyal, T. E. McDermott, and W. Kersting, “Analytic considerations and design basis for the ieee distribution test feeders,” IEEE Transactions on Power Systems, vol. 33, no. 3, pp. 3181–3188, 2018.
  • [39] D. Cao, J. Zhao, W. Hu, F. Ding, Q. Huang, Z. Chen, and F. Blaabjerg, “Model-free voltage regulation of unbalanced distribution network based on surrogate model and deep reinforcement learning,” 2020.
  • [40] D. Cao, J. Zhao, W. Hu, F. Ding, N. Yu, Q. Huang, and Z. Chen, “Model-free voltage control of active distribution system with pvs using surrogate model-based deep reinforcement learning,” Applied Energy, vol. 306, p. 117982, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S030626192101285X
  • [41] I. U. Khan, N. Javaid, K. A. Gamage, C. J. Taylor, S. Baig, and X. Ma, “Heuristic algorithm based optimal power flow model incorporating stochastic renewable energy sources,” IEEE Access, vol. 8, pp. 148 622–148 643, 2020.
  • [42] T. Dokeroglu, A. Deniz, and H. E. Kiziloz, “A robust multiobjective harris’ hawks optimization algorithm for the binary classification problem,” Knowledge-Based Systems, vol. 227, p. 107219, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0950705121004810
  • [43] P. Hao and B. Sobhani, “Application of the improved chaotic grey wolf optimization algorithm as a novel and efficient method for parameter estimation of solid oxide fuel cells model,” International Journal of Hydrogen Energy, vol. 46, no. 73, pp. 36 454–36 465, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0360319921034194
  • [44] F. Capitanescu, M. Glavic, D. Ernst, and L. Wehenkel, “Interior-point based algorithms for the solution of optimal power flow problems,” Electric Power Systems Research, vol. 77, no. 5, pp. 508–517, 2007. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0378779606001209
  • [45] P. Fortenbacher and T. Demiray, “Linear/quadratic programming-based optimal power flow using linear power flow and absolute loss approximations,” International Journal of Electrical Power & Energy Systems, vol. 107, pp. 680–689, 2019. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0142061518325377
  • [46] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” 2017.
  • [47] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,” in Proceedings of the 35th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 80.   PMLR, 10–15 Jul 2018, pp. 1861–1870.
  • [48] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas, “Sample efficient actor-critic with experience replay,” 2017.
  • [49] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, “Asynchronous methods for deep reinforcement learning,” in Proceedings of The 33rd International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 48.   PMLR, 20–22 Jun 2016, pp. 1928–1937.