
Large Language Model Assisted Optimal Bidding of BESS in FCAS Market: An AI-agent based Approach

Borui Zhang, Chaojie Li, Guo Chen, and Zhaoyang Dong

B. Zhang, C. Li, and G. Chen are with the School of Electrical Engineering and Telecommunications, UNSW Sydney, NSW 2052, Australia (borui.zhang; chaojie.li; [email protected]). Z. Dong is with the Department of Electrical Engineering, City University of Hong Kong, Hong Kong ([email protected]).
Abstract

To incentivize flexible resources such as Battery Energy Storage Systems (BESSs) to offer Frequency Control Ancillary Services (FCAS), Australia’s National Electricity Market (NEM) has moved in recent years towards shorter-term bidding rules and faster service requirements. However, firstly, existing bidding optimization methods often overlook or oversimplify key aspects of the FCAS market procedures, resulting in an inaccurate depiction of the market bidding process. Thus, the BESS bidding problem is modeled based on actual bidding records and the latest market specifications, and then formulated as a deep reinforcement learning (DRL) problem. Secondly, the erratic decisions of the DRL agent caused by imperfectly predicted market information increase the risk of profit loss. Hence, a Conditional Value at Risk (CVaR)-based DRL algorithm is developed to enhance the risk resilience of bidding strategies. Thirdly, well-trained DRL models still face performance decline in uncommon scenarios during online operation. Therefore, a Large Language Models (LLMs)-assisted artificial intelligence (AI)-agent interactive decision-making framework is proposed to improve strategy timeliness, reliability, and interpretability in uncertain new scenarios, where conditional hybrid decision and self-reflection mechanisms are designed to address LLMs’ hallucination challenge. The experimental results demonstrate that the proposed framework achieves higher bidding profitability than the baseline methods by effectively mitigating the profit loss caused by various uncertainties.

Index Terms:
Energy storage, deep reinforcement learning, large language model, frequency regulation, bidding strategy

I Introduction

In the pursuit of mitigating climate change and transitioning to a low-carbon, sustainable energy system, renewable energy development has advanced rapidly, yet its extensive integration into power grids has significantly impacted system stability and security. In Australia’s National Electricity Market (NEM), the Frequency Control Ancillary Services (FCAS) market purchases frequency regulation (FR) capacity from bidders to keep the system frequency stable [1]. In 2023, a new 1-second FCAS market [2] was introduced in the NEM to incentivize flexible resources such as Battery Energy Storage Systems (BESSs) to respond within one second. Considering the high cost of BESSs [3], it is imperative to develop highly profitable bidding strategies.

Existing studies have focused on the optimization of bidding strategies in ancillary service markets. The study in [4] adopts a “predict-then-optimize” framework for a single system in a single market, utilizing linear programming (LP) based on predicted prices to seek the optimal bidding strategy of a BESS in the energy market. Furthermore, [5] applies a genetic algorithm (GA) to the bidding problem of a wind-BESS system in multiple markets, considering joint bidding capacity allocation across the energy and frequency reserve markets. [6] models a stochastic game to find Nash equilibrium bidding solutions for multiple virtual power plants (VPPs) in the energy and frequency markets. However, mathematical methods are constrained by low efficiency and an over-reliance on forecasting, and hence face challenges in making timely, high-return bidding decisions under dynamic market conditions [7]. More importantly, existing solutions are not applicable to the real-world FCAS market because the actual bidding procedure imposes various compulsory requirements, such as step-like bids over multiple bands and the prohibition of bidding price modification in the real-time market, which are often ignored or oversimplified. Bidders are also often assumed to be price-takers, omitting the responses of the market operator.

Artificial intelligence (AI)-based methods, such as reinforcement learning (RL) and deep RL (DRL), have been applied to bidding optimization in ancillary markets [8, 9]; they can improve efficiency and shed the over-reliance on forecasting by learning from historical experience [10]. In [11], inverse RL (IRL) is used to infer the objectives of BESSs from bidding data in the energy and ancillary service markets. Additionally, [12] utilizes multi-agent RL (MARL) to address the bidding game among multiple VPPs while ensuring fairness among bidders. However, the aleatoric uncertainty arising from dynamic environments and the epistemic uncertainty caused by limited training data remain challenges limiting the practical application of DRL [13].

In DRL bidding problems, the aleatoric uncertainty stems from uncertain market behavior and imperfect insight, and it causes DRL agents to make erratic decisions under imperfectly predicted information. Conditional value-at-risk (CVaR), a popular risk measure, can enhance economic benefits by managing market risk [14]. In [6], CVaR is integrated into a risk-averse mathematical model to reduce the risk of overbidding. If CVaR is incorporated into the DRL model to control the risk of the agent’s erratic decisions, the profit loss due to aleatoric uncertainty can be effectively mitigated.

The epistemic uncertainty stems from uncommon market conditions outside the training set and exposes the weakness of DRL agents that perform well during training but poorly in new environments [15]. [16] suggests that Large Language Models (LLMs) are promising for driving DRL agents to make more reliable decisions in new environments, leveraging their rich knowledge and natural language processing capabilities. [17] proposes LLM-assisted autonomous driving planning to better handle uncertain driving scenarios. [18] suggests a “GPT-in-the-loop” method, utilizing GPT to aid agent systems in interactive decision-making for better performance and interpretability. Thus, if the unfaithful decisions caused by LLMs’ hallucinations [19] can be mitigated, there is a great opportunity to reduce DRL’s profit loss in unfamiliar market conditions through interactive collaboration between DRL and LLMs.

Therefore, this paper establishes models that comply with the actual rules of the NEM, proposes a CVaR-DRL algorithm that seeks optimal bidding strategies to improve the profitability of the BESS under market uncertainty and dynamic bidding risks, and builds an LLMs-assisted AI-agent interactive decision-making framework that enhances the bidding performance of the DRL agent in new scenarios through interactive collaboration. Fig. 1 shows the framework of the bidding system. The main contributions are as follows:


Figure 1: LLMs-based BESS Ancillary Service Autonomous Intelligent Bidding System.
  • To address the inability of existing bidding solutions, which rely on models with oversimplified market procedures, price-taker assumptions, linear approximations, etc., to cope with practical requirements, a bi-level bidding game between the market operator and the price-maker BESS is modeled based on the actual bidding records of each bidder in the FCAS market and the NEM’s two-stage, multi-band bidding requirements.

  • To handle the weaknesses of traditional mathematical and DRL methods in terms of efficiency, performance, and risk resilience, a CVaR-DRL model is proposed for bidding optimization to manage the risk of the DRL agent’s erratic decisions and reduce the profit loss caused by uncertain market conditions and imperfect insight.

  • To improve the strategy timeliness, reliability, and interpretability of DRL in unfamiliar scenarios, an LLMs-assisted AI-agent interactive decision-making framework is built for online market analysis, assisted decision-making, and interpretable feedback, where conditional hybrid decision and self-reflection mechanisms are designed to mitigate unfaithful decisions caused by LLMs’ hallucinations.

The paper is organized as follows: Section II covers research preliminaries. Section III presents the DRL problem and proposed algorithm. Section IV analyzes case study results. Section V concludes the paper.

II Preliminaries

II-A FCAS Markets Rules and Mechanisms

In Australia’s NEM, the Australian Energy Market Operator (AEMO) keeps the system frequency within the normal operating band by purchasing sufficient capacity from FCAS providers’ bids to match the FCAS demand [20]. FCAS is divided into regulation and contingency services: regulation FCAS is dispatched to correct minor frequency deviations, while contingency FCAS restores larger frequency deviations when the frequency exceeds the standard band [1]. Depending on the direction of frequency correction, these services are further classified as raise services and lower services. The market operates in the following stages.

II-A1 Bidding Stage

The NEM operates a real-time spot market only, but it has a pre-dispatch stage and a real-time stage [21]. In the pre-dispatch stage, bidders are required to submit their bids for the next 24 hours one day in advance. A standard bid comprises 10 bidding bands, i.e., 10 bidding capacities and prices, which must be monotonically increasing. In the real-time stage, the bidding prices can no longer be modified, but the capacities can be flexibly adjusted via re-bids before the next trading interval [1]. This rule allows bidders to adjust their capacity allocation under dynamic real-time market conditions while ensuring the clearing prices do not suffer undesirably large fluctuations due to re-bidding.
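As a toy illustration of this two-stage rule (all names and numbers below are invented), the day-ahead submission fixes the 10 band prices, while a real-time re-bid may only reallocate the capacities:

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Bid:
    prices: List[float]         # 10 monotonically increasing prices ($/MW)
    capacities: List[float]     # 10 band capacities (MW)

    def rebid(self, new_capacities: List[float]) -> "Bid":
        """Re-bids may move capacity between bands but never touch the
        prices submitted in the pre-dispatch stage."""
        return Bid(self.prices, new_capacities)

day_ahead = Bid(prices=[1, 2, 4, 6, 9, 13, 18, 25, 40, 80],
                capacities=[5, 5, 10, 10, 10, 15, 15, 20, 25, 30])
real_time = day_ahead.rebid([0, 10, 10, 15, 15, 15, 20, 20, 25, 30])
```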

II-A2 Clearing Stage

In the market clearing stage, AEMO aims to purchase sufficient bidding capacity at minimal cost. In each trading interval, AEMO sorts all bids by price and prioritizes enabling the lower-priced bids until the dispatched capacity exactly matches the FCAS demand. The highest price among all enabled bids is then selected as the clearing price [1]. It is worth noting that AEMO considers each bidder’s capacity constraints across the energy, regulation, and contingency markets, meaning that the total dispatched capacity of a bidder across these three markets cannot exceed its maximum output power limit [22].
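To make the clearing logic concrete, the following is a minimal Python sketch of the merit-order mechanism described above, including partial enablement of the marginal bid; the function and variable names are illustrative, not AEMO’s:

```python
def clear_market(bids, demand):
    """Enable the cheapest bids until demand is met; returns
    (clearing_price, enabled) with enabled mapping bid index -> MW."""
    order = sorted(range(len(bids)), key=lambda n: bids[n][0])
    enabled, remaining, clearing_price = {}, demand, 0.0
    for n in order:
        if remaining <= 0:
            break
        price, capacity = bids[n]
        take = min(capacity, remaining)    # marginal bid enabled partially
        enabled[n] = take
        remaining -= take
        clearing_price = price             # highest enabled price so far
    return clearing_price, enabled

# Example: three bidders, 60 MW of raise-regulation demand.
bids = [(12.0, 30.0), (8.0, 25.0), (20.0, 50.0)]   # (price $/MW, MW)
print(clear_market(bids, 60.0))   # (20.0, {1: 25.0, 0: 30.0, 2: 5.0})
```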

II-A3 Settlement Stage

In the settlement stage, enabled bidders are settled based on the enabled capacity and the clearing price. Note that the enabled capacity differs from the actually dispatched capacity: even if a bidder is merely on standby as reserve capacity, it obtains the same revenue [1].

II-B Market Clearing Model

At each trading interval $t$, AEMO deploys sufficient capacity from $N$ bids to meet the demand $d_{t}$ in order of cost. $bc_{t,n,k}$ and $bp_{t,n,k}$ denote the $k$-th bidding capacity and bidding price, respectively, of the $n$-th bidder at time $t$. AEMO then determines the enabled capacity $ec_{t,n}$ and the clearing price $cp_{t}$ that will be settled to the $n$-th bidder. Let $m\in\{lr,rr,lc,rc\}$ denote the lower regulation, raise regulation, lower contingency, and raise contingency FCAS markets, respectively. The market clearing model is as follows:

$\min\ \sum_{m}\sum_{t=0}^{T} cp_{t}^{m} d_{t}^{m}$ (1a)
s.t. $\sum_{k=1}^{10} u_{t,n,k}^{m}\leq 1,$ (1b)
$BC_{t,n}^{m}=\sum_{k=1}^{10} u_{t,n,k}^{m}\, bc_{t,n,k}^{m},$ (1c)
$0\leq ec_{t,n}^{m}\leq BC_{t,n}^{m},$ (1d)
$BP_{t,n}^{m}=\sum_{k=1}^{10} u_{t,n,k}^{m}\, bp_{t,n,k}^{m},$ (1e)
$cp_{t}^{m}\geq\sum_{k=1}^{10} u_{t,n,k}^{m} BP_{t,n}^{m},$ (1f)
$cp_{t}^{m}\leq\sum_{k=1}^{10} u_{t,n,k}^{m} BP_{t,n}^{m}+M(1-v_{t,n}^{m}),$ (1g)
$\sum_{n=1}^{N} v_{t,n}^{m}\geq 1,$ (1h)
$\sum_{n=1}^{N} ec_{t,n}^{m}+s_{t}^{m}=d_{t}^{m},$ (1i)
$ec_{t,n}^{lr}+ec_{t,n}^{lc}+Pc_{t,n}\leq Pc_{t,n}^{max},$ (1j)
$ec_{t,n}^{rr}+ec_{t,n}^{rc}+Pd_{t,n}\leq Pd_{t,n}^{max},$ (1k)
$u_{t,n,k}^{m},v_{t,n}^{m}\in\{0,1\},$ (1l)
$\forall m\in\{lr,rr,lc,rc\},$ (1m)

where the optimization objective (1a) minimizes the procurement costs across all FCAS markets. (1b)-(1c) determine the bidding capacity $BC_{t,n}^{m}$ of each bidder by selecting at most one bidding band from the ten bands. (1d) means that AEMO can flexibly enable the capacity $ec_{t,n}^{m}$ from the selected bidding capacity $BC_{t,n}^{m}$. Similar to (1b), (1e) determines the corresponding bidding price $BP_{t,n}^{m}$ of the selected band. (1f)-(1h) use the big-M method to determine the maximum enabled bidding price (i.e., the clearing price $cp_{t}^{m}$), where $M$ is a very large value. (1i) ensures that the overall enabled FCAS capacity plus the overall market supply variation $s_{t}^{m}$ equals the FCAS demand $d_{t}^{m}$. (1j) ensures that the joint enabled capacity of each bidder in the lower regulation and lower contingency FCAS markets plus its charging capacity in the energy market $Pc_{t,n}$ never exceeds its maximum charging capacity $Pc_{t,n}^{max}$. Similarly, (1k) ensures that the joint enabled capacity of each bidder in the raise regulation and raise contingency FCAS markets plus its discharging capacity in the energy market $Pd_{t,n}$ never exceeds its maximum discharging capacity $Pd_{t,n}^{max}$. As shown in (1l), $u_{t,n,k}^{m}$ and $v_{t,n}^{m}$ are binary variables.
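To make the band-selection and big-M constraints concrete, below is a toy PuLP sketch of (1b)-(1i) for a single market and interval, shrunk to 2 bidders and 3 bands instead of 10; it assumes the PuLP package with its bundled CBC solver is installed, and all bid data are invented:

```python
import pulp

N, K, M_BIG = 2, 3, 1e4
bc = [[10, 20, 30], [15, 25, 40]]           # bidding capacities (MW)
bp = [[5.0, 9.0, 14.0], [6.0, 11.0, 18.0]]  # bidding prices ($/MW)
d = 45.0                                    # FCAS demand (MW)

prob = pulp.LpProblem("fcas_clearing", pulp.LpMinimize)
u = pulp.LpVariable.dicts("u", [(n, k) for n in range(N) for k in range(K)], cat="Binary")
v = pulp.LpVariable.dicts("v", range(N), cat="Binary")
ec = pulp.LpVariable.dicts("ec", range(N), lowBound=0)
cp = pulp.LpVariable("cp", lowBound=0)

prob += cp * d                                                # (1a)
for n in range(N):
    prob += pulp.lpSum(u[n, k] for k in range(K)) <= 1        # (1b)
    BC = pulp.lpSum(u[n, k] * bc[n][k] for k in range(K))     # (1c)
    BP = pulp.lpSum(u[n, k] * bp[n][k] for k in range(K))     # (1e)
    prob += ec[n] <= BC                                       # (1d)
    prob += cp >= BP                                          # (1f)
    prob += cp <= BP + M_BIG * (1 - v[n])                     # (1g)
prob += pulp.lpSum(v[n] for n in range(N)) >= 1               # (1h)
prob += pulp.lpSum(ec[n] for n in range(N)) == d              # (1i), s_t omitted

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(cp), [pulp.value(ec[n]) for n in range(N)])  # 11.0 [20.0, 25.0]
```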

II-C BESS Bidding Model

II-C1 Day-ahead Bidding Model

As required by AEMO, an initial bid with prices and capacities must be submitted 24 hours in advance. Unlike the bidding capacities, the submitted bidding prices cannot be changed afterwards.

$\max\ \sum_{\omega=1}^{\Omega}\pi_{\omega}\Big\{\sum_{t=0}^{T}\big[\sum_{m}(BP_{\omega,t}^{m}BC_{\omega,t,i}^{m}-Mr_{\omega,t}^{+,m}-Mr_{\omega,t}^{-,m})+p_{\omega,t+1}^{e}(SoC_{\omega,t+1}-SoC_{\omega,t})-\alpha|SoC_{\omega,t+1}-SoC_{\omega,t}|\big]\Big\}$ (2a)
s.t. $\sum_{k=1}^{11} b_{\omega,t,n,k}^{m}=1,$ (2b)
$0\leq bc_{t,i,k}^{m}\leq bc_{t,i,k+1}^{m}\leq bc_{t,i}^{max,m},\quad k=1,\dots,9,$ (2c)
$BC_{\omega,t,n}^{m}=\sum_{k=2}^{11} b_{\omega,t,n,k}^{m}\, bc_{t,n,k-1}^{m},$ (2d)
$0\leq bp_{t,i,k}^{m}\leq bp_{t,i,k+1}^{m}\leq bp^{cap},\quad k=1,\dots,9,$ (2e)
$BP_{\omega,t}^{m}\geq\sum_{k=2}^{11} b_{\omega,t,n,k}^{m}\, bp_{t,n}^{m},$ (2f)
$BP_{\omega,t}^{m}\leq\sum_{k=1}^{10} b_{\omega,t,n,k}^{m}\, bp_{t,n}^{m}+b_{\omega,t,n,11}^{m}\, bp^{cap},$ (2g)
$BC_{\omega,t,i}^{m}+\sum_{-i} BC_{\omega,t,-i}^{m}+r_{\omega,t}^{+,m}-r_{\omega,t}^{-,m}+s_{\omega,t}^{m}=d_{\omega,t}^{m},$ (2h)
$SoC_{\omega,0}=0.5,$ (2i)
$SoC_{\omega,t+1}=SoC_{\omega,t}+\big[(Pc_{t}+S_{\omega,t}^{lr}BC_{\omega,t,i}^{lr}+S_{\omega,t}^{lc}BC_{\omega,t,i}^{lc})\eta_{c}-(Pd_{t}+S_{\omega,t}^{rr}BC_{\omega,t,i}^{rr}+S_{\omega,t}^{rc}BC_{\omega,t,i}^{rc})/\eta_{d}\big]\Delta t/E_{max},$ (2j)
$SoC_{min}\leq SoC_{\omega,t}\leq SoC_{max},$ (2k)
$0\leq Pd_{t}\leq e_{t}Pd_{t}^{max},$ (2l)
$0\leq Pc_{t}\leq(1-e_{t})Pc_{t}^{max},$ (2m)
$bc_{t,i,10}^{lr}+bc_{t,i,10}^{lc}+Pc_{t}\leq Pc_{t}^{max},$ (2n)
$bc_{t,i,10}^{rr}+bc_{t,i,10}^{rc}+Pd_{t}\leq Pd_{t}^{max},$ (2o)
$r_{\omega,t}^{+,m},r_{\omega,t}^{-,m}\geq 0,$ (2p)
$\alpha=C_{BESS}/[2N(SoC_{max}-SoC_{min})],$ (2q)
$b_{\omega,t,n,k}^{m},e_{t}\in\{0,1\},$ (2r)
$\forall m\in\{lr,rr,lc,rc\},$ (2s)

where $\pi_{\omega}$ is the probability of scenario $\omega$ in the scenario set $\Omega$. The optimization objective (2a) maximizes the profits from the joint energy and FCAS markets minus the degradation cost within a 24-hour optimization window $T$. Similar to (1b), (2b) is used for bidding-band selection, but it seeks ranges instead of values. Specifically, since the bidding price $BP_{\omega,t}^{m}$ can vary from 0 to the bidding price cap $bp^{cap}$, it spans 12 prices (0, the ten bidding prices, and the price cap) and must lie in one of the resulting 11 ranges, which is why 11 binaries are used. (2c) and (2e) ensure that the bidding prices and capacities of the model are monotonically increasing. (2f)-(2g) find the bidding band of each bid in which $BP_{\omega,t}^{m}$ is located, and (2d) finds the capacities $BC_{\omega,t,n}^{m}$ in the selected bidding bands. (2h) ensures that the capacity of the BESS $BC_{\omega,t,i}^{m}$, the capacities of the other bidders in market $m$, and the overall market supply variation $s_{\omega,t}^{m}$ together match the demand $d_{\omega,t}^{m}$ in market $m$, where $r_{\omega,t}^{+,m}$ and $r_{\omega,t}^{-,m}$ are relaxation variables. (2i)-(2k) update the state of charge (SoC) $SoC_{\omega,t}$ of the BESS within its operating range, where $S_{t}^{m}\in(0,1)$ is the FR command in market $m$. (2l)-(2m) ensure that the charging power $Pc_{t}$ and the discharging power $Pd_{t}$ of the BESS are within the maximum charging capacity $Pc_{t}^{max}$ and the maximum discharging capacity $Pd_{t}^{max}$, and that the BESS cannot charge and discharge simultaneously. Similar to (1j)-(1k), (2n)-(2o) ensure that the joint capacities in the energy and FCAS markets do not exceed the maximum capacities. (2q) calculates the coefficient $\alpha$ used to estimate the degradation cost of the BESS, where $C_{BESS}$ and $N$ are the battery cell cost and the maximum cycle number of the BESS [6]. Note that since the bid must be unique, the variables forming the bid ($Pc_{t}$, $Pd_{t}$, $bc_{t,i,k}^{m}$, and $bp_{t,i,k}^{m}$) take unique solutions across $\Omega$.
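As a quick numeric check of the SoC transition (2j) and the degradation coefficient (2q), the sketch below uses the case-study values from Section IV (5-minute interval, 100 MWh, efficiency 0.95, $0.5/Wh cell cost, 10,000 cycles); the function names and FR command values are illustrative:

```python
def soc_next(soc, pc, pd, bc, fr, eta_c=0.95, eta_d=0.95, dt=5/60, e_max=100.0):
    """bc/fr: per-market dicts of enabled capacity (MW) and FR commands
    in (0, 1); pc/pd: energy-market charging/discharging power (MW)."""
    charge = (pc + fr['lr'] * bc['lr'] + fr['lc'] * bc['lc']) * eta_c
    discharge = (pd + fr['rr'] * bc['rr'] + fr['rc'] * bc['rc']) / eta_d
    return soc + (charge - discharge) * dt / e_max

def degradation_coeff(c_bess, n_cycles, soc_max=0.8, soc_min=0.2):
    """alpha = C_BESS / [2 N (SoC_max - SoC_min)] from (2q)."""
    return c_bess / (2 * n_cycles * (soc_max - soc_min))

soc = soc_next(0.5, pc=10.0, pd=0.0,
               bc={'lr': 5, 'lc': 2, 'rr': 0, 'rc': 0},
               fr={'lr': 0.4, 'lc': 0.1, 'rr': 0.0, 'rc': 0.0})
print(round(soc, 4), degradation_coeff(c_bess=50e6, n_cycles=10_000))
# -> 0.5097 and alpha ~= 4166.67
```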

II-C2 Real-time Bidding Model

AEMO allows bidders to adjust capacities via re-bids in response to the latest changes in the real-time market; hence, the real-time model adjusts the bidding capacities under real-time predictions and the bidding prices fixed by the day-ahead model.

$\max\ \sum_{m}\big[cp_{t}^{m}(BC_{t,i}^{m}+\Delta BC_{t,i}^{m})\big]+p_{t+1}^{e}(SoC_{t+1}-SoC_{t})-\alpha|SoC_{t+1}-SoC_{t}|$ (3a)
s.t. $\sum_{k=1}^{11} b_{t,n,k}^{m}=1,$ (3b)
$0\leq bc_{t,i,k}^{m}\leq bc_{t,i,k+1}^{m}\leq bc_{t,i}^{max,m},\quad k=1,\dots,9,$ (3c)
$BC_{t,n}^{m}+\Delta BC_{t,i}^{m}=\sum_{k=2}^{11} b_{t,n,k}^{m}\, bc_{t,n,k-1}^{m},$ (3d)
$cp_{t}^{m}\geq\sum_{k=2}^{11} b_{t,n,k}^{m}\, bp_{t,n}^{m},$ (3e)
$cp_{t}^{m}\leq\sum_{k=1}^{10} b_{t,n,k}^{m}\, bp_{t,n}^{m}+b_{t,n,11}^{m}\, bp^{cap},$ (3f)
$BC_{t,i}^{m}+\Delta BC_{t,i}^{m}+\sum_{-i} BC_{t,-i}^{m}+s_{t}^{m}\geq d_{t}^{m},$ (3g)
$SoC_{t+1}=SoC_{t}+\big\{[Pc_{t}+\Delta Pc_{t}+S_{t}^{lr}(BC_{t,i}^{lr}+\Delta BC_{t,i}^{lr})+S_{t}^{lc}(BC_{t,i}^{lc}+\Delta BC_{t,i}^{lc})]\eta_{c}-[Pd_{t}+\Delta Pd_{t}+S_{t}^{rr}(BC_{t,i}^{rr}+\Delta BC_{t,i}^{rr})+S_{t}^{rc}(BC_{t,i}^{rc}+\Delta BC_{t,i}^{rc})]/\eta_{d}\big\}\Delta t/E_{max},$ (3h)
$SoC_{min}\leq SoC_{t}\leq SoC_{max},$ (3i)
$0\leq Pd_{t}+\Delta Pd_{t}\leq e_{t}Pd_{t}^{max},$ (3j)
$0\leq Pc_{t}+\Delta Pc_{t}\leq(1-e_{t})Pc_{t}^{max},$ (3k)
$bc_{t,i,10}^{lr}+bc_{t,i,10}^{lc}+Pc_{t}+\Delta Pc_{t}\leq Pc_{t}^{max},$ (3l)
$bc_{t,i,10}^{rr}+bc_{t,i,10}^{rc}+Pd_{t}+\Delta Pd_{t}\leq Pd_{t}^{max},$ (3m)
$b_{t,n,k}^{m},e_{t}\in\{0,1\},$ (3n)
$\forall m\in\{lr,rr,lc,rc\},$ (3o)

where (3a) determines the adjusted capacities ($\Delta Pd_{t}$, $\Delta Pc_{t}$, and $\Delta BC_{t,i}^{m}$) in the energy and FCAS markets within a 5-minute rolling window. Unlike the rough day-ahead forecasts over 24-hour stochastic scenarios, the real-time model uses more accurate real-time deterministic forecasts as input parameters. Note that (3e)-(3f) seek the bidding bands in which the clearing prices $cp_{t}^{m}$ are located, rather than the bidding prices $bp_{t,i,k}^{m}$ as in the day-ahead model, because $bp_{t,i,k}^{m}$ are fixed parameters in the real-time stage. Also, the capacity adjustment in (3g) must ensure that supply meets demand in real time.

II-D Supply Estimation Model

Before the above models can be used, the model that calculates the overall market supply variation $s_{t}^{m}$ mentioned in them must be presented. The clearing price is determined by both FCAS supply and demand. Taking a real-world FCAS market scenario as an example, bidders typically shift more capacity into arbitrage during peak energy prices, and hence the clearing prices in the FCAS markets rise so that sufficient capacity can still be purchased to meet demand. However, while AEMO publishes data on clearing prices and demands, the data on supply is incomplete, so such supply-driven scenarios are hard to describe. To better capture the relationship between price and the supply-demand balance under actual scenarios, we build a model to estimate the overall market supply variation $s_{t}^{m}$ given the clearing prices $cp_{t}^{m}$ and demands $d_{t}^{m}$.

$\min\ \sum_{m}\sum_{t=0}^{T}(s_{t}^{+,m}+s_{t}^{-,m})$ (4a)
s.t. $\sum_{k=1}^{11} z_{t,n,k}^{m}=1,$ (4b)
$ec_{t,n}^{m}=\sum_{k=2}^{11} z_{t,n,k}^{m}\, bc_{t,n,k-1}^{m},$ (4c)
$cp_{t}^{m}\geq\sum_{k=2}^{11} z_{t,n,k}^{m}\, bp_{t,n}^{m},$ (4d)
$cp_{t}^{m}\leq\sum_{k=1}^{10} z_{t,n,k}^{m}\, bp_{t,n}^{m}+z_{t,n,11}^{m}\, bp^{cap},$ (4e)
$\sum_{n=1}^{N} ec_{t,n}^{m}+s_{t}^{m}=d_{t}^{m},$ (4f)
$s_{t}^{m}=s_{t}^{+,m}-s_{t}^{-,m},$ (4g)
$s_{t}^{+,m},s_{t}^{-,m}\geq 0,$ (4h)
$z_{t,n,k}^{m}\in\{0,1\},$ (4i)
$\forall m\in\{lr,rr,lc,rc\},$ (4j)

where the optimization objective (4a) finds the supply variation $s_{t}^{m}$ at the supply-demand balancing point. (4b)-(4e) find the available capacity in market $m$ under the given price $cp_{t}^{m}$. Since $cp_{t}^{m}$ and $d_{t}^{m}$ are given, the only unknown variable $s_{t}^{m}$ can be calculated from (4f)-(4h). $s_{t}^{m}$ can be positive or negative, so it is split into a positive component $s_{t}^{+,m}$ and a negative component $-s_{t}^{-,m}$.
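A small sketch of this estimate: with the clearing price and demand given, each bidder’s enabled capacity under (4b)-(4e) is the largest band priced at or below the clearing price, and (4f) leaves $s_{t}^{m}$ as the residual; all numbers are invented:

```python
def enabled_capacity(bands, cp):
    """bands: ascending (price, capacity) pairs of one bidder; under
    (4b)-(4e) the bidder is enabled up to the highest band whose price
    does not exceed the clearing price cp."""
    ec = 0.0
    for price, capacity in bands:
        if price <= cp:
            ec = capacity
    return ec

def supply_variation(all_bids, cp, d):
    """s_t from (4f): demand minus the capacity available at price cp."""
    return d - sum(enabled_capacity(b, cp) for b in all_bids)

bids = [[(5, 10), (9, 20)], [(6, 15), (11, 25)]]
print(supply_variation(bids, cp=9.0, d=50.0))   # 50 - (20 + 15) = 15.0
```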

II-E Optimization Framework

As shown in Fig. 2, we formulate the problem as a bi-level game, where the upper-level model is the market clearing model acting as the leader, and the lower-level model is the BESS bidding model acting as the follower. First, the unknown parameter $s_{t}^{m}$ is obtained from the predicted clearing price and demand by solving the supply estimation model (4a). Then, in the pre-dispatch stage, the bidding prices and capacities of the BESS are obtained by solving the day-ahead bidding model (2a). In the real-time stage, the bidding prices and capacities of the BESS are fed into model (1a) to obtain an initial clearing price, and the clearing price is then fed into model (3a) to obtain the adjusted bidding capacities. In this two-level game, models (1a) and (3a) iteratively update their strategies until a Nash equilibrium is reached or the iteration limit is exceeded. Note that all the above models take predicted market information as inputs. Finally, in the actual dispatch stage, model (4a) is solved with the actual clearing price and demand to obtain $s_{t}^{m}$, and the bidding strategy from model (3a) is introduced into model (1a) to obtain the actual clearing price and enabled capacity, as well as the final actual profit of the BESS.
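The toy loop below mimics this leader-follower iteration numerically; clearing_price() stands in for the market clearing model (1a) and best_response() for the real-time bidding model (3a), both as made-up closed forms rather than the paper’s MILPs:

```python
def clearing_price(bess_mw, demand=60.0):
    """More BESS capacity displaces expensive rival bids, so the
    clearing price falls with bess_mw (illustrative inverse supply)."""
    return 5.0 + 0.3 * max(demand - bess_mw, 0.0)     # $/MW

def best_response(cp, mc=0.4, cap=50.0):
    """Maximize cp*mw - (mc/2)*mw^2, i.e. mw = cp/mc, clipped to [0, cap]."""
    return min(max(cp / mc, 0.0), cap)

mw, cp = 0.0, clearing_price(0.0)
for it in range(50):                                  # iteration limit
    new_mw = best_response(cp)
    if abs(new_mw - mw) < 1e-3:                       # strategies stopped changing
        break
    mw, cp = new_mw, clearing_price(new_mw)
print(it, round(mw, 2), round(cp, 2))                 # converges near 32.86 MW, $13.14/MW
```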


Figure 2: Optimization framework of the proposed models.

III Methodology

III-A MDP Formulation

To formulate the proposed problem as a DRL problem, a Markov Decision Process (MDP), consisting of a state space $\mathcal{S}$, an action space $\mathcal{A}$, a transition probability function $\mathcal{P}$, and a reward function $\mathcal{R}$, must be defined in the form of a DRL environment for interaction with the DRL agent.

  • State space: The state space contains the states $s_{t}\in\mathcal{S}$ that the agent can observe from the environment. At time $t$, the SoC of the BESS $SoC_{t}$, the predicted energy price $p_{t}^{e}$, and the predicted FCAS prices $p_{t}^{m}$ and demands $d_{t}^{m}$ in market $m$ are defined as states.

  • Action space: The action space contains the actions $a_{t}\in\mathcal{A}$, i.e., the bidding strategy of the agent. At time $t$, the bidding capacities $bc_{t,k}^{lr}, bc_{t,k}^{rr}, bc_{t,k}^{lc}, bc_{t,k}^{rc}, k=1,\dots,9$ in the FCAS markets as well as the outputs $Pc_{t}$ and $Pd_{t}$ in the energy market are defined as actions.

  • Transition probability function: $\mathcal{P}(s_{t},a_{t},s_{t+1})=\Pr(s_{t+1}\mid s_{t},a_{t})$. The state $s_{t+1}$ at $t+1$ is uncertain given the observation $s_{t}$ and the bidding strategy $a_{t}$ at $t$.

  • Reward function: $\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$. When the agent takes an action $a_{t}$ and transitions from $s_{t}$ to $s_{t+1}$, it receives a reward $r_{t}(s_{t},a_{t},s_{t+1})\in\mathcal{R}$. We define the profits of the BESS, including the revenues in the joint energy and FCAS markets and the degradation costs, as the rewards. Similar to the objective function of model (3a), the basic reward function can be written as:

    $r_{t+1}=\sum_{m}cp_{t}^{m}\,ec_{t,i}^{m}+p_{t+1}^{e}(SoC_{t+1}-SoC_{t})-\alpha|SoC_{t+1}-SoC_{t}|$ (5)

    where $cp_{t}^{m}$ and $ec_{t,i}^{m}$ are the solutions of the market clearing model (1a), which means that there are interactions between the agent and the market. A minimal environment skeleton for this MDP is sketched after this list.
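The following gym-style skeleton stubs the market internals (forecasts, the clearing model, SoC dynamics) with random placeholders, so only the interface shapes are meaningful; everything here is an illustrative assumption, not the paper’s implementation:

```python
import numpy as np

class FCASBiddingEnv:
    """State: [SoC, predicted energy price, 4 FCAS prices, 4 FCAS demands].
    Action: 9 capacity bands x 4 FCAS markets plus Pc and Pd (38 numbers)."""

    def __init__(self, horizon=288):              # 288 x 5 min = 24 h
        self.horizon, self.t, self.soc = horizon, 0, 0.5

    def reset(self):
        self.t, self.soc = 0, 0.5
        return self._state()

    def _state(self):
        forecast = np.random.rand(9)              # placeholder predictions
        return np.concatenate(([self.soc], forecast))

    def step(self, action):
        action = np.asarray(action)
        # In the full model the bid is cleared by model (1a) to obtain
        # cp_t and ec_t; random stand-ins keep this sketch short.
        cp = np.random.rand(4) * 20.0             # $/MW per FCAS market
        ec = np.abs(action[:4])                   # pretend enabled capacities
        reward = float(cp @ ec)                   # FCAS term of reward (5)
        self.soc = float(np.clip(self.soc + 0.01 * np.random.randn(), 0.2, 0.8))
        self.t += 1
        return self._state(), reward, self.t >= self.horizon, {}

env = FCASBiddingEnv()
state = env.reset()
state, reward, done, _ = env.step(np.random.rand(38))
```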

Unlike the instantaneous rewards in FCAS, arbitrage rewards tend to be sparse and delayed. To encourage the BESS to charge before discharging and receiving rewards, potential-based reward shaping (PBRS) [23] is employed to assist the agent in learning strategies for the joint markets. The reward function (5) is modified by adding an extra reward based on prior knowledge:

$r_{t+1}\leftarrow r_{t+1}+\gamma\Phi(s_{t+1})-\Phi(s_{t})$ (6)
$\Phi(s_{t})=\big[\sum_{t'=0}^{t}(Pc_{t'}\eta_{c}\Delta t)-\underline{E}\big]^{-}$ (7)

where the potential function $\Phi(s_{t})$ measures the gap between the charged energy and the energy threshold $\underline{E}$, and $[\bullet]^{-}$ is the projector $\min(\bullet,0)$, which avoids overcharging. In general, these dense artificial reward signals drive the agent to reach the sub-goal of charging at least a certain amount of energy in each episode.
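A small numeric sketch of the shaping terms (6)-(7), using the case-study threshold $\underline{E}=40$ MWh; the reward and energy values are invented:

```python
def potential(charged_mwh, e_low=40.0):
    """Phi in (7): negative shortfall of cumulative charged energy."""
    return min(charged_mwh - e_low, 0.0)          # the [x]^- projector

def shaped_reward(r, phi_now, phi_next, gamma=0.99):
    """PBRS update (6)."""
    return r + gamma * phi_next - phi_now

phi0 = potential(10.0)                            # -30: 30 MWh short of target
phi1 = potential(25.0)                            # -15 after more charging
print(shaped_reward(100.0, phi0, phi1))           # 100 + 0.99*(-15) + 30 = 115.15
```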

To ensure that the bidding strategy of the agent complies with market standards and rules, any action outside the specification is clipped; for $k=1,\dots,9$ we have:

$bc_{t,k+1}^{lr}\leftarrow\operatorname{clip}(bc_{t,k+1}^{lr},\,bc_{t,k}^{lr},\,Pc_{t}^{max}-Pc_{t})$ (8)
$bc_{t,k+1}^{lc}\leftarrow\operatorname{clip}(bc_{t,k+1}^{lc},\,bc_{t,k}^{lc},\,Pc_{t}^{max}-Pc_{t}-bc_{t,10}^{lr})$ (9)
$bc_{t,k+1}^{rr}\leftarrow\operatorname{clip}(bc_{t,k+1}^{rr},\,bc_{t,k}^{rr},\,Pd_{t}^{max}-Pd_{t})$ (10)
$bc_{t,k+1}^{rc}\leftarrow\operatorname{clip}(bc_{t,k+1}^{rc},\,bc_{t,k}^{rc},\,Pd_{t}^{max}-Pd_{t}-bc_{t,10}^{rr})$ (11)

where the maximum function ensures that the bidding capacities are monotonically increasing in each FCAS market, and the minimum function ensures that the joint capacities in the energy and FCAS markets stay within the maximum charging or discharging capacity. In addition, when $SoC_{t}$ exceeds $SoC_{max}$, $Pc_{t}$ and $bc_{t,k}^{lr}, bc_{t,k}^{lc}, k=1,\dots,10$ are limited to 0, and when $SoC_{t}$ falls below $SoC_{min}$, $Pd_{t}$ and $bc_{t,k}^{rr}, bc_{t,k}^{rc}, k=1,\dots,10$ are limited to 0, which keeps $SoC_{t}$ within its limits. Once the MDP is defined, the DRL agent interacts with the environment to obtain experience samples and learns strategies that maximize the cumulative rewards over the scenarios. After offline training, the trained model is applied to the online bidding system as the decision engine.
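A sketch of the lower-service repair rules (8)-(9) follows, with four bands for brevity (the raise-side rules (10)-(11) are symmetric with $Pd_{t}$ and $SoC_{min}$); the input bid values are invented:

```python
import numpy as np

def repair_lower_bids(bc_lr, bc_lc, pc, pc_max=50.0):
    """Enforce monotone bands and the joint charging-capacity limits."""
    for k in range(1, len(bc_lr)):        # (8): band k in [band k-1, headroom]
        bc_lr[k] = np.clip(bc_lr[k], bc_lr[k - 1], pc_max - pc)
    headroom = pc_max - pc - bc_lr[-1]    # (9): room left after regulation
    for k in range(1, len(bc_lc)):
        bc_lc[k] = np.clip(bc_lc[k], bc_lc[k - 1], headroom)
    return bc_lr, bc_lc

bc_lr = np.array([5.0, 4.0, 30.0, 35.0])  # non-monotone second band
bc_lc = np.array([2.0, 10.0, 50.0, 1.0])  # over limit and non-monotone
print(repair_lower_bids(bc_lr, bc_lc, pc=5.0))
# -> (array([ 5.,  5., 30., 35.]), array([ 2., 10., 10., 10.]))
```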

III-B LLMs-assisted AI-agent Interactive Decision-making Framework

The epistemic uncertainty leads to declined bidding performance of the DRL agent in unfamiliar scenarios outside the training set. Thus, we propose an LLMs-assisted AI-agent interactive decision-making framework that decomposes the complicated bidding task into more manageable sub-tasks and deploys multiple LLMs-based agents to address the components in an orderly, modular pattern. Compared to the DRL model trained on historical data, LLMs can analyze the latest market conditions in real time, enhancing strategy timeliness. To reduce profit loss in unfamiliar scenarios, LLMs can adjust inferior strategies through conditional hybrid decisions and incrementally integrate new knowledge into the DRL agent through MDP reshaping, thereby improving strategy reliability. Additionally, unlike the “black box” nature of DRL networks, LLMs can act as interpreters that provide explanations to users, thereby increasing interpretability. An external evaluation mechanism and a self-reflection mechanism are designed in the “DRL-LLMs” loop to mitigate the hallucination problem of LLMs.

III-B1 Definition of LLMs-assisted Agents

  • Data Analyst (Agent-1): Agent-1 detects whether the state space $\mathcal{S}$ observed during online operation is common in the historical training set (True) or not (False). It then analyzes the potential profitability of each FCAS market from historical statistics, predicted price trends, and prediction quality to guide subsequent adjustments.

  • Strategy Optimizer (Agent-2): The target of Agent-2 is to adopt a hybrid decision $a_{t}^{\prime}$ to make higher profits $r_{t}^{\prime}$:

    $a_{t}^{\prime}=a_{t}+\Delta a_{t}^{LLMs}\,\mathbf{1}\big(\textbf{False}\land\sum_{\tau=t-h}^{t-1}r_{\tau}^{\prime}\geq\sum_{\tau=t-h}^{t-1}r_{\tau}\big)$ (12)

    To mitigate unreliable decisions caused by LLMs’ hallucinations, we introduce a second condition as an external evaluation mechanism that rigidly constrains $a_{t}^{\prime}$: $a_{t}^{\prime}$ is enabled only when its cumulative return within a half-hour rolling horizon $h$ is higher than that of $a_{t}$, thereby ensuring a safe profit margin. The dynamically varying conditions over time also avoid a consistent bias towards one alternative (see the sketch after this list).

  • Performance Evaluator (Agent-3): Leveraging LLMs’ self-reflection and multi-agent interaction capabilities, Agent-3 evaluates the bidding performance of $a_{t}$ and $a_{t}^{\prime}$ and then provides feedback to reinforce the agents. It can also act as an interpreter that explains the agents’ decisions.
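A compact sketch of the conditional hybrid decision (12), assuming scalar actions and rewards and a 6-step (half-hour) rolling window; the class and method names are hypothetical:

```python
from collections import deque

class HybridDecision:
    """Gate of (12): the LLM adjustment is applied only when the scenario
    is unfamiliar AND the adjusted strategy has out-earned the original
    over the last h intervals."""

    def __init__(self, h=6):                  # 6 x 5 min = half an hour
        self.r_orig = deque(maxlen=h)         # recent rewards of a_t
        self.r_adj = deque(maxlen=h)          # recent rewards of a_t'

    def choose(self, a_t, delta_llm, familiar):
        gate = (not familiar) and sum(self.r_adj) >= sum(self.r_orig)
        return a_t + delta_llm if gate else a_t

    def record(self, r_t, r_t_prime):         # both strategies run in parallel
        self.r_orig.append(r_t)
        self.r_adj.append(r_t_prime)

hd = HybridDecision()
hd.record(100.0, 120.0)
print(hd.choose(a_t=10.0, delta_llm=4.0, familiar=False))   # 14.0
```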

III-B2 Interactive Agent Communication and Collaboration

We first define the user prompt $\mathbb{P}$ and formalize it as $\mathbb{P}=(\mathbb{O},\mathbb{A},\mathbb{R},\mathbb{C})$, which includes the observation $\mathbb{O}$, action $\mathbb{A}$, reward $\mathbb{R}$, and context $\mathbb{C}$ that formulate the problem for the LLMs.

  • Observation $\mathbb{O}$ contains the state space $\mathcal{S}$, the action space $\mathcal{A}$, and the market historical statistics and predictions.

  • Action $\mathbb{A}$ is defined individually for each agent. For Agent-1, the actions are the detection and analysis results on $\mathbb{O}$ in text form. For Agent-2, the actions are the adjusted bidding capacities in numerical form. For Agent-3, the actions are the evaluation results in text form.

  • Reward $\mathbb{R}$ contains the numerical performance of the decisions, which is converted to text form as feedback.

  • Context $\mathbb{C}$ includes the problem description, the task definitions, the task targets, the constraints on action $\mathbb{A}$, the expected answer format, and any other necessary information.

Given a prompt $\mathbb{P}$, the LLMs observe $\mathbb{O}$ and determine the agents’ actions $\mathbb{A}$ in the requested format. The multi-agent workflow is organized as follows. First, Agent-1 sends its detection and analysis results to Agent-2. Then Agent-2 performs hybrid decisions based on the current conditions and the given information. The original $a_{t}$ and the adjusted $a_{t}^{\prime}$ are executed synchronously in the environment to obtain the rewards $r_{t}$ and $r_{t}^{\prime}$. Finally, Agent-3 provides feedback, including quantitative assessments and qualitative analysis of each agent’s performance, for strategy correction in subsequent iterations.

During online operation, the LLMs realize interactive collaboration between the multiple LLMs-assisted AI agents and the DRL agent by affecting the MDP in the environment. First, the tuple ($s_{t}$, $a_{t}$, $r_{t}$) under the original strategy is replaced by the new tuple ($s_{t}^{\prime}$, $a_{t}^{\prime}$, $r_{t}^{\prime}$) under the LLMs-assisted strategy and stored in the experience replay. Then the DRL network parameters are fine-tuned via gradients through incremental learning from the superior trajectories. Unlike the traditional approach of penalizing inferior trajectories, the adjusted trajectories mitigate the profit loss in unfamiliar scenarios and drive the DRL agent to better adapt to new environments, as sketched below.
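A minimal sketch of this experience replacement, assuming a simple deque replay buffer; all names and numbers are illustrative:

```python
from collections import deque

replay = deque(maxlen=10_000)

def store_transition(orig, adjusted, use_llm):
    """orig/adjusted are (s, a, r, s_next) tuples; whichever tuple the
    external evaluation selects is what PPO fine-tuning later samples."""
    replay.append(adjusted if use_llm else orig)

store_transition(orig=([0.5, 12.0], [10.0], 95.0, [0.52, 11.8]),
                 adjusted=([0.5, 12.0], [14.0], 120.0, [0.55, 11.8]),
                 use_llm=True)
print(len(replay), replay[-1][2])    # 1 120.0
```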

III-C CVaR-DRL Algorithm

In addition to epistemic uncertainty, aleatoric uncertainty also impacts the profitability of the BESS. Owing to prediction errors, the DRL agent’s observations in our problem contain noise, rather than being fixed as in some traditional problems. To reduce the risk of the uncertain MDP and improve algorithm performance, the CVaR-DRL algorithm is proposed, where the proximal policy optimization (PPO) architecture is employed as the basic decision-making engine, CVaR [24] is introduced as the risk metric, and the maximum entropy principle [25] is applied to enhance model robustness.

A standard DRL aims to find a policy $\pi_{\theta}$ parameterized by $\theta$ that maximizes the expected cumulative reward:

$\operatorname{argmax}\ \mathbb{E}_{\pi_{\theta}}\big[\sum_{t=0}^{\infty}\gamma^{t}r_{t}(s_{t},a_{t})\big]$ (13)

where $\gamma\in(0,1)$ is a discount factor. A deterministic policy often struggles to cope with uncertainty in MDPs and tends to fall into local optima; hence, we first combine the concept of entropy with DRL to introduce randomness into the policy so that the DRL agent can fully explore the environment. Mathematically, the objective function is modified as follows:

$J(\pi_{\theta})=\mathbb{E}_{\pi_{\theta}}\Big\{R(\pi_{\theta})\triangleq\sum_{t=0}^{\infty}\gamma^{t}\big[r_{t}(s_{t},a_{t})+\delta\mathcal{H}_{\pi_{\theta}}(\pi_{\theta}(a_{t}\mid s_{t}))\big]\Big\}$ (14)

where the term $\mathcal{H}_{\pi_{\theta}}$ maximizes the policy entropy under coefficient $\delta$, encouraging the agent to adopt action distributions with high entropy and improving the generalization capability of the policy under uncertainty.

In addition, we introduce CVaR to quantify the expected loss under uncertainty in the worst $1-\alpha$ scenarios:

$\operatorname{CVaR}_{\alpha}\big(R(\pi_{\theta})\big)=\min_{\mu}\Big\{\mu+\frac{\mathbb{E}_{\pi_{\theta}}[R(\pi_{\theta})-\mu]^{+}}{1-\alpha}\Big\}$ (15)

where $\alpha\in(0,1)$ is the confidence level, $[\bullet]^{+}$ denotes the projector $\max(\bullet,0)$, $\mu$ denotes the threshold in the loss distribution, and $R(\pi_{\theta})$ is the trajectory return under policy $\pi_{\theta}$. Based on the optimization method in [26], the objective (14) can be constrained by the CVaR measure (15) to avoid profit loss in the worst scenarios. The rewritten objective function is as follows:

$\min_{\pi_{\theta}}-J(\pi_{\theta})\quad\text{s.t.}\quad\operatorname{CVaR}_{\alpha}\big(-R(\pi_{\theta})\big)\leq\beta$ (16)

where $\beta$ is the reward tolerance. Lagrangian relaxation is then employed to convert the constrained problem into an unconstrained one:

$\max_{\lambda}\min_{\theta,\mu}L(\lambda,\theta,\mu)\triangleq-J(\pi_{\theta})+\lambda\Big\{\beta-\mu+\frac{\mathbb{E}_{\pi_{\theta}}[\mu-R(\pi_{\theta})]^{+}}{1-\alpha}\Big\}$ (17)

where $\lambda$ is the Lagrange multiplier. Based on the PPO architecture, the objective function (17) can be optimized by the following gradients (18)-(20) with respect to $\lambda$, $\theta$, and $\mu$:

$\nabla_{\lambda}L(\lambda,\theta,\mu)=\beta-\mu+\frac{\mathbb{E}_{\pi_{\theta}}[\mu-R(\omega_{\pi_{\theta}})]^{+}}{1-\alpha}$ (18)
$\nabla_{\theta}L(\lambda,\theta,\mu)=\nabla_{\theta}\min\Big\{\frac{\pi_{\theta}(a\mid s)}{\pi_{\theta_{k}}(a\mid s)}\hat{A}^{\pi_{\theta_{k}}},\operatorname{clip}\big[\frac{\pi_{\theta}(a\mid s)}{\pi_{\theta_{k}}(a\mid s)},1-\epsilon,1+\epsilon\big]\hat{A}^{\pi_{\theta_{k}}}\Big\}-\mathbb{E}_{\pi_{\theta}}\big\{\nabla_{\theta}\log P_{\theta}[R(\omega_{\pi_{\theta}})]\big\}\Big\{R(\omega_{\pi_{\theta}})-\frac{\lambda\,\mathbb{E}_{\pi_{\theta}}[\mu-R(\omega_{\pi_{\theta}})]^{+}}{1-\alpha}\Big\}$ (19)
$\nabla_{\mu}L(\lambda,\theta,\mu)=\frac{\lambda\,\mathbb{E}_{\pi_{\theta}}\mathbf{1}[\mu\geq R(\omega_{\pi_{\theta}})]}{1-\alpha}-\lambda$ (20)

where $\omega_{\pi_{\theta}}$ is the scenario under policy $\pi_{\theta}$, $\mathbf{1}[\bullet]$ is the indicator function, $\hat{A}^{\pi_{\theta_{k}}}$ is the estimated advantage function, $\frac{\pi_{\theta}(a\mid s)}{\pi_{\theta_{k}}(a\mid s)}$ is the importance sampling ratio, $\epsilon$ is the clip parameter, and the clip function $\operatorname{clip}(\cdot)$ removes overestimation and underestimation via the upper limit $1+\epsilon$ and lower limit $1-\epsilon$. Typically, the advantage function is estimated by the Generalized Advantage Estimator (GAE):

$\hat{A}^{\pi_{\theta_{k}}}=\sum_{t^{\prime}=0}^{\infty}(\gamma\nu)^{t^{\prime}}\big[r_{t+t^{\prime}}+\gamma V(s_{t+t^{\prime}+1})-V(s_{t+t^{\prime}})\big]$ (21)

where $\nu$ is the GAE parameter and $V(s)$ is the value function.
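The recursion below is a small sketch equivalent to (21) for a finite episode, with a bootstrap value appended to the value array; all numbers are illustrative:

```python
import numpy as np

def gae(rewards, values, gamma=0.99, nu=0.95):
    """values has length len(rewards) + 1 (bootstrap value appended)."""
    adv, running = np.zeros(len(rewards)), 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * nu * running    # discounted TD-error sum
        adv[t] = running
    return adv

print(gae([1.0, 0.5, 2.0], [0.8, 0.9, 1.1, 0.0]))   # approx. [2.54 1.54 0.90]
```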

In summary, Algorithm 1 presents the implementation and procedure of the LLMs-assisted AI-agent CVaR-DRL.

Parameter initialization: policy parameter $\theta$, clip threshold $\epsilon$, GAE parameter $\nu$, entropy coefficient $\delta$, confidence level $\alpha$, and reward tolerance $\beta$;
for Episode $e=1,2,3\cdots E$ do
       for $t=1,2,3\cdots T$ do
             The DRL agent observes state $s_{t}$ and generates the bidding strategy $a_{t}$;
             Agent-1 sends the detection results on $\mathcal{S}$ and the analyzed market profitability to Agent-2;
             Agent-2 performs the hybrid decision $a^{\prime}_{t}$ based on the current conditions and the given information;
             The market clearing model computes the rewards $r_{t}$ and $r^{\prime}_{t}$ based on $a_{t}$ and $a^{\prime}_{t}$, respectively;
             Agent-3 evaluates $r_{t}$ and $r^{\prime}_{t}$ and generates feedback to Agent-1 and Agent-2;
             Update the state to $s^{\prime}_{t+1}$ based on $a^{\prime}_{t}$;
       end for
      Collect the trajectory under the LLMs-assisted strategy $\tau=(s_{1},a^{\prime}_{1},r^{\prime}_{1},s^{\prime}_{2},a^{\prime}_{2},r^{\prime}_{2},\dots,s^{\prime}_{T},a^{\prime}_{T},r^{\prime}_{T})$;
end for
for Iteration $i=1,2,3\cdots I$ do
       Sample $\tau$ from the replay buffer;
       Estimate the advantage $\hat{A}^{\pi_{\theta_{k}}}$ based on (21);
       Update the Lagrange multiplier $\lambda$ based on (18);
       Update the policy parameter $\theta$ based on (19);
       Update the threshold $\mu$ based on (20);
end for
Algorithm 1 LLMs-assisted AI-agent CVaR-DRL.
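For concreteness, the following small sketch applies the multiplier updates of the outer loop in Algorithm 1, taking (18) and (20) as plain gradient steps on sampled trajectory returns; the policy update (19) is the standard PPO clipped step plus the CVaR correction and is omitted here, and the step sizes and numbers are illustrative:

```python
import numpy as np

def cvar_multiplier_step(R, lam, mu, beta, alpha=0.9, lr_lam=1e-3, lr_mu=1e-3):
    """One ascent step on lambda via (18) and one descent step on mu via (20)."""
    R = np.asarray(R)
    excess = np.maximum(mu - R, 0.0)                       # [mu - R]^+
    grad_lam = beta - mu + excess.mean() / (1 - alpha)     # (18)
    grad_mu = lam * (R <= mu).mean() / (1 - alpha) - lam   # (20)
    lam = max(lam + lr_lam * grad_lam, 0.0)                # max over lambda
    mu = mu - lr_mu * grad_mu                              # min over mu
    return lam, mu

R = np.random.default_rng(0).normal(1e4, 2e3, size=256)    # sampled returns ($)
print(cvar_multiplier_step(R, lam=1.0, mu=8e3, beta=1.25e4))
```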

IV Case Study

The datasets in this paper are provided by AEMO, including historical clearing prices, demands, and bids in the FCAS markets and energy prices in South Australia. ARIMA is employed to predict market prices and demands. The FR commands are simulated based on the regulation D signal from PJM. The scenarios are randomly sampled from the dataset in 24-hour periods. The time interval of the MDP is 5 minutes, matching the trading interval of the NEM. The capacity of the BESS is 100 MWh with an SoC range from 0.2 to 0.8. The charging/discharging power is 50 MW with an efficiency of 0.95. The battery cell cost is $0.5/Wh with a maximum cycle number of 10,000. $\underline{E}$ is set at 40 MWh. The value of $\alpha$ is 0.9, determined by identifying the inflection point on the CVaR curve with respect to $\alpha$. The value of $\beta$ is 12,500, calculated as the average of CVaR and VaR at the selected $\alpha$. The hyperparameter setting for the DRL algorithm is shown in Table I. The LLMs employ the latest GPT-4 Turbo model.

TABLE I: Hyperparameter setting
Hyperparameter Value Hyperparameter Value
Batch size 2880 Mini-batch size 72
Actor learning rate 0.001 Critic learning rate 0.001
Discount factor 0.99 Entropy coefficient 0.01
Clip threshold 0.2 GAE parameter 0.95
Hidden layer 2 Node number [256,256]
Activation function Tanh Optimizer Adam

IV-A Discussion on Market Clearing Model

The role of the market clearing model is to acquire sufficient capacity from the various bidders to satisfy the market demand and establish clearing prices. Fig. 3 illustrates the clearing results for all participants in the market clearing model when the BESS bids 10 MW, 30 MW, and 50 MW, respectively. When the BESS bids 10 MW, the market clearing model has to purchase additional capacity from other bidders to ensure the supply-demand balance, resulting in higher clearing prices. As the bidding capacity of the BESS increases to 30 MW, less capacity is needed from other bidders, leading to a decrease in clearing prices. When the BESS bids 50 MW, the clearing prices are significantly lower. Thus, as a price maker, the BESS’s bidding capacity influences the market clearing prices.


Figure 3: Clearing results of the market clearing model under different BESS bidding capacities.

IV-B BESS Cost-benefit Analysis

To assess the profitability of the mathematical and DRL models, we evaluate their benefit performance in the joint energy and FCAS markets. Fig. 4 displays the bidding results of the mathematical model, wherein the BESS bids higher capacities during price spikes in the contingency markets, while mainly allocating capacities to the regulation markets under normal conditions. Fig. 5 shows the bidding results of the DRL model, which tends to distribute its capacities more uniformly across both the contingency and regulation markets.

Fig. 6 provides statistics on the clearing prices under both strategies, aggregating the maximum, mean, and minimum values of each FCAS market into a 24-hour profile along with their averaged values. Generally, the market trends show that raise markets typically have higher prices than lower markets, and regulation markets are priced above contingency markets. Nevertheless, the contingency markets exhibit greater price volatility, offering rare but large profit opportunities. Under both bidding strategies, the models generally allocate more capacity to the regulation markets than to the contingency markets to secure stable revenues. However, notable differences exist in handling contingency capacities: the mathematical model tends to shift regulation capacity to the contingency markets after predicting price spikes, whereas the DRL model consistently bids a portion of capacity into the contingency markets to pursue potential profit opportunities. Additionally, the clearing results differ between the models: the peak prices achieved by the DRL model in both the contingency raise and lower markets exceed $1,000/MW, significantly surpassing those of the mathematical model.


Figure 4: BESS bidding results of the mathematical model.


Figure 5: BESS bidding results of the DRL model.


Figure 6: Statistics on clearing prices under the strategies of DRL and mathematical models.

Table II summarizes the profits of the BESS under the strategies of the DRL model and the mathematical model. In the energy market, the arbitrage profits of the BESS utilizing the DRL model show an increase of 7.69% over those achieved by the mathematical model. In the regulation markets, the DRL model’s profits from lower and raise regulation increase by 62.23% and 21.90%, respectively. Additionally, the DRL model enhances profits in the contingency markets by 8.77% and 7.67% and achieves a 5.63% reduction in degradation costs. Overall, the BESS under the DRL model generates $37,163 more profit than under the mathematical model, demonstrating the superior performance of the DRL model in maximizing bidding benefits.

TABLE II: Profit comparison
Revenue DRL model Mathematical model
Energy arbitrage $81,240 $75,436
Regulation lower $13,937 $8,591
Regulation raise $98,160 $80,525
Contingency lower $6,387 $5,872
Contingency raise $73,199 $67,985
Degradation cost -$47,024 -$49,673
Overall profit $225,899 $188,736

IV-C Evaluation of Bidding Risk

To assess the role of risk management in bidding strategies, we analyze the detailed distribution of bidding bands in each market for both the CVaR-DRL and the original DRL algorithms. Fig. 7 presents a box plot of the BESS bidding distribution for the CVaR-DRL model. The bidding bands are monotonically increasing from band 1 to band 10: band 1 represents the maximum capacity that can be purchased at the lowest clearing price, whereas band 10 corresponds to the maximum capacity at the highest price. A higher concentration of capacity in the lower bands suggests that the BESS prioritizes having its bids accepted and disregards the risk of lower clearing prices and profits. Conversely, a concentration in the higher bands indicates a pursuit of higher clearing prices with less consideration of the risk of bid failure. In terms of the median, bidding capacities tend to be low in band 1 across all markets and gradually increase up to approximately band 3, which illustrates that the BESS attempts to elevate clearing prices while maintaining stable revenue. Beyond band 3, the lower extremes of capacity bids stabilize above 10 MW, while the upper extremes continue to rise slightly, indicating that in some scenarios the BESS may incrementally increase bids to between 30 MW and 35 MW to further elevate the clearing prices. The outliers highlight the capacity distribution in rare cases.

Fig. 8 shows the case of the original DRL model. Compared to the proposed algorithm, the original DRL has a wider distribution in both the band dimension and the capacity dimension. Specifically, the median regulation capacity ceases to increase at around band 8, with the lower extreme dropping below 10 MW and the upper extreme reaching up to 40 MW. A similar pattern is observed in the contingency capacity, where outliers occur frequently. This wider distribution suggests that the original DRL’s strategies are more aggressive, often disregarding the risks of bidding failure and potential profit loss. In contrast, the CVaR-DRL algorithm adopts a more conservative strategy by integrating CVaR, which avoids heavy profit losses in the worst bidding failure scenarios. In terms of bidding profits, CVaR-DRL on the same testing scenarios shows increases of $7,879 and $1,240 in the regulation and contingency markets respectively, reflecting the profit advantages of risk management in bidding strategies.


Figure 7: BESS bidding distribution of the CVaR-DRL model.


Figure 8: BESS bidding distribution of the original DRL model.

IV-D Evaluation of LLMs’ Strategies

The drift of FCAS market patterns over time may cause the DRL agent’s original strategy to no longer adapt well to new scenarios, leading to a decline in profitability. Thus, the LLMs aim to detect scenario changes in market patterns online for better timeliness, adjust strategies within a safe profit margin for better reliability, and provide explanations to users for better interpretability.

Fig. 9 compares the bidding results under the LLMs-assisted strategy and the DRL agent’s original strategy in a new scenario outside the training set. Typically, along the price axis, the volatility of contingency market clearing prices is significantly higher than that of the regulation markets because the demand is often more urgent. In this scenario, however, the regulation market experienced greater volatility. Agent-1 detected the profit opportunities arising from this abnormal market condition, and hence, along the capacity axis, Agent-2 partially shifted capacity from the contingency markets to the regulation markets. In contrast, the original strategy of reserving more contingency capacity based on historical experience would have resulted in profit losses in this scenario, indicating that the LLMs’ bidding strategy is more reliable in terms of profitability in such uncommon scenarios.

Along the time axis, the capacity shifting occurred only during the first half of the scenario. For the rest of the period, the regulation prices returned to common market conditions; thus, the original strategy remained essentially unchanged to avoid over-reliance on the LLMs’ decisions and to ensure a safe profit margin given the possibility of hallucinations. This reflects the LLMs’ timeliness in monitoring dynamic changes in market conditions online.

To better demonstrate the workflow of LLMs, we partially excerpt the prompt and answers during interaction with LLMs as shown in Fig. 10. In this process, multiple agents iteratively update their strategies through interactive collaboration based on feedback and affect the interaction between the DRL agent and the environment, thus assisting the DRL agent in making better bidding strategies in new scenarios. In addition, LLMs provide the motivation and rationale for each agent’s decision, which enhances the interpretability of the strategies.


Figure 9: Comparison of LLMs-assisted and DRL original strategies.


Figure 10: The prompt and answers during interaction with LLMs.

V Conclusion

In this paper, we model the complete bidding process of a BESS in the joint energy and FCAS markets. We develop the CVaR-DRL algorithm to improve bidding robustness under aleatoric uncertainty and propose the LLMs-based framework to improve bidding generalization under epistemic uncertainty. The experimental results show that the proposed method achieves higher bidding profits than traditional mathematical methods, that CVaR-DRL better reduces the bidding risks caused by forecasting uncertainties, and that the LLMs enhance the bidding performance of strategies in new scenarios through online market condition analysis, hybrid decision-making, and self-reflection in the “DRL-LLMs” loop.

References

  • [1] AEMO, Guide to Ancillary Services in the National Electricity Market, 2023.
  • [2] AEMO, Market Ancillary Service Specification, 2023.
  • [3] Y. Zheng, Z. Y. Dong, F. J. Luo, K. Meng, J. Qiu, and K. P. Wong, “Optimal allocation of energy storage system for risk mitigation of DISCOs with high renewable penetrations,” IEEE Transactions on Power Systems, vol. 29, no. 1, pp. 212–220, 2014.
  • [4] A. A. Mohamed, R. J. Best, X. Liu, and D. J. Morrow, “Single electricity market forecasting and energy arbitrage maximization framework,” IET Renewable Power Generation, vol. 16, no. 1, pp. 105–124, 2022. [Online]. Available: https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/rpg2.12345
  • [5] L. Feng, X. Zhang, C. Li, X. Li, B. Li, J. Ding, C. Zhang, H. Qiu, Y. Xu, and H. Chen, “Optimization analysis of energy storage application based on electricity price arbitrage and ancillary services,” Journal of Energy Storage, vol. 55, p. 105508, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2352152X22015006
  • [6] Z. Wang, C. Li, X. Zhou, R. Xie, X. Li, and Z. Dong, “Stochastic bidding for VPPs enabled ancillary services: A case study,” Applied Energy, vol. 352, p. 121918, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0306261923012825
  • [7] C. Guo, X. Wang, Y. Zheng, and F. Zhang, “Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning,” Energy, vol. 238, p. 121873, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0360544221021216
  • [8] Y. Li, N. Yu, and W. Wang, “Machine learning-driven virtual bidding with electricity market efficiency analysis,” IEEE Transactions on Power Systems, vol. 37, no. 1, pp. 354–364, 2022.
  • [9] W. Chen, J. Qiu, J. Zhao, Q. Chai, and Z. Y. Dong, “Customized rebate pricing mechanism for virtual power plants using a hierarchical game and reinforcement learning approach,” IEEE Transactions on Smart Grid, vol. 14, no. 1, pp. 424–439, 2023.
  • [10] Y. Ye, D. Qiu, X. Wu, G. Strbac, and J. Ward, “Model-free real-time autonomous control for a residential multi-energy system using deep reinforcement learning,” IEEE Transactions on Smart Grid, vol. 11, no. 4, pp. 3068–3082, 2020.
  • [11] Q. Tang, H. Guo, and Q. Chen, “Multi-market bidding behavior analysis of energy storage system based on inverse reinforcement learning,” IEEE Transactions on Power Systems, vol. 37, no. 6, pp. 4819–4831, 2022.
  • [12] X. Liu, S. Li, and J. Zhu, “Optimal coordination for multiple network-constrained VPPs via multi-agent deep reinforcement learning,” IEEE Transactions on Smart Grid, 2022.
  • [13] W. R. Clements, B. Van Delft, B.-M. Robaglia, R. B. Slaoui, and S. Toth, “Estimating risk and uncertainty in deep reinforcement learning,” arXiv preprint arXiv:1905.09638, 2019.
  • [14] S. Alexander, T. Coleman, and Y. Li, “Minimizing CVaR and VaR for a portfolio of derivatives,” Journal of Banking & Finance, vol. 30, no. 2, pp. 583–605, 2006, Risk Management and Optimization in Finance. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0378426605000841
  • [15] O. Lockwood and M. Si, “A review of uncertainty for deep reinforcement learning,” in Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, vol. 18, no. 1, 2022, pp. 155–162.
  • [16] Y. Cao, H. Zhao, Y. Cheng, T. Shu, G. Liu, G. Liang, J. Zhao, and Y. Li, “Survey on large language model-enhanced reinforcement learning: Concept, taxonomy, and methods,” arXiv preprint arXiv:2404.00282, 2024.
  • [17] S. Sharan, F. Pittaluga, M. Chandraker et al., “LLM-assist: Enhancing closed-loop planning with language-based reasoning,” arXiv preprint arXiv:2401.00125, 2023.
  • [18] N. Nascimento, P. Alencar, and D. Cowan, “GPT-in-the-loop: Adaptive decision-making for multiagent systems,” arXiv preprint arXiv:2308.10435, 2023.
  • [19] Z. Ji, T. Yu, Y. Xu, N. Lee, E. Ishii, and P. Fung, “Towards mitigating LLM hallucination via self reflection,” in Findings of the Association for Computational Linguistics: EMNLP 2023, H. Bouamor, J. Pino, and K. Bali, Eds.   Singapore: Association for Computational Linguistics, Dec. 2023, pp. 1827–1843. [Online]. Available: https://aclanthology.org/2023.findings-emnlp.123
  • [20] AEMO, The National Electricity Market Fact Sheet, 2021.
  • [21] J. Riesz, J. Gilmore, and I. MacGill, “Frequency control ancillary service market design: Insights from the Australian national electricity market,” The Electricity Journal, vol. 28, no. 3, pp. 86–99, 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1040619015000494
  • [22] AEMO, FCAS Model in NEMDE, 2021.
  • [23] A. Ng, D. Harada, and S. J. Russell, “Policy invariance under reward transformations: Theory and application to reward shaping,” in International Conference on Machine Learning, 1999. [Online]. Available: https://api.semanticscholar.org/CorpusID:5730166
  • [24] C. Ying, X. Zhou, H. Su, D. Yan, N. Chen, and J. Zhu, “Towards safe reinforcement learning via constraining conditional value-at-risk,” arXiv preprint arXiv:2206.04436, 2022.
  • [25] B. Eysenbach and S. Levine, “Maximum entropy RL (provably) solves some robust RL problems,” arXiv preprint arXiv:2103.06257, 2021.
  • [26] Y. Chow and M. Ghavamzadeh, “Algorithms for CVaR optimization in MDPs,” Advances in Neural Information Processing Systems, vol. 27, 2014.