Improved Approximation Guarantees for Power Scheduling Problems With Sum-of-Squares Constraints
Abstract
We study a class of combinatorial scheduling problems characterized by a particular type of constraint often associated with electrical power or gas energy. This constraint appears in several practical applications and is expressed as a sum of squares of linear functions. Its nonlinear nature adds complexity to the scheduling problem, rendering it notably challenging, even in the case of a linear objective. In fact, exact polynomial time algorithms are unlikely to exist, and thus, prior works have focused on designing approximation algorithms with polynomial running time and provable guarantees on the solution quality. In an effort to advance this line of research, we present novel approximation algorithms yielding significant improvements over the existing state-of-the-art results for these problems.
Introduction
Management of power systems involves operational routines concerned with the optimal allocation/dispatch of limited resources (power/generation) under various types of objectives and constraints, including total fuel cost, total profit, emissions, active power loss, and voltage magnitude deviation, to name a few. These problems are conventionally modeled as quadratic programs (possibly with discrete variables), which are in general hard to solve exactly or approximately. In this paper, we focus on a class of scheduling problems where the energy constraints are modeled as a sum of squares of linear functions. Such constraints have been studied in a number of works, including electric vehicle charging in smart grids (Yu and Chau 2013; Karapetyan et al. 2018; Elbassioni, Karapetyan, and Nguyen 2019; Khonji et al. 2019), welfare maximization in gas supply networks (Klimm et al. 2022), and scheduling of tasks on machines (Wierman, Andrew, and Tang 2012; Bansal, Kimbrel, and Pruhs 2004).
In (Yu and Chau 2013), the authors investigated the so-called Complex-Demand Knapsack Problem (CKP), formulated to optimize the distribution of power among consumers within Alternating Current (AC) electrical systems. In CKP, power (representing user demand) is expressed in complex numbers (Grainger and Stevenson 1994), where the real part corresponds to active power, the imaginary part stands for reactive power, and the magnitude of the complex number measures the apparent power. Each customer possesses a specific power demand and receives a certain utility if their demand is fully met. However, due to physical limitations on the magnitude of the total power supply available in the system, only a subset of users can be selected for power allocation. Mathematically, the power constraint is formulated as a quadratic function, which is the sum of two squares of linear functions.
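To make the constraint concrete, here is a tiny illustrative sketch (the function name and instance values are our own, not taken from the cited works): each demand is a complex number whose real and imaginary parts are active and reactive power, and a subset of users is admissible when the magnitude of the aggregate demand stays within the supply capacity.

```python
# Toy illustration of the CKP feasibility test: demands are complex
# numbers (active + j*reactive power); a subset is feasible when the
# magnitude (apparent power) of the aggregate demand is within capacity.
def feasible(demands, subset, capacity):
    total = sum(demands[k] for k in subset)  # aggregate complex power
    return abs(total) <= capacity            # apparent-power constraint

demands = [3 + 4j, 1 + 2j, 2 + 1j]
print(feasible(demands, [0], 5.0))       # |3+4j| = 5, feasible
print(feasible(demands, [0, 1], 5.0))    # |4+6j| > 5, infeasible
```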
The sum-of-squares constraint has also appeared in the context of gas supply networks (Klimm et al. 2022). As noted in (Klimm et al. 2022), a gas pipeline network can be modeled as a directed path graph with different entry and exit nodes. Transportation requests are characterized by their entry and exit nodes and the amounts of gas to be transported. The gas flow on an edge from one node to another is modeled by the Weymouth equations (Weymouth 1912), which describe the difference of the squared pressures between these nodes. For the operation of gas networks, one needs to choose a subset of transportation requests to be served such that the maximum difference of the squared pressures in the network does not exceed a given threshold. This pressure constraint can then be formulated as a sum of squares of linear functions.
Mathematically, the above two problems can be cast as a binary program that seeks to maximize a linear objective function subject to a sum-of-squares constraint. This problem covers the classical Knapsack problem as a special case and is thus NP-hard. The first algorithmic result was given in (Woeginger 2000), where it was proved that the problem is strongly NP-hard, and thus does not admit a fully polynomial time approximation scheme (FPTAS). Yu and Chau (Yu and Chau 2013) developed a 1/2-approximation algorithm, which was subsequently improved to a PTAS in (Chau, Elbassioni, and Khonji 2016), the best possible result one can hope for this problem. In (Elbassioni and Nguyen 2017), the authors further extended this result to the case where the constraint contains a constant number of squares and devised an approximation algorithm for the variant with a submodular objective. Recently, (Klimm et al. 2022) studied the version with an unbounded number of squares, presented a constant-factor approximation algorithm for a linear objective, and ruled out the existence of approximation factors beyond a certain constant. These results rely on the so-called pipage rounding technique. One interesting question raised here is whether this rounding technique can be extended to nonlinear objectives such as quadratic and submodular functions, which are frequently encountered in real-world economic contexts.
The minimization version of CKP (referred to as MinCKP below) with a linear objective and a sum-of-two-squares constraint has been explored in (Elbassioni, Karapetyan, and Nguyen 2019). As noted therein, this version constitutes a special case of the Economic Dispatch problem (a.k.a. Unit Commitment, Generation Dispatch Control), which is principal to power systems engineering and is responsible for minimizing generation costs while maintaining the balance between the net supply and demand (Wood and Wollenberg 2012). Unlike the maximization version, however, MinCKP's natural relaxation is non-convex and hence harder to approximate. As of now, the best-known approximation was the quasi-PTAS (QPTAS) attained in (Elbassioni, Karapetyan, and Nguyen 2019) via a geometric approach, whereas the existence of a PTAS remained an open question.
Taking a step forward, we present two novel approximation algorithms: one for MinCKP and one for a generalized variant of CKP. More concretely, the contributions of this study are as follows:
•
First, we provide a PTAS for MinCKP, which is a notable improvement upon the state-of-the-art result of a QPTAS provided in (Elbassioni, Karapetyan, and Nguyen 2019). At the heart of the approximation scheme is a polynomial time algorithm for solving the (non-convex) relaxation problem to optimality, by exploiting the special structure of optimal solutions. Note that a PTAS is the best possible result one can hope for, unless P = NP, as one can use a technique similar to that in (Woeginger 2000) to rule out the existence of an FPTAS. (In (Woeginger 2000) the author provided a reduction from a variant of Partition to disprove the existence of an FPTAS for the CKP problem. With some minor modifications, this reduction can also be established for the minimization version of CKP.)
•
Next, we show that any approximation algorithm for the relaxation problem of maximizing a (non-monotone) submodular quadratic function over a general (non-negative) convex quadratic constraint leads to an approximation algorithm for the discrete case whose factor degrades only by a constant related to the reverse golden ratio $\bar{\varphi}=\frac{\sqrt{5}-1}{2}$. The idea is to generalize the technique of (Klimm et al. 2021) developed for a similar problem with a linear objective: given a fractional solution, round it in such a way that the resulting discrete solution remains feasible and the objective value does not decrease. Since in the studied problem both the objective and the constraint are quadratic, we have to develop a more sophisticated rounding technique than that derived in (Klimm et al. 2022) for the linear objective case.
Remark
A common technique for designing approximation algorithms is to first solve the relaxation to obtain an optimal (fractional) solution, and then to round it to an integer one with a small loss in the objective value. It should be noted, however, that MinCKP's straightforward relaxation is non-convex, and thus, finding an efficiently computable lower/upper bound on the objective is by itself a major challenge.
Problem Formulations and Notation
In this section, we formalize the mathematical models of the two power scheduling problems under study. The first one, termed MinCKP, is the minimization version of CKP (Yu and Chau 2013) and is defined as

$$\min\ c^{T}x \quad \text{s.t.}\quad (a^{T}x)^{2}+(b^{T}x)^{2}\ \ge\ D^{2},\qquad x\in\{0,1\}^{n}, \tag{1}$$

where $a,b,c\in\mathbb{R}^{n}$ and $D\in\mathbb{R}_{+}$. We may assume, w.l.o.g., that $c>0$ (if $c_{i}=0$ for some $i$, the corresponding variable $x_{i}$ can be set to $1$). Also, it is assumed that $a,b$ are non-negative, otherwise the problem does not admit an approximation algorithm with finite ratio (Chau, Elbassioni, and Khonji 2016). In MinCKP, the scalar $D$ denotes the aggregate apparent power demand to be met, while the column vectors $a$ and $b$ capture the active and reactive power outputs of the contracted generation units. Accordingly, the vector $c$ represents the costs associated with dispatching these units. These costs may quantify the expenses carried by the load-serving entity (operator of the grid), including the utilization of a generation source, or alternatively, the amount paid due to CO2 emissions.
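As a minimal sketch (exhaustive, exponential time, for illustration only; the formulation is assumed to be the covering program (1) above), a tiny MinCKP instance can be solved by enumeration:

```python
from itertools import product

# Exhaustive solver for a tiny MinCKP instance, assuming the formulation
#   minimize c.x  subject to  (a.x)^2 + (b.x)^2 >= D^2,  x in {0,1}^n.
def min_ckp_bruteforce(a, b, c, D):
    n = len(a)
    best = None
    for x in product((0, 1), repeat=n):
        ax = sum(a[i] * x[i] for i in range(n))
        bx = sum(b[i] * x[i] for i in range(n))
        if ax * ax + bx * bx >= D * D:            # covering constraint
            cost = sum(c[i] * x[i] for i in range(n))
            if best is None or cost < best[0]:
                best = (cost, x)
    return best

# A single unit with output 3 + 4j already meets the demand D = 5 at cost 5.
print(min_ckp_bruteforce([3, 1, 2], [4, 2, 1], [5, 2, 2], 5.0))
```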
The second problem, referred to as MaxSUB, generalizes the one studied by (Elbassioni and Nguyen 2017; Klimm et al. 2022) and takes the following form:

$$\max\ f(x)=x^{T}Hx \quad \text{s.t.}\quad \sum_{j=1}^{m}(a_{j}^{T}x)^{2}\ \le\ b,\qquad x\in\{0,1\}^{n}, \tag{2}$$

where $a_{j}\in\mathbb{R}^{n}_{+}$ for $j\in[m]$, $b\in\mathbb{R}_{+}$, and the objective is submodular, i.e., the matrix $H$ has non-positive off-diagonal entries. Let $c$ be the vector of diagonal entries of $H$. One can assume, w.l.o.g., that $c$ is non-negative, since if $c_{i}<0$ then setting $x_{i}=0$ does not affect the optimal solution. The constraint (2) can be written in the form of $x^{T}Wx\le b$, for some positive semi-definite (PSD) matrix $W$. Since $x_{i}^{2}=x_{i}$ for binary variables, one can write the objective function in the equivalent form $f(x)=c^{T}x+x^{T}\widetilde{H}x$, where $\widetilde{H}$ is obtained from $H$ by setting all the diagonal entries to $0$, without changing optimal solutions. It is not hard to verify that in this case the (discrete) function $f$ (over $\{0,1\}^{n}$) satisfies submodularity, i.e., $f(x_{S\cup T})+f(x_{S\cap T})\le f(x_{S})+f(x_{T})$, for every two sets $S,T\subseteq[n]$, where $x_{S}$ denotes the binary vector with $(x_{S})_{i}=1$ if and only if $i\in S$.
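The submodularity claim can be checked directly on small instances. The following sketch (function names and data are ours) verifies the lattice inequality $f(x\vee y)+f(x\wedge y)\le f(x)+f(y)$ for a quadratic function with non-positive off-diagonal entries, and shows it failing when an off-diagonal entry is positive:

```python
from itertools import product

# f(x) = c.x + x^T H x with zero-diagonal H (diagonal folded into c, as
# in the text). Submodularity over {0,1}^n is the inequality
#   f(x v y) + f(x ^ y) <= f(x) + f(y)  for all binary x, y.
def f(x, c, H):
    n = len(c)
    lin = sum(c[i] * x[i] for i in range(n))
    quad = sum(H[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return lin + quad

def is_submodular(c, H):
    n = len(c)
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            join = tuple(max(u, v) for u, v in zip(x, y))  # x v y
            meet = tuple(min(u, v) for u, v in zip(x, y))  # x ^ y
            if f(join, c, H) + f(meet, c, H) > f(x, c, H) + f(y, c, H):
                return False
    return True

c = [3, 2, 4]
H = [[0, -1, 0], [-1, 0, -2], [0, -2, 0]]   # non-positive off-diagonal
print(is_submodular(c, H))                   # True
H_bad = [[0, 1], [1, 0]]                     # positive off-diagonal
print(is_submodular([0, 0], H_bad))          # False
```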
Approximation algorithms
For completeness, we briefly review the definition of approximation algorithms and schemes. For $\alpha\in(0,1]$ (resp., $\alpha\ge 1$), a vector $x$ is said to be an $\alpha$-approximate solution for a maximization problem (resp., minimization problem) with an objective $f$, if $x$ is a feasible solution satisfying $f(x)\ge\alpha\cdot\textsc{Opt}$ (resp., $f(x)\le\alpha\cdot\textsc{Opt}$), where $\textsc{Opt}$ is the value of an optimal solution. For simplicity, if $\alpha=1-\epsilon$ (resp., $\alpha=1+\epsilon$), for $\epsilon>0$, we shall call $x$ $\epsilon$-optimal. Recall that a PTAS is an algorithm that runs in time polynomial in the input size for every fixed $\epsilon>0$, and outputs an $\epsilon$-optimal solution. A QPTAS is similar to a PTAS but the running time is quasi-polynomial (i.e., of the form $n^{\mathrm{polylog}(n)}$) for every fixed $\epsilon>0$. Finally, a PTAS becomes an FPTAS if its running time is polynomial in both the input size and $1/\epsilon$.
Notations
We write $[n]=\{1,\dots,n\}$ for any positive integer $n$. For two vectors $x,y\in\mathbb{R}^{n}$ we write $x\le y$ if $x_{i}\le y_{i}$ for all $i\in[n]$. For a vector $x\in\mathbb{R}^{n}$ and a subset $S\subseteq[n]$, we write $x(S)=\sum_{i\in S}x_{i}$. Let $\|x\|_{1}$ and $\|x\|_{2}$ denote the $\ell_{1}$-norm and $\ell_{2}$-norm of a vector $x$, respectively. We denote by $\mathbf{0}$ and $\mathbf{1}$ the vectors (or matrices of appropriate dimensions) of all $0$s and $1$s, respectively. Lastly, with a slight abuse of notation, we shall refer to the value of an optimal solution of a given instance of the MinCKP or MaxSUB problems by $\textsc{Opt}$.
An Approximation Scheme for MinCKP
In this section, we present a PTAS for MinCKP. As previously remarked, the relaxation of MinCKP with continuous variables $x\in[0,1]^{n}$ is non-convex and thus cannot be solved by existing techniques for convex (quadratic) programs. Interestingly, by exploiting the structure of a class of the relaxation's optimal solutions, and carefully analyzing an equivalent dual program, we can reduce the space of candidates so that an optimal solution can be found efficiently. It can then be used to derive an integer solution with a bounded guarantee.
Theorem 1
There is a PTAS for MinCKP.
Denote the (non-convex) continuous program obtained from MinCKP by replacing $x\in\{0,1\}^{n}$ with $x\in[0,1]^{n}$ simply as the relaxation. It might be the case that the optimum solution for the relaxation is attained at the boundary of the feasible set, and thus has irrational components. Hence, by saying "an optimal solution" we essentially mean a rational solution $\hat{x}$ with $c^{T}\hat{x}\le c^{T}x^{*}$ and $(a^{T}\hat{x})^{2}+(b^{T}\hat{x})^{2}\ge D^{2}-\delta$, for an arbitrarily small $\delta>0$, where $x^{*}$ is an optimum solution for the relaxation. For a given $\delta>0$, we say that $\hat{x}$ is $\delta$-feasible for the relaxation if it satisfies the second inequality, and $\delta$-optimal if it satisfies both inequalities. At the heart of our method is the following key result.
Theorem 2
For any given $\delta>0$, a $\delta$-optimal solution for the relaxation can be computed in time polynomial in $n$, $\log(1/\delta)$, and $L$, where $L$ is the length of the binary representation of the input instance.
We assume that $(a^{T}\mathbf{1})^{2}+(b^{T}\mathbf{1})^{2}\ge D^{2}$, so that the problem is feasible. We also assume that $a$ and $b$ are linearly independent vectors; otherwise, the problem simplifies to a linear program. We will prove that, given a fixed positive value $T$, finding a $\delta$-feasible solution of value exactly $T$, if one exists, can be achieved in polynomial time. We denote this problem by Feas($T$). As the objective function and the left-hand side of the constraint (1) are both monotone (note that, given a feasible solution with value less than $T$, we can increase the components of $x$, one by one, without violating the inequality (1), until either $c^{T}x$ becomes equal to $T$, or $x_{i}$ becomes equal to $1$ for all $i$; the latter case cannot happen unless $c^{T}\mathbf{1}\le T$; thus, checking if the optimum value of the relaxation is at most $T$ is equivalent to checking if Feas($T$) is feasible), this result combined with a binary search (via Algorithm 1) yields an efficient algorithm for finding a $\delta$-optimal solution to the relaxation, and hence proving Theorem 2.
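The binary-search outer loop can be sketched generically (the oracle and tolerance names are our own; Algorithm 1 in the paper is the authoritative version): bisect on the target value $T$, using the feasibility test as a monotone predicate.

```python
# Generic bisection on a monotone feasibility predicate, mirroring the
# role of the binary search discussed above: feasible(T) answers whether
# a solution of value at most T exists (monotone in T).
def binary_search_opt(feasible, lo, hi, eps):
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            hi = mid          # value mid is achievable: search below
        else:
            lo = mid          # not achievable: search above
    return hi

# Toy oracle whose true optimum value is 3.7.
opt = binary_search_opt(lambda t: t >= 3.7, 0.0, 10.0, 1e-6)
print(round(opt, 4))          # 3.7
```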
Lemma 1
For a given $T>0$, one can find a $\delta$-feasible solution to Feas($T$), if one exists, by solving polynomially many linear programs.
Proof. A high-level description. We rely on a structural property of optimal solutions of the relaxation, which will be analyzed using the dual of the linear program (P) given below. Based on the values of the (basic feasible) optimal dual solution of this linear program, we can classify the variables in an optimal solution for the relaxation into those certainly having a value in $\{0,1\}$ and those possibly taking values in the open interval $(0,1)$. Each variable $x_{i}$ in the latter set (denoted by $F$ below) has the property that the corresponding objective cost $c_{i}$ can be written as a linear combination of the corresponding $a_{i}$ and $b_{i}$. This property, together with the requirement that the objective value equals $T$, allows us to eliminate one of the sums $\sum_{i\in F}a_{i}x_{i}$, $\sum_{i\in F}b_{i}x_{i}$, leading to a single quadratic inequality in only one variable, which can be solved exactly, thus reducing the problem of checking the feasibility of Feas($T$) to solving polynomially many linear programs. The formal procedure is given in Algorithm 2.
We first give a structural property of an optimal solution of the relaxation, which can help to check the feasibility of Feas($T$) efficiently. Let $x^{*}$ be an optimal solution to the relaxation. We will use $x^{*}$ as a parameter in analyzing the structure of an optimal solution. Let $\alpha=a^{T}x^{*}$ and $\beta=b^{T}x^{*}$. Consider the following linear program:

$$(\mathrm{P})\qquad \min\ c^{T}x \quad \text{s.t.}\quad a^{T}x\ \ge\ \alpha,\quad b^{T}x\ \ge\ \beta,\quad \mathbf{0}\le x\le\mathbf{1}.$$
Note that the relaxation and the linear program above have the same optimal solutions. Indeed, it can be seen that every feasible solution to the linear program is also feasible to the relaxation. Since both programs have the same objective, every optimal solution to the linear program must also be optimal to the relaxation. In what follows we make use of its dual in analyzing the structure of such a solution.
The dual of (P) is

$$(\mathrm{D})\qquad \max\ \alpha u+\beta v-\mathbf{1}^{T}w$$
$$\text{s.t.}\quad a_{i}u+b_{i}v-w_{i}\ \le\ c_{i},\qquad i\in[n], \tag{3}$$
$$\phantom{\text{s.t.}\quad} u,\ v,\ w\ \ge\ 0. \tag{4}$$
Since the primal is feasible (with the primal solution $x^{*}$), the dual is also feasible; let $(u^{*},v^{*},w^{*})$ be an optimal (basic feasible) dual solution corresponding to the primal solution $x^{*}$ (note that the linear independence of $a$ and $b$ guarantees that the optimum of the dual is attained at a vertex). The optimum values of the primal-dual pair coincide by strong duality. Moreover, by the complementary slackness conditions, it holds that, for all $i\in[n]$,

(i) $x^{*}_{i}\,(c_{i}-a_{i}u^{*}-b_{i}v^{*}+w^{*}_{i})=0$,

(ii) $w^{*}_{i}\,(1-x^{*}_{i})=0$.

These conditions imply that the primal solution satisfies $x^{*}_{i}=0$ whenever $a_{i}u^{*}+b_{i}v^{*}<c_{i}$, and $x^{*}_{i}=1$ whenever $a_{i}u^{*}+b_{i}v^{*}>c_{i}$. Note that $x^{*}_{i}$ can take any value in $[0,1]$ when the equality $a_{i}u^{*}+b_{i}v^{*}=c_{i}$ holds.
The following observation is due to the linear independence of $a$ and $b$ and the fact that $(u^{*},v^{*},w^{*})$ is assumed to be a basic feasible solution to the dual.

Observation 1

There must be a pair $i\ne j$ such that the two vectors $(a_{i},b_{i})$ and $(a_{j},b_{j})$ are linearly independent and the dual vector $(u^{*},v^{*})$ fulfills the system of equations $a_{i}u+b_{i}v=c_{i}$, $a_{j}u+b_{j}v=c_{j}$.

Indeed, since $(u^{*},v^{*},w^{*})$ is a basic feasible solution to the dual, there must exist $n+2$ linearly independent constraints among the constraints (3)-(4) that are tight at it. This implies that we must have two distinct indices $i\ne j$ such that the inequalities (3) and (4) hold as equalities for both $i$ and $j$ and such that the four obtained equations are linearly independent. This leads to the claim in the observation.
The above observation implies that $(u^{*},v^{*})$ is uniquely and fully determined by a linear system obtained by selecting two indices $i\ne j$ such that the two vectors $(a_{i},b_{i})$ and $(a_{j},b_{j})$ are linearly independent. Once the values $u^{*},v^{*}$ are obtained, one can classify the values of variables in the primal solution as follows:
•
If $a_{i}u^{*}+b_{i}v^{*}<c_{i}$, then $x^{*}_{i}=0$,
•
If $a_{i}u^{*}+b_{i}v^{*}>c_{i}$, then $x^{*}_{i}=1$,
•
If $a_{i}u^{*}+b_{i}v^{*}=c_{i}$, then $x^{*}_{i}\in[0,1]$.
In other words, based on the values of $(u^{*},v^{*})$, we know that some of the variables in the primal solution must be equal to $0$ or $1$, but possibly not all. Define $N_{0},N_{1}$ to be the sets containing indices $i$ with $a_{i}u^{*}+b_{i}v^{*}<c_{i}$ and $a_{i}u^{*}+b_{i}v^{*}>c_{i}$, respectively, and let $F=[n]\setminus(N_{0}\cup N_{1})$. Then the values of the $x^{*}_{i}$'s, $i\in F$, must be optimal to the following program, parameterized by $(N_{0},N_{1})$:

$$\min\ \sum_{i\in F}c_{i}x_{i}\quad\text{s.t.}\quad \sum_{i\in F}a_{i}x_{i}\ \ge\ \alpha-a(N_{1}),\quad \sum_{i\in F}b_{i}x_{i}\ \ge\ \beta-b(N_{1}),\quad x_{i}\in[0,1],\ i\in F.$$

It follows that the problem of checking if Feas($T$) has a feasible solution is equivalent to checking if there is a feasible solution to this program with $\sum_{i\in F}c_{i}x_{i}=T-c(N_{1})$. By the definition of $F$, it follows that $c_{i}=a_{i}u^{*}+b_{i}v^{*}$ for every $i\in F$, and hence $\sum_{i\in F}c_{i}x_{i}=u^{*}\sum_{i\in F}a_{i}x_{i}+v^{*}\sum_{i\in F}b_{i}x_{i}$.
Based on this, the problem is now to check if the following nonlinear system is feasible in the variables $s=\sum_{i\in F}a_{i}x_{i}$ and $t=\sum_{i\in F}b_{i}x_{i}$:

$$u^{*}s+v^{*}t\ =\ T-c(N_{1}),\qquad (s+a(N_{1}))^{2}+(t+b(N_{1}))^{2}\ \ge\ D^{2},\qquad 0\le s\le A,\quad 0\le t\le B, \tag{5}$$

where $A=\sum_{i\in F}a_{i}$ and $B=\sum_{i\in F}b_{i}$ are upper bounds on the values of $s$ and $t$, respectively.
Since we assume $c>0$, at least one of the dual values $u^{*},v^{*}$ must be positive. Without loss of generality, we assume $u^{*}>0$. To check the feasibility of (5), one can compute $s$ as a function of $t$ from the first equation, and then plug it into the second one, leading to a quadratic inequality in the single variable $t$:

$$\Big(\frac{T-c(N_{1})-v^{*}t}{u^{*}}+a(N_{1})\Big)^{2}+(t+b(N_{1}))^{2}\ \ge\ D^{2}, \tag{6}$$

from which one can easily get the solution set of $t$ as

$$\mathcal{T}\ =\ [0,B]\setminus(t_{1},t_{2}),$$

where $t_{1}\le t_{2}$ are the two (possibly identical) roots of the quadratic equation obtained by setting the inequality in (6) to an equality (if the quadratic equation has no real solution, then (6) imposes no restriction on $t$, and we take $\mathcal{T}=[0,B]$). Note that, depending on the signs of $t_{1}$ and $t_{2}$, $\mathcal{T}$ consists of at most two intervals on the real line. Using each interval $I$ of $\mathcal{T}$ and the equation in (5), the remaining variables $x_{i}$, $i\in F$, can be recovered by solving at most 2 linear programs, one for each choice of $I$:

$$\text{find}\ x\in[0,1]^{F},\ t\in I\quad\text{s.t.}\quad \sum_{i\in F}b_{i}x_{i}=t,\qquad \sum_{i\in F}a_{i}x_{i}=\frac{T-c(N_{1})-v^{*}t}{u^{*}}.$$
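The one-variable step can be made concrete. The sketch below (coefficients and bounds are illustrative, and the inequality direction is "$\le$" for concreteness; the complementary "$\ge$" case is symmetric) returns the sub-intervals of $[0,\mathrm{ub}]$ on which a quadratic inequality holds, by computing the roots and intersecting with the box constraint:

```python
import math

# Sub-intervals of [0, ub] on which  alpha*t^2 + beta*t + gamma <= 0
# holds (assumes alpha != 0; the ">=" case is symmetric). This mirrors
# reducing the two-variable system to a single quadratic inequality.
def quadratic_feasible_intervals(alpha, beta, gamma, ub):
    disc = beta * beta - 4 * alpha * gamma
    if disc < 0:
        # no real roots: the sign is constant on the whole line
        holds = alpha * (ub / 2) ** 2 + beta * (ub / 2) + gamma <= 0
        return [(0.0, ub)] if holds else []
    r1 = (-beta - math.sqrt(disc)) / (2 * alpha)
    r2 = (-beta + math.sqrt(disc)) / (2 * alpha)
    lo, hi = min(r1, r2), max(r1, r2)
    if alpha > 0:              # opens upward: feasible between the roots
        seg = (max(lo, 0.0), min(hi, ub))
        return [seg] if seg[0] <= seg[1] else []
    out = []                   # opens downward: feasible outside the roots
    if lo >= 0.0:
        out.append((0.0, min(lo, ub)))
    if hi <= ub:
        out.append((max(hi, 0.0), ub))
    return out

# t^2 - 3t + 2 = (t - 1)(t - 2) <= 0 exactly on [1, 2].
print(quadratic_feasible_intervals(1.0, -3.0, 2.0, 10.0))  # [(1.0, 2.0)]
```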
Note that, due to the possible irrationality of $t_{1}$ and $t_{2}$, we may need to work with approximations of $t_{1}$ and $t_{2}$, i.e., compute numbers $\hat{t}_{1}$ and $\hat{t}_{2}$ such that $|\hat{t}_{1}-t_{1}|\le\delta'$ and $|\hat{t}_{2}-t_{2}|\le\delta'$. Such approximations can be found in time polynomial in the input length and $\log(1/\delta')$, and by choosing $\delta'$ sufficiently small, we can guarantee that (6) is satisfied within an additive error of $\delta$, thus yielding a $\delta$-feasible solution for Feas($T$). ❑
Proof. (of Theorem 1). We rely on the partial enumeration method (Frieze and Clarke 1984). For subsets $S_{1},S_{0}\subseteq[n]$, let MinCKP$(S_{1},S_{0})$ denote the restriction of MinCKP when we enforce $x_{i}=1$ for $i\in S_{1}$ and $x_{i}=0$ for $i\in S_{0}$. It is straightforward to modify Algorithm 1 to find a $\delta$-optimal solution for the relaxation of MinCKP$(S_{1},S_{0})$. Given the desired accuracy $\epsilon>0$, we consider all subsets $S_{1}\subseteq[n]$ of size at most $k$, for a suitable $k=O(1/\epsilon)$, and let $S_{0}=\{i\in[n]\setminus S_{1}:\ c_{i}>\min_{j\in S_{1}}c_{j}\}$. For each considered subset $S_{1}$, we solve the relaxation of MinCKP$(S_{1},S_{0})$ and round the $\delta$-optimal solution obtained as described below. Among all the rounded solutions obtained, we return the solution with minimum cost. Effectively, we are guessing the indices with the highest costs in an optimal solution of MinCKP and solving for the remaining items. Let us denote the corresponding sets in the optimal guess by $S_{1}^{*}$ and $S_{0}^{*}$. Let $\bar{x}$ be a $\delta$-optimal solution to the relaxation of MinCKP$(S_{1}^{*},S_{0}^{*})$, $\bar{\alpha}=a^{T}\bar{x}$, $\bar{\beta}=b^{T}\bar{x}$, and consider the following linear program (LP):

$$\min\ c^{T}x\quad\text{s.t.}\quad a^{T}x\ \ge\ \bar{\alpha},\quad b^{T}x\ \ge\ \bar{\beta},\quad x\in[0,1]^{n},$$
$$x_{i}=1\ \text{for}\ i\in S_{1}^{*},\qquad x_{i}=0\ \text{for}\ i\in S_{0}^{*}.$$
A basic fact from Linear Programming tells us that every bounded linear program with only two non-trivial constraints has an optimal (basic feasible) solution lying at some vertex of the polytope defined by the constraints, which cannot have more than two fractional components.
Observation 2
There is an optimal solution to with at most two fractional entries.
Let $\textsc{Opt}_{R}$ and $\textsc{Opt}_{LP}$ be the optimal values of the relaxation of MinCKP$(S_{1}^{*},S_{0}^{*})$ and (LP), respectively. Then, as $\bar{x}$ is both $\delta$-optimal for the former and feasible for the latter, we must have $\textsc{Opt}_{LP}\le c^{T}\bar{x}$. Hence, there exists a basic optimal solution $\tilde{x}$ of (LP), with at most 2 fractional components (which can be found efficiently), which is also a $\delta$-optimal solution of the relaxation of MinCKP$(S_{1}^{*},S_{0}^{*})$. By rounding each of these two fractional components up to $1$, we obtain a $\delta$-feasible solution $\hat{x}$ to MinCKP$(S_{1}^{*},S_{0}^{*})$ of value at most

$$c^{T}\tilde{x}+2\max_{i\notin S_{0}^{*}\cup S_{1}^{*}}c_{i}\ \le\ c^{T}\tilde{x}+\frac{2}{k}\,\textsc{Opt}\ \le\ (1+\epsilon)\,\textsc{Opt},$$

if we choose $k$ large enough and $\delta$ sufficiently small. Observe that, for this choice of $\delta$, $\delta$-feasibility of the binary vector $\hat{x}$ implies its (exact) feasibility, since $(a^{T}\hat{x})^{2}+(b^{T}\hat{x})^{2}$ is a rational number of bounded denominator, and hence cannot lie strictly between $D^{2}-\delta$ and $D^{2}$. We conclude that $\hat{x}$ is a feasible solution to MinCKP of value at most $(1+\epsilon)\textsc{Opt}$. The running time is $n^{O(1/\epsilon)}$ times a polynomial in the input length. ❑
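The partial-enumeration skeleton can be sketched as follows (the subproblem solver here is an exhaustive stand-in for the relaxation-plus-rounding step of the proof; all names and data are ours):

```python
from itertools import combinations, product

# Skeleton of partial enumeration: guess the set S1 of at most k items
# fixed to 1, solve the restricted subproblem, keep the cheapest result.
def partial_enumeration(a, b, c, D, k, solve_rest):
    n, best = len(a), None
    for size in range(k + 1):
        for S1 in combinations(range(n), size):
            sol = solve_rest(a, b, c, D, set(S1))
            if sol is not None and (best is None or sol[0] < best[0]):
                best = sol
    return best

# Exhaustive stand-in for the subproblem solver (illustration only).
def brute_rest(a, b, c, D, ones):
    n, best = len(a), None
    free = [i for i in range(n) if i not in ones]
    for bits in product((0, 1), repeat=len(free)):
        x = [1 if i in ones else 0 for i in range(n)]
        for i, v in zip(free, bits):
            x[i] = v
        ax = sum(a[i] * x[i] for i in range(n))
        bx = sum(b[i] * x[i] for i in range(n))
        if ax * ax + bx * bx >= D * D:
            cost = sum(c[i] * x[i] for i in range(n))
            if best is None or cost < best[0]:
                best = (cost, tuple(x))
    return best

print(partial_enumeration([3, 1, 2], [4, 2, 1], [5, 2, 2], 5.0, 1, brute_rest))
```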
Pipage Rounding for MaxSUB
In this section we provide a pipage rounding procedure for MaxSUB, given a (fractional) solution of its relaxation (obtained by changing the binary variables to continuous ones in $[0,1]$). Our aim is to prove the following result.
Theorem 3
Given an $\alpha$-approximation algorithm for the continuous relaxation of MaxSUB, one can obtain an approximation algorithm for MaxSUB whose factor is $\alpha$ multiplied by a constant depending on the reverse golden ratio $\bar{\varphi}=\frac{\sqrt{5}-1}{2}$.
To prove Theorem 3, we construct an algorithm, formally presented in Algorithm 3, and prove its correctness. Before doing so, inspired by (Klimm et al. 2022), let us introduce two relaxations of MaxSUB, which are needed in the performance analysis of the algorithm later on. In both relaxations, we replace the binary variables by continuous ones in $[0,1]$. Recalling the decomposition of the objective into its linear and off-diagonal parts, define
where is the vector containing all the diagonal entries of . It is worth noting that, because of the non-negativity of , and are equivalent.
Lemma 2
Every feasible integer solution to the original problem is feasible to both relaxations. Also, for every feasible solution to , forms a feasible solution to . Furthermore, the objective value of is at least times the optimum of .
Proof. The first claim is easy to see. For the second one, note that
where the last equality follows from the definition of . It remains to show the last claim. We have that since , and . ❑
Performance analysis of Algorithm 3.
Suppose that $\hat{x}$ is the solution returned at Step 3 of Algorithm 3, computed by the assumed approximation algorithm for the relaxation, so that its objective value is within the guaranteed factor of the optimum of the relaxation. By Lemma 2, the corresponding scaled solution is feasible to the second relaxation. By the correctness of the pipage rounding, to be proved below, repeated applications of Algorithm 4 result in a solution with at most one fractional component, without decreasing the objective or violating the constraint. Hence, the rounded solution is feasible for the original problem MaxSUB.
Now let $\bar{x}$ be the integer solution returned by Algorithm 3. Clearly, $\bar{x}$ is feasible. If the solution obtained after the pipage rounding is already integral, then we are done. Otherwise, suppose that it has a single fractional component, say $x_{i}$, which is rounded up to $1$. By the submodularity of $f$, it holds that

where $x_{S}$ is defined as in Step 9 of Algorithm 3. (Recall that $x_{S}$ denotes the binary vector with $(x_{S})_{i}=1$ if and only if $i\in S$.)
By the definition of , it holds that
Furthermore, by the definition of and and Lemma 2, we must have that
Hence, .
Correctness of the pipage rounding (Algorithm 4)
We prove that at the end of the pipage rounding procedure at most one component of the resulting solution is fractional. To this end, we show that, at each step of the rounding, at least one of the fractional variables is rounded to either $0$ or $1$, without decreasing the objective or increasing the constraint function. Note that the value of a fractional variable and its coefficient (which can be a function of the other variables) may change after each rounding step. In addition, once a variable has been rounded to $0$ or $1$, it will never be considered in the subsequent steps. In particular, the while loop in Algorithm 3 runs for at most $n$ iterations. Suppose that we are at step $k$ of the rounding procedure, with two fractional variables $x_{1},x_{2}$. For ease of presentation, we denote by $f^{(k)}$ and $g^{(k)}$ the parts of the objective and the constraint, respectively, that contain these two variables, viewed as functions of $x_{1}$ and $x_{2}$ with all other variables fixed. We use $f^{(k)}_{1},f^{(k)}_{2}$ (resp., $g^{(k)}_{1},g^{(k)}_{2}$) to denote the first-order partial derivatives of $f^{(k)}$ (resp., $g^{(k)}$) with respect to $x_{1}$ and $x_{2}$. To simplify the notation, we drop the superscript "$(k)$" and use $f$, $g$, $f_{i}$, and $g_{i}$, respectively. It suffices to prove that one can round the two variables such that at least one of them goes down to $0$ or up to $1$, without decreasing $f$ or increasing $g$.
•
If then rounding down both variables to does not decrease or increase . Thus, we may assume that .
•
If then set , and similarly for . Thus, we may assume that .
•
If then set , and similarly for . Thus, we may assume that and .
•
If , we set so as to maximize (which is linear in ). This guarantees that while . Thus we may assume that and similarly .
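For intuition, the classical pipage move, for the simpler situation where the constraint depends on the two variables only through their sum, looks as follows (a sketch with our own names and data; the quadratic constraint handled by Algorithm 4 requires the more careful case analysis above, so this is illustrative only):

```python
# One classical pipage step for a function F(x1, x2) that is linear in
# each variable separately: along the line x1 + x2 = s, F is linear, so
# one endpoint of the feasible segment is at least as good, and there
# one of the two variables becomes integral.
def pipage_step(F, x1, x2):
    s = x1 + x2
    lo = max(0.0, s - 1.0)              # endpoints of the segment
    hi = min(1.0, s)                    # {(t, s - t) : both in [0, 1]}
    cands = [(lo, s - lo), (hi, s - hi)]
    return max(cands, key=lambda p: F(*p))

# Example: a submodular quadratic in two variables (mixed partial -1).
F = lambda u, v: 3 * u + 2 * v - u * v
x1, x2 = pipage_step(F, 0.4, 0.7)
print(x1)   # 1.0 (this coordinate became integral)
```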
The following lemma completes the proof of the correctness of the pipage rounding.
Lemma 3
Suppose that $g_{1}$, $g_{2}$, and $f$ together with its first derivatives $f_{1}$ and $f_{2}$ are all strictly positive. Then there always exist two parameters $\epsilon_{1},\epsilon_{2}$ satisfying simultaneously both inequalities $f(x_{1}+\epsilon_{1},x_{2}+\epsilon_{2})\ge f(x_{1},x_{2})$ and $g(x_{1}+\epsilon_{1},x_{2}+\epsilon_{2})\le g(x_{1},x_{2})$, with at least one of $x_{1}+\epsilon_{1}$, $x_{2}+\epsilon_{2}$ being held at the boundary of its domain.
Proof. We have that
(7) | |||
(8) |
where and . We will fix one of the variables or to one of its two boundary values, and assign the other variable a value in its range such that (7) is satisfied. To this end, we consider four possible cases below.
•
Case 1: . Plugging this into (7) implies which are valid inequalities (even if ) for some if and only if the following system is feasible
•
Case 2: . Using a similar argument as in Case 1, there is a value of satisfying (7) if and only if the following system is feasible
•
Case 3: . Plugging this into (7) implies

(9)

Without loss of generality (otherwise, it becomes only easier to find a feasible value of ). Using similar calculations as in the previous cases, it can be shown (regardless of the sign of ) that there is for which (9) is satisfied if and only if the following system is feasible
•
Case 4: . Similar to Case 3, there is a value of satisfying (7) if and only if the following system is feasible
We now prove, via contradiction, that at least one of the systems (I1), (I2), (I3), or (I4) is feasible. Before doing so, we observe some logical relations between some of the inequalities in these systems. In the following, for an inequality (I) among (I1.1), …, (I4.2), we overload notation by writing (I) to mean that the inequality holds, and use ¬(I) to denote that the reverse inequality holds. The following observations are easy to verify:
L1:
(I1.2) ∨ (I2.2): indeed, suppose that this is not the case, that is,

(10) (11)

It follows that , which does not hold as .
L2:
Similarly, we have (I3.2) ∨ (I4.2): indeed, suppose that this is not the case. Then,
and
Collecting terms, we get
which is a contradiction as and .
L3:
() ∧ (I1.2) ⟹ (I4.1): indeed, the antecedent of this implication imposes that
giving , which satisfies (I4.1).
L4:
Similarly, (I2.2) ∧ () ⟹ (I2.1): indeed, the antecedent of the implication imposes that
implying that , and consequently giving , which satisfies (I2.1).
L5:
(I2.2) ⟹ (I4.2): indeed, suppose that this is not the case. Then we get a contradiction:
as , and .
Now, suppose that none of the systems (I1), (I2), (I3), and (I4) is feasible. By L1, w.l.o.g. assume that (I1.2) holds (the case when (I2.2) holds can be handled similarly by exchanging the roles of the indices $1$ and $2$). Then we get the following sequence of implications:
leading to the contradiction that the system (I4) is feasible. ❑
Theorem 3 paves the way for designing approximation algorithms for MaxSUB, based on solving (approximately or exactly) its relaxation. We are not aware of any algorithm for efficiently solving such a relaxation exactly. However, for a general convex constraint $x^{T}Wx\le b$ (where $W$ is a general non-negative PSD matrix), one can obtain a constant-factor approximation algorithm by taking advantage of the result of (Bian et al. 2017a).
Theorem 4 ((Bian et al. 2017a))
There is an iterative greedy algorithm for maximizing a (continuous) DR-submodular function $F$ subject to a down-closed convex set $\mathcal{P}$, such that it proceeds in $K$ steps (each taking polynomial time), and outputs a solution $x$ such that $F(x)\ge\beta\,F(x^{*})-O\big(\frac{LD^{2}}{K}\big)$ for a universal constant $\beta$, where $x^{*}$ is an optimal solution, $L$ is a constant such that $\|\nabla F(x)-\nabla F(y)\|_{2}\le L\|x-y\|_{2}$ for all $x,y\in\mathcal{P}$, and $D$ is the diameter of $\mathcal{P}$.
Here, a convex set $\mathcal{P}\subseteq\mathbb{R}^{n}_{+}$ is said to be down-closed if for any $x\in\mathcal{P}$ and $y\in\mathbb{R}^{n}_{+}$, $y\le x$ implies that $y\in\mathcal{P}$. Also, a twice-differentiable function $F$ is DR-submodular if and only if $\frac{\partial^{2}F(x)}{\partial x_{i}\partial x_{j}}\le 0$ for all $i,j$ and all $x$ in the domain (see (Bian et al. 2017b) for more details). When applied to our problem, the above result gives the following.
Corollary 1
There is a constant-factor approximation algorithm for maximizing $f(x)=c^{T}x+x^{T}\widetilde{H}x$ under the constraints $x^{T}Wx\le b$ and $x\in[0,1]^{n}$, when $c\ge\mathbf{0}$, $\widetilde{H}$ has non-positive entries, and $W$ is a non-negative PSD matrix.
Proof. Clearly, the convex set defined by $x^{T}Wx\le b$ and $x\in[0,1]^{n}$, with a non-negative PSD matrix $W$ and a positive number $b$, is down-closed. We now derive upper bounds on the values of the parameters $L$ and $D$ defined in Theorem 4. The gradient of $f$ can be computed as $\nabla f(x)=c+2\widetilde{H}x$, and thus

$$\|\nabla f(x)-\nabla f(y)\|_{2}=2\|\widetilde{H}(x-y)\|_{2}\ \le\ 2\|\widetilde{H}\|_{F}\,\|x-y\|_{2}\ \le\ 2n\,h_{\max}\,\|x-y\|_{2},$$

where $\|\widetilde{H}\|_{F}$ denotes the Frobenius norm of $\widetilde{H}$, $h_{\max}$ is the maximum absolute value of the entries of $\widetilde{H}$, and the first inequality follows from the Cauchy-Schwarz inequality. This gives $L\le 2n\,h_{\max}$. On the other hand, in our case, $D$ is upper bounded by the maximum length of any feasible point in the hypercube $[0,1]^{n}$, which equals $\sqrt{n}$. To bound the additive error relative to the optimum, note that suitably scaled unit vectors are feasible (by the non-negativity assumption on $W$), and, by the non-negativity of the objective function, the optimum is bounded away from zero in terms of the input. Therefore, by plugging a sufficiently large polynomial number of steps $K$ into Theorem 4, the additive error becomes a negligible fraction of the optimum, and one gets a constant approximation factor as desired. ❑
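The gradient formula used in the proof can be sanity-checked numerically (a small sketch with illustrative values; for symmetric $\widetilde{H}$ the gradient of $c^{T}x+x^{T}\widetilde{H}x$ is $c+2\widetilde{H}x$):

```python
# Finite-difference check of the gradient grad f(x) = c + 2*H*x for
# f(x) = c.x + x^T H x with symmetric H (values are illustrative).
def grad_f(c, H, x):
    n = len(c)
    return [c[i] + 2 * sum(H[i][j] * x[j] for j in range(n)) for i in range(n)]

def fd_grad(c, H, x, h=1e-6):
    n = len(c)
    def f(y):
        return (sum(c[i] * y[i] for i in range(n))
                + sum(H[i][j] * y[i] * y[j] for i in range(n) for j in range(n)))
    g = []
    for i in range(n):
        up = list(x); up[i] += h    # central difference in coordinate i
        dn = list(x); dn[i] -= h
        g.append((f(up) - f(dn)) / (2 * h))
    return g

c, H, x = [3.0, 2.0], [[0.0, -1.0], [-1.0, 0.0]], [0.5, 0.25]
print(grad_f(c, H, x))     # [2.5, 1.0]
```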
Now, by applying Corollary 1 and Theorem 3, one can obtain the following result for MaxSUB. Note that the objective function can be written as $f(x)=c^{T}x+x^{T}\widetilde{H}x$ for binary variables, which exhibits DR-submodularity over $[0,1]^{n}$, as $\widetilde{H}$ has non-positive entries.
Theorem 5
There is a constant-factor approximation algorithm for MaxSUB.
Conclusion
We have presented novel approximation algorithms for two power scheduling problems with sum-of-squares constraints, one of which significantly improves upon the previously known results. Our work also leads to several open problems for future work. Perhaps the most interesting question is whether our PTAS for MinCKP can be extended to the case when the constraint involves any constant number of squares. One can observe that our approach to solving the natural (non-convex) relaxation cannot be extended to this setting, and thus, the development of new techniques may be required. For the submodular maximization problem MaxSUB, it would be interesting to investigate the following two questions: i) Is there a better approximation algorithm for solving the relaxation? and ii) Is it possible to devise a pipage rounding procedure that works for more general (differentiable) DR-submodular functions? For the first question, one idea is to utilize the special structure of the sum-of-squares constraints, for which semi-definite programming based methods might be applicable. Finally, extending all the results obtained in this paper to a setting with more than one constraint would be another promising direction for research.
References
- Bansal, Kimbrel, and Pruhs (2004) Bansal, N.; Kimbrel, T.; and Pruhs, K. 2004. Dynamic Speed Scaling to Manage Energy and Temperature. In 45th Symposium on Foundations of Computer Science (FOCS 2004), 17-19 October 2004, Rome, Italy, Proceedings, 520–529. IEEE Computer Society.
- Bian et al. (2017a) Bian, A.; Levy, K. Y.; Krause, A.; and Buhmann, J. M. 2017a. Non-monotone Continuous DR-submodular Maximization: Structure and Algorithms. In Guyon, I.; von Luxburg, U.; Bengio, S.; Wallach, H. M.; Fergus, R.; Vishwanathan, S. V. N.; and Garnett, R., eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, Long Beach, CA, USA, 486–496.
- Bian et al. (2017b) Bian, A. A.; Mirzasoleiman, B.; Buhmann, J. M.; and Krause, A. 2017b. Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains. In Singh, A.; and Zhu, X. J., eds., Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, Fort Lauderdale, FL, USA, volume 54 of Proceedings of Machine Learning Research, 111–120. PMLR.
- Chau, Elbassioni, and Khonji (2016) Chau, C.; Elbassioni, K. M.; and Khonji, M. 2016. Truthful Mechanisms for Combinatorial Allocation of Electric Power in Alternating Current Electric Systems for Smart Grid. ACM Trans. Economics and Comput., 5(1): 7:1–7:29.
- Elbassioni, Karapetyan, and Nguyen (2019) Elbassioni, K. M.; Karapetyan, A.; and Nguyen, T. T. 2019. Approximation schemes for r-weighted Minimization Knapsack problems. Ann. Oper. Res., 279(1-2): 367–386.
- Elbassioni and Nguyen (2017) Elbassioni, K. M.; and Nguyen, T. T. 2017. Approximation algorithms for binary packing problems with quadratic constraints of low cp-rank decompositions. Discret. Appl. Math., 230: 56–70.
- Frieze and Clarke (1984) Frieze, A.; and Clarke, M. 1984. Approximation Algorithm for the m-Dimensional 0-1 Knapsack Problem. European Journal of Operational Research, 15: 100–109.
- Grainger and Stevenson (1994) Grainger, J.; and Stevenson, W. 1994. Power System Analysis. McGraw-Hill.
- Karapetyan et al. (2018) Karapetyan, A.; Khonji, M.; Chau, C.; Elbassioni, K. M.; and Zeineldin, H. H. 2018. Efficient Algorithm for Scalable Event-Based Demand Response Management in Microgrids. IEEE Trans. Smart Grid, 9(4): 2714–2725.
- Khonji et al. (2019) Khonji, M.; Karapetyan, A.; Elbassioni, K. M.; and Chau, S. C. 2019. Complex-demand scheduling problem with application in smart grid. Theor. Comput. Sci., 761: 34–50.
- Klimm et al. (2021) Klimm, M.; Pfetsch, M.; Raber, R.; and Skutella, M. 2021. Approximation of Binary Second Order Cone Programs of Packing Type.
- Klimm et al. (2022) Klimm, M.; Pfetsch, M. E.; Raber, R.; and Skutella, M. 2022. Packing under convex quadratic constraints. Math. Program., 192(1): 361–386.
- Weymouth (1912) Weymouth, T. 1912. Problems in natural gas engineering. Trans. Am. Soc. Mech. Eng., 34: 185–231.
- Wierman, Andrew, and Tang (2012) Wierman, A.; Andrew, L. L. H.; and Tang, A. 2012. Power-Aware Speed Scaling in Processor Sharing Systems: Optimality and Robustness. Perform. Eval., 69(12): 601–622.
- Woeginger (2000) Woeginger, G. J. 2000. When Does a Dynamic Programming Formulation Guarantee the Existence of a Fully Polynomial Time Approximation Scheme (FPTAS)? INFORMS J. Comput., 12(1): 57–74.
- Wood and Wollenberg (2012) Wood, A. J.; and Wollenberg, B. F. 2012. Power generation, operation, and control. Wiley.
- Yu and Chau (2013) Yu, L.; and Chau, C. 2013. Complex-demand knapsack problems and incentives in AC power systems. In Gini, M. L.; Shehory, O.; Ito, T.; and Jonker, C. M., eds., International conference on Autonomous Agents and Multi-Agent Systems, AAMAS ’13, Saint Paul, MN, USA, May 6-10, 2013, 973–980. IFAAMAS.