Experimental Design Networks:
A Paradigm for Serving Heterogeneous Learners under Networking Constraints
* Y. Liu and Y. Li contributed equally to the paper.
Abstract
Significant advances in edge computing capabilities enable learning to occur at geographically diverse locations. In general, the training data needed for those learning tasks are not only heterogeneous but also not fully generated locally. In this paper, we propose an experimental design network paradigm, wherein learner nodes train possibly different Bayesian linear regression models by consuming data streams generated by data source nodes over a network. We formulate this problem as a social welfare optimization problem in which the global objective is defined as the sum of experimental design objectives of individual learners, and the decision variables are the data transmission strategies subject to network constraints. We first show that, assuming Poisson data streams, the global objective is a continuous DR-submodular function. We then propose a Frank-Wolfe type algorithm that outputs a solution within a $1-1/e$ factor of the optimal. Our algorithm contains a novel gradient estimation component which is carefully designed based on Poisson tail bounds and sampling. Finally, we complement our theoretical findings with extensive experiments. Our numerical evaluation shows that the proposed algorithm outperforms several baseline algorithms both in maximizing the global objective and in the quality of the trained models.
I Introduction
We study a network in which heterogeneous learners dispersed at different locations perform local learning tasks by fetching relevant yet remote data. Concretely, data sources generate data streams containing both features and labels/responses, which are transmitted over the network (potentially through several intermediate router nodes) towards learner nodes. Generated data samples are used by learners to train models locally. We are interested in the design of rate allocation strategies that maximize the model training quality of learner nodes, subject to network constraints. This problem is relevant in practice. For example, in a mobile edge computing network [1, 2], data are generated by end devices such as mobile phones (data sources) and sent to edge servers (learners) for model training, a relatively intensive computation. In a smart city [3, 4], we can collect various types of data such as image, temperature, humidity, traffic, and seismic measurements, from different sensors. These data could be used to forecast transportation traffic, the spread of disease, pollution levels, the weather, and so on, while training for each task could happen at different public service entities.
We quantify the impact that data samples have on learner model training accuracy by leveraging objectives motivated by experimental design [5], a classic problem in statistics and machine learning. This problem arises in many machine learning and data mining settings, including recommender systems [6], active learning [7], and data preparation [8], to name a few. In standard experimental design, a learner decides on which experiments to conduct so that, under budget constraints, an objective modeling prediction accuracy is maximized. Learner objectives are usually scalarizations of the estimation error covariance.
In this paper, we propose experimental design networks, a novel optimization framework that extends classic experimental design problems to maximize the sum of experimental design objectives across networked learners. Assuming Poisson data streams and Bayesian linear regression as the learning task, we define the utility of a learner as the expectation of its so-called D-optimal design objective [5], namely, the log-determinant of the inverse of the learner's estimation error covariance matrix. Our goal is to determine the data rate allocation on each network edge that maximizes the aggregate utility across learners. Extending experimental design to networked learners is non-trivial. Literature on experimental design for machine learning considers budgets imposed on the number of data samples used to train the model [9, 10, 11, 12, 13]. Instead, we consider far more complex constraints on the data transmission rates across the network, as determined by network link capacities, the network topology, and data generation rates at sources.
To the best of our knowledge, we are the first to study such a networked learning problem, wherein learning tasks at heterogeneous learners are coupled via data transmission constraints over an arbitrary network topology. Our detailed contributions are as follows:
• We are the first to introduce and formalize the experimental design network problem, which enables the study of multi-hop data transmission strategies for distributed learning over arbitrary network topologies.
• We prove that, assuming Poisson data streams, Bayesian linear regression as the learning task, and D-optimal design objectives at the learners, our framework leads to the maximization of a continuous DR-submodular objective subject to a lower-bounded convex constraint set.
• Though the objective is not concave, we propose a polynomial-time algorithm based on a variant of the Frank-Wolfe algorithm [14]. To do so, we introduce and analyze a novel gradient estimation procedure, tailored to Poisson data streams. We show that the proposed algorithm, coupled with our novel gradient estimation, is guaranteed to produce a solution within a $1-1/e$ approximation factor from the optimal.
• We conduct extensive evaluations over different network topologies, showing that our proposed algorithm outperforms several baselines both in maximizing the objective function and in the quality of the trained target models.
The rest of this paper is organized as follows. In Sections II and III, we review related work and provide technical preliminaries. Section IV introduces our framework of experimental design networks. Section V describes our proposed algorithm. We discuss extensions in Section VI and present numerical experiments in Section VII. We conclude in Section VIII.
II Related Work
Distributed Computing/Learning in Networks. Distribution of computation tasks has been studied in hierarchical edge cloud networks [15], multi-cell mobile networks [16], and joint with data caching in arbitrary networks [17]. There is a rich literature on distributing machine learning computations over networks, including exchanging gradients in federated learning [18, 19, 20], states in reinforcement learning [21], and data vs. model offloading [22] among collaborating neighbor nodes. We depart from the aforementioned works in (a) considering multiple learners with distinct learning tasks, (b) introducing experimental design objectives, quite different from objectives considered above, (c) studying a multi-hop network, and (d) focusing on the optimization of streaming data movements, as opposed to gradients or intermediate result computations.
Experimental Design. The experimental design problem is classic and well-studied [5, 23]. Several works study the D-optimality objective [9, 10, 11, 12, 24] for a single learner subject to budget constraints on the cost for conducting the experiments. Departing from previous work, we study a problem involving multiple learners subject to more complex constraints, induced by the network. Our problem also falls in the continuous DR-submodular setting, departing from the discrete setting in prior work. In fact, our work is the first to show that such an optimization, with Poisson data streams, can be solved via continuous DR-submodularity techniques.
DR-submodular Optimization. Submodularity is traditionally studied in the context of set functions [25, 26], but was recently extended to functions over the integer lattice [27] and the continuous domain [14]. Despite the non-convexity and the general NP-hardness of the problem, when the constraint set is down-closed and convex, maximizing monotone continuous DR-submodular functions can be done in polynomial time via a variant of the Frank-Wolfe algorithm. This yields a solution within a $1-1/e$ factor of the optimal [14, 26], outperforming the projected gradient ascent method, which provides a $1/2$ approximation guarantee over arbitrary convex constraints [28].
The continuous greedy algorithm [26] maximizes a submodular set function subject to matroid constraints: it first applies the aforementioned Frank-Wolfe variant to the so-called multilinear relaxation of the discrete submodular function, and subsequently uses rounding [29, 30]. The multilinear relaxation of a submodular function is in fact a canonical example of a continuous DR-submodular function, whose optimization comes with the aforementioned guarantees. Our objective function results from a new continuous relaxation, which we introduce in this paper for the first time. In particular, we show that assuming a Poisson distribution on the inputs of the (integer-lattice) DR-submodular function of D-optimal design yields a continuous DR-submodular function. This “Poisson” relaxation is directly motivated by our networking problem, is distinct from the multilinear relaxation [31, 26, 28], and requires a novel gradient estimation procedure. Our constraint set also requires special treatment as it is not down-closed, as required by the aforementioned Frank-Wolfe variant [14, 26]; nevertheless, we attain a $1-1/e$ approximation, improving upon the $1/2$ factor of projected gradient ascent [28].
Submodularity in Networking and Learning. Submodular functions are widely encountered in studies of both networking and machine learning. Submodular objectives appear in studies of network caching [32, 33], routing [34], rate allocation [35], sensor network design [36], as well as placement of virtual network functions [37]. Submodular utilities are used for data collection in sensor networks [38] and also the design of incentive mechanisms for mobile phone sensing [39]. Many machine learning problems are submodular, including structure learning, clustering, feature selection, and active learning (see e.g., [40]). Our proposed experimental design network paradigm expands this list in a novel way.
III Technical Preliminary
We begin with a technical preliminary on linear regression, experimental design, and DR-submodularity. The contents of this section are classic; for additional details, we refer the interested reader to, e.g., [41, 42] for linear regression, [5] for experimental design, and [14] for submodularity.
III-A Bayesian Linear Regression
In the standard linear regression setting, a learner observes $n$ samples $(\mathbf{x}_i, y_i)$, $i = 1, \dots, n$, where $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$ are the feature vector and label of sample $i$, respectively. Labels are assumed to be linearly related to the features; in particular, there exists a model parameter vector $\boldsymbol{\beta} \in \mathbb{R}^d$ such that

$$y_i = \boldsymbol{\beta}^\top \mathbf{x}_i + \epsilon_i, \quad i = 1, \dots, n, \qquad (1)$$

where the $\epsilon_i$ are i.i.d. zero-mean normal noise variables with variance $\sigma^2$ (i.e., $\epsilon_i \sim N(0, \sigma^2)$).
The learner’s goal is to estimate the model parameter $\boldsymbol{\beta}$ from the samples $\{(\mathbf{x}_i, y_i)\}_{i=1}^n$. In Bayesian linear regression, it is additionally assumed that $\boldsymbol{\beta}$ is sampled from a prior normal distribution with mean $\boldsymbol{\beta}_0$ and covariance $\boldsymbol{\Sigma}_0$ (i.e., $\boldsymbol{\beta} \sim N(\boldsymbol{\beta}_0, \boldsymbol{\Sigma}_0)$). Under this prior, given the dataset, maximum a posteriori (MAP) estimation of $\boldsymbol{\beta}$ amounts to [42]:

$$\hat{\boldsymbol{\beta}} = \left(\mathbf{X}^\top \mathbf{X} + \sigma^2 \boldsymbol{\Sigma}_0^{-1}\right)^{-1} \left(\mathbf{X}^\top \mathbf{y} + \sigma^2 \boldsymbol{\Sigma}_0^{-1} \boldsymbol{\beta}_0\right), \qquad (2)$$

where $\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_n]^\top \in \mathbb{R}^{n \times d}$ is the matrix of features, $\mathbf{y} = [y_1, \dots, y_n]^\top$ is the vector of labels, $\sigma^2$ is the noise variance, and $\boldsymbol{\beta}_0$, $\boldsymbol{\Sigma}_0$ are the mean and covariance of the prior, respectively. We note that, in practice, the inherent noise variance $\sigma^2$ is often not known, and is typically treated as a regularization parameter and determined via cross-validation.
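To make the closed-form MAP estimator concrete, the NumPy sketch below computes it on synthetic data. This is an illustrative toy consistent with the model above, not the authors' implementation; all function and variable names are our own.

```python
import numpy as np

def map_estimate(X, y, sigma2, beta0, Sigma0):
    """MAP estimate of Eq. (2) for Bayesian linear regression with prior
    N(beta0, Sigma0) and i.i.d. Gaussian label noise of variance sigma2."""
    Sigma0_inv = np.linalg.inv(Sigma0)
    A = X.T @ X + sigma2 * Sigma0_inv
    b = X.T @ y + sigma2 * Sigma0_inv @ beta0
    return np.linalg.solve(A, b)    # solve instead of forming an explicit inverse

# Toy usage: recover a 2-d model from noisy samples.
rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0])
X = rng.normal(size=(500, 2))
y = X @ beta_true + rng.normal(scale=0.1, size=500)
beta_hat = map_estimate(X, y, sigma2=0.01, beta0=np.zeros(2), Sigma0=np.eye(2))
```

With many samples, the prior term is dominated by the data term and the estimate approaches ordinary least squares.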
The quality of this estimator can be characterized by the covariance of the estimation error $\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}$ (see, e.g., Eq. (10.55) in [42]):

$$\mathrm{cov}(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}) = \sigma^2 \left(\mathbf{X}^\top \mathbf{X} + \sigma^2 \boldsymbol{\Sigma}_0^{-1}\right)^{-1}. \qquad (3)$$

The covariance summarizes estimator quality in all directions in $\mathbb{R}^d$: given an unseen sample $(\mathbf{x}, y)$, also obeying (1), the expected prediction error (EPE) is given by

$$\mathbb{E}\left[\big(y - \hat{\boldsymbol{\beta}}^\top \mathbf{x}\big)^2\right] = \sigma^2 + \mathbf{x}^\top \mathrm{cov}(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})\, \mathbf{x}. \qquad (4)$$

Hence, the eigenvalues of the covariance in Eq. (3) capture the overall variability of the expected prediction error in different directions.
III-B Experimental Design
In experimental design, a learner determines which experiments to conduct to learn the most accurate linear model. Formally, given $m$ possible experiment settings, each described by a feature vector $\mathbf{x}_i \in \mathbb{R}^d$, $i = 1, \dots, m$, the learner selects a total of $b$ experiments to conduct with these feature vectors, possibly with repetitions (due to the presence of noise in labels, repeating the same experiment makes intuitive sense; formally, repetition of an experiment with features $\mathbf{x}_i$ reduces the EPE (4) in this direction), collects the associated labels, and then performs linear regression on these sample pairs. In classic experimental design (see, e.g., [5]), the selection is formulated as an optimization problem minimizing a scalarization of the covariance (3). For example, in D-optimal design, the vector $\mathbf{n} = [n_i]_{i=1}^m$ of the number of times each experiment is to be performed is determined by minimizing $\log\det\big(\mathrm{cov}(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})\big)$, or, equivalently, by solving the maximization problem:

$$\text{Max.:}\quad G(\mathbf{n}) = \log\det\Big(\sum_{i=1}^m \tfrac{n_i}{\sigma^2}\, \mathbf{x}_i \mathbf{x}_i^\top + \boldsymbol{\Sigma}_0^{-1}\Big) \qquad (5a)$$
$$\text{s.t.:}\quad \sum_{i=1}^m n_i = b, \quad n_i \in \mathbb{N}. \qquad (5b)$$

In other words, $\mathbf{n}$ is selected so that the determinant of the error covariance (3) is as small as possible. Intuitively, this amounts to selecting the experiments that minimize the product of the eigenvalues of the covariance (other commonly encountered scalarizations [5] behave similarly: e.g., E-optimality minimizes the maximum eigenvalue, while A-optimality minimizes the sum of the eigenvalues); alternatively, Prob. (5) also maximizes the mutual information between the labels (to be collected) and $\boldsymbol{\beta}$. In both interpretations, the selection aims to pick experiments in a way that minimizes the variability of the resulting estimator $\hat{\boldsymbol{\beta}}$.
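The D-optimal objective and a simple budgeted selection can be sketched as below. This is an illustrative toy: the greedy heuristic and all names are our own, not the paper's algorithm.

```python
import numpy as np

def d_optimal_objective(n, X, sigma2, Sigma0_inv):
    """Objective (5a): log det( sum_i (n_i / sigma2) x_i x_i^T + Sigma0^{-1} )."""
    M = Sigma0_inv + (X.T * (n / sigma2)) @ X   # scale column i of X^T by n_i/sigma2
    return np.linalg.slogdet(M)[1]

def greedy_design(X, budget, sigma2, Sigma0_inv):
    """Allocate `budget` experiment runs (constraint (5b)) greedily by marginal gain."""
    m = X.shape[0]
    n = np.zeros(m)
    for _ in range(budget):
        gains = [d_optimal_objective(n + np.eye(m)[i], X, sigma2, Sigma0_inv)
                 for i in range(m)]
        n[int(np.argmax(gains))] += 1
    return n

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # three candidate experiments
n = greedy_design(X, budget=4, sigma2=1.0, Sigma0_inv=np.eye(2))
```

Greedy selection is a natural heuristic here precisely because of the diminishing-returns structure discussed next.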
III-C DR-Submodularity
We introduce here diminishing-returns submodularity:
Definition 1 (DR-Submodularity [14, 27]).
A function $f : \mathbb{Z}_{\ge 0}^m \to \mathbb{R}$ is called diminishing-returns (DR) submodular iff for all $\mathbf{x}, \mathbf{y} \in \mathbb{Z}_{\ge 0}^m$ such that $\mathbf{x} \le \mathbf{y}$ and all $i \in \{1, \dots, m\}$,

$$f(\mathbf{x} + \mathbf{e}_i) - f(\mathbf{x}) \ge f(\mathbf{y} + \mathbf{e}_i) - f(\mathbf{y}), \qquad (6)$$

where $\mathbf{e}_i$ is the $i$-th standard basis vector.

Moreover, if (6) holds for a real-valued function $f : \mathbb{R}_{\ge 0}^m \to \mathbb{R}$ for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}_{\ge 0}^m$ such that $\mathbf{x} \le \mathbf{y}$ and all steps $\gamma \mathbf{e}_i$ with $\gamma > 0$ (in place of $\mathbf{e}_i$), the function is called continuous DR-submodular.
The above definition generalizes the submodularity of set functions (whose domain is $\{0, 1\}^m$) to functions over the integer lattice (in the case of DR-submodularity), and to continuous functions (in the case of continuous DR-submodularity). Particularly for continuous functions, if $f$ is differentiable, continuous DR-submodularity is equivalent to the gradient $\nabla f$ being antitone. Moreover, if $f$ is twice-differentiable, $f$ is continuous DR-submodular if all entries of its Hessian are non-positive. DR-submodularity is directly pertinent to D-optimal design:
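As a quick numerical illustration of Definition 1, the sketch below checks the diminishing-returns inequality (6) for the D-optimal design objective (5a) at a few lattice points. The setup (random features, identity prior precision) is an arbitrary toy of our own.

```python
import numpy as np

def g(n, X, Sigma0_inv, sigma2=1.0):
    # D-optimal design objective of Eq. (5a).
    return np.linalg.slogdet(Sigma0_inv + (X.T * (n / sigma2)) @ X)[1]

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))      # 4 candidate experiments in R^3
S0inv = np.eye(3)

x = np.array([1.0, 0.0, 2.0, 1.0])
y = x + np.array([1.0, 2.0, 0.0, 3.0])   # y >= x coordinate-wise

# Inequality (6): the marginal gain of one more run of experiment i
# is no larger at the larger point y than at x.
for i in range(4):
    e = np.eye(4)[i]
    gain_x = g(x + e, X, S0inv) - g(x, X, S0inv)
    gain_y = g(y + e, X, S0inv) - g(y, X, S0inv)
    assert gain_x >= gain_y - 1e-9
```

The check passes because the mixed second derivatives of a log-determinant of a PSD-affine expression are non-positive, which is exactly the Hessian condition above.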
IV Problem Formulation
We consider a network that aims to facilitate a distributed learning task. The network comprises (a) data source nodes (e.g., sensors, test sites, experimental facilities, etc.) that generate streams of data, (b) learner nodes that consume data with the purpose of training models, and (c) intermediate nodes (e.g., routers) that facilitate the communication of data from sources to learners. The data that learners wish to consume is determined by experimental design objectives, akin to the ones described in Sec. III-B. Our goal is to design network communications in an optimal fashion that maximizes learner social welfare. We describe each of the above system components in more detail below.
IV-A Network Model.
We model the above system as a general multi-hop network with a topology represented by a directed acyclic graph (DAG) $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of links. Each link $e \in E$ can transmit data at a maximum rate (i.e., link capacity) $\mu_e \ge 0$. Sources of the DAG (i.e., nodes with no incoming edges) generate data streams, while learners reside at DAG sinks (nodes with no outgoing edges). We assume this for simplicity; we discuss how to remove this assumption, and how to generalize our analysis beyond DAGs, in Sec. VI.
Data Sources. Each data source $s \in \mathcal{S} \subset V$ generates a stream of labeled data. In particular, we assume that there exists a finite set of experiments $\mathcal{X} \subset \mathbb{R}^d$ that every source can conduct (we extend this to a setting where experiments are infinite in Sec. VI). Once an experiment with features $\mathbf{x} \in \mathcal{X}$ is conducted, the source can label it with a label of type $t$ out of a set of possible types $\mathcal{T}$. Intuitively, features correspond to parameters set in an experiment (e.g., pixel values in an image, etc.), label types correspond to possible measurements (e.g., temperature, radiation level, etc.), and labels correspond to the actual measurement value collected (e.g., the measured temperature).
We assume that every source $s \in \mathcal{S}$ generates labeled pairs $(\mathbf{x}, y)$ of type $t$ according to a Poisson process of rate $\lambda_s(\mathbf{x}, t) \ge 0$. Moreover, we assume that generated data follow the linear model (1); that is, for every type $t \in \mathcal{T}$, there exists a $\boldsymbol{\beta}_t \in \mathbb{R}^d$ s.t. $y = \boldsymbol{\beta}_t^\top \mathbf{x} + \epsilon_{s,t}$, where the $\epsilon_{s,t}$ are i.i.d. zero-mean normal noise variables with variance $\sigma^2$, independent across experiments and sources $s \in \mathcal{S}$.
Learners. Each learner $\ell \in \mathcal{L} \subset V$ wishes to learn a model $\boldsymbol{\beta}_{t_\ell}$ for some type $t_\ell \in \mathcal{T}$. We assume that each learner has a normal prior on $\boldsymbol{\beta}_{t_\ell}$. The learner wishes to use the network to receive data pairs of type $t_\ell$, and subsequently estimate $\boldsymbol{\beta}_{t_\ell}$ through the MAP estimator (2). Note that two learners $\ell, \ell' \in \mathcal{L}$ may be interested in learning the same model (if $t_\ell = t_{\ell'}$).
Network Constraints. The different data pairs generated by sources are transmitted over edges in the network along with their types and eventually delivered to learners. Our network design aims at allocating network capacity to different flows to meet learner needs. (We assume hop-by-hop routing; see Sec. VI for an extension to source routing.) For each edge $e \in E$, we denote by $\lambda_e(\mathbf{x}, t) \ge 0$ the rate with which data pairs of type $t \in \mathcal{T}$ with features $\mathbf{x} \in \mathcal{X}$ are transmitted over $e$. We also denote by

$$\lambda_v^{\mathrm{in}}(\mathbf{x}, t) = \sum_{e \in \mathrm{in}(v)} \lambda_e(\mathbf{x}, t), \quad \text{for } v \in V, \qquad (7)$$

the corresponding incoming traffic to node $v$, and by

$$\lambda_v^{\mathrm{out}}(\mathbf{x}, t) = \sum_{e \in \mathrm{out}(v)} \lambda_e(\mathbf{x}, t), \quad \text{for } v \in V, \qquad (8)$$

the corresponding outgoing traffic from $v$, where $\mathrm{in}(v)$ and $\mathrm{out}(v)$ are the sets of incoming and outgoing edges of $v$, respectively. Specifically for learners, we denote by

$$\lambda_\ell(\mathbf{x}) = \lambda_\ell^{\mathrm{in}}(\mathbf{x}, t_\ell), \quad \text{for } \ell \in \mathcal{L}, \qquad (9)$$

the incoming traffic with features $\mathbf{x}$ of type $t_\ell$ at learner $\ell$. To satisfy capacity constraints, we must have

$$\sum_{\mathbf{x} \in \mathcal{X}} \sum_{t \in \mathcal{T}} \lambda_e(\mathbf{x}, t) \le \mu_e, \quad \text{for all } e \in E, \qquad (10)$$

while flow bounds imply that

$$\lambda_v^{\mathrm{out}}(\mathbf{x}, t) \le \lambda_v^{\mathrm{in}}(\mathbf{x}, t), \quad \text{for all } v \in V \setminus \mathcal{S}, \ \mathbf{x} \in \mathcal{X}, \ t \in \mathcal{T}, \qquad (11)$$

as data pairs can be dropped. We denote by

$$\boldsymbol{\lambda} = \big[\, [\lambda_e(\mathbf{x}, t)]_{e \in E, \mathbf{x} \in \mathcal{X}, t \in \mathcal{T}},\ [\lambda_\ell(\mathbf{x})]_{\ell \in \mathcal{L}, \mathbf{x} \in \mathcal{X}} \,\big] \qquad (12)$$

the vector comprising edge and learner rates. Let

$$\mathcal{D} = \{\boldsymbol{\lambda} \ge \mathbf{0} : \text{(9), (10), and (11) hold}\} \qquad (13)$$

be the feasible set of edge rates and learner rates. We make the following assumption on the network substrate:
Assumption 1.
For $\boldsymbol{\lambda} \in \mathcal{D}$, the system is stable and, in steady state, pairs of type $t_\ell$ with features $\mathbf{x} \in \mathcal{X}$ arrive at learner $\ell \in \mathcal{L}$ according to independent Poisson processes with rate $\lambda_\ell(\mathbf{x})$.
IV-B Networked Learning Problem
We consider a data acquisition time period $T$, at the end of which each learner $\ell \in \mathcal{L}$ estimates $\boldsymbol{\beta}_{t_\ell}$ based on the data it has received during this period via MAP estimation. Under Assumption 1, the arrivals of pertinent data pairs at learner $\ell$ form a Poisson process with rate $\lambda_\ell(\mathbf{x})$. Let $N_\ell(\mathbf{x})$ be the cumulative number of times that a pair of type $(\mathbf{x}, t_\ell)$ was collected by learner $\ell$ during this period, and $\mathbf{N}_\ell = [N_\ell(\mathbf{x})]_{\mathbf{x} \in \mathcal{X}}$ the vector of arrivals across all experiments. Then,

$$P\big(N_\ell(\mathbf{x}) = n\big) = \frac{(\lambda_\ell(\mathbf{x}) T)^n}{n!} e^{-\lambda_\ell(\mathbf{x}) T}, \qquad (14)$$

for all $n \in \mathbb{N}$ and $\mathbf{x} \in \mathcal{X}$. Motivated by standard experimental design (see Sec. III-B), we define the utility at learner $\ell$ as the following expectation:

$$U_\ell(\boldsymbol{\lambda}_\ell) = \mathbb{E}_{\mathbf{N}_\ell}\big[G_\ell(\mathbf{N}_\ell)\big], \qquad (15)$$

where $\boldsymbol{\lambda}_\ell = [\lambda_\ell(\mathbf{x})]_{\mathbf{x} \in \mathcal{X}}$ and $G_\ell$ is given by (5a). We wish to solve the following problem:

$$\text{Maximize:}\quad U(\boldsymbol{\lambda}) = \sum_{\ell \in \mathcal{L}} U_\ell(\boldsymbol{\lambda}_\ell) \qquad (16a)$$
$$\text{s.t.}\quad \boldsymbol{\lambda} \in \mathcal{D}. \qquad (16b)$$
Indexing flows by both type $t$ and features $\mathbf{x}$ implies that, to implement a solution $\boldsymbol{\lambda} \in \mathcal{D}$, routing decisions at intermediate nodes should be based on both quantities. Problem (16) is non-convex in general: it is easy to construct small instances of objective (16a) whose Hessian matrix is not negative semi-definite. Nevertheless, we construct a polynomial-time approximation algorithm in the next section.
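Since the utility (15) is an expectation of the D-optimal objective over Poisson arrival counts with no closed form, it can be estimated by Monte Carlo sampling, as also done in our experiments (Sec. VII). Below is a minimal sketch for a single learner; the names and parameter choices are illustrative, not the paper's code.

```python
import numpy as np

def utility_estimate(rates, X, Sigma0_inv, T=1.0, sigma2=1.0, n_samples=200, seed=0):
    """Monte Carlo estimate of Eq. (15) for one learner:
    E[ log det( Sigma0^{-1} + sum_i (N_i / sigma2) x_i x_i^T ) ],
    where N_i ~ Poisson(T * rates_i) is the number of type-i pairs received."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_samples):
        N = rng.poisson(T * rates)                   # Poisson arrivals, Eq. (14)
        M = Sigma0_inv + (X.T * (N / sigma2)) @ X    # posterior precision
        vals.append(np.linalg.slogdet(M)[1])
    return float(np.mean(vals))

# Higher delivery rates should yield higher expected utility.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
u_low = utility_estimate(np.array([1.0, 1.0]), X, np.eye(2))
u_high = utility_estimate(np.array([5.0, 5.0]), X, np.eye(2))
```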
V Main Results
Our main contribution is to show that there exists a polynomial-time randomized algorithm that solves Prob. (16) within a $1-1/e$ approximation ratio. We do so by establishing that the objective function in Eq. (16a) is continuous DR-submodular (see Definition 1).
V-A Continuous DR-submodularity
Our first main result establishes the continuous DR-submodularity of the objective (16a):
Theorem 1.
The proof can be found in Section V-C; there, we establish the positivity of the gradient and the non-positivity of the Hessian of $U$. We note that Theorem 1 identifies a new type of continuous relaxation of DR-submodular functions, via Poisson sampling; this is in contrast to the multilinear relaxation [31, 26, 28], which is ubiquitous in the literature and relies on Bernoulli sampling. Finally, though our objective is monotone and continuous DR-submodular, the constraint set is not down-closed. Hence, the analysis by Bian et al. [14] does not directly apply, while using projected gradient ascent [28] would only yield a $1/2$ approximation guarantee.
V-B Algorithm and Theoretical Guarantee
Our algorithm is summarized in Algorithm 1. We follow the Frank-Wolfe variant for monotone DR-submodular function maximization by Bian et al. [14], deviating both in the nature of the constraint set $\mathcal{D}$ and, most importantly, in the way we estimate the gradients of objective $U$.
Frank-Wolfe Variant. In the proposed Frank-Wolfe variant, variables $\mathbf{y}_k$ and $\mathbf{v}_k$ denote the solution and update direction at the $k$-th iteration, respectively. Starting from $\mathbf{y}_0 = \mathbf{0}$, the algorithm iterates as follows:

$$\mathbf{v}_k = \arg\max_{\mathbf{v} \in \mathcal{D}} \big\langle \mathbf{v},\, \widehat{\nabla U}(\mathbf{y}_k) \big\rangle, \qquad (18a)$$
$$\mathbf{y}_{k+1} = \mathbf{y}_k + \gamma_k \mathbf{v}_k, \qquad (18b)$$

where $\gamma_k$ is the stepsize with which we move along direction $\mathbf{v}_k$, and $\widehat{\nabla U}(\mathbf{y}_k)$ is an estimator of the gradient of $U$ at $\mathbf{y}_k$. The step size is set to a constant $\gamma$ for all but the last step, where it is selected so that the total sum of step sizes equals 1.
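The iteration (18) can be sketched as follows for a toy instance: the linear maximization step (18a) is an LP over the feasible polytope, solved here with SciPy, and the gradient oracle stands in for the estimator described below. The constraint set and objective are illustrative, not the paper's network polytope.

```python
import numpy as np
from scipy.optimize import linprog

def frank_wolfe_variant(grad_est, A_ub, b_ub, n_vars, K=50):
    """Frank-Wolfe variant of Eq. (18): starting from y = 0, take K steps of
    size 1/K toward the LP maximizer of the (estimated) gradient over the
    polytope {v >= 0 : A_ub v <= b_ub}."""
    y = np.zeros(n_vars)
    gamma = 1.0 / K
    for _ in range(K):
        g = grad_est(y)
        # Step (18a): linprog minimizes, so negate to maximize <v, g>.
        v = linprog(-g, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_vars).x
        # Step (18b): move along the chosen direction.
        y = y + gamma * v
    return y

# Toy monotone DR-submodular objective f(y) = sum_i log(1 + y_i),
# under a single capacity constraint y_1 + y_2 <= 1.
grad = lambda y: 1.0 / (1.0 + y)
y_star = frank_wolfe_variant(grad, A_ub=np.array([[1.0, 1.0]]),
                             b_ub=np.array([1.0]), n_vars=2)
```

On this symmetric toy instance, the iterates balance the two coordinates and the final point nearly saturates the capacity constraint.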
We note that we face two challenges preventing us from computing the gradient of $U$ directly via Eq. (17): (a) the gradient computation involves an infinite summation over arrival counts, and (b) the conditional expectations in Eq. (17) require further computing infinite sums. Using (17) directly in Algorithm 1 would thus not yield a polynomial-time algorithm. To that end, we replace the gradient used in the standard Frank-Wolfe method by an estimator, which we describe next.
Gradient Estimator. Our estimator addresses challenge (a) above by truncating the infinite sum, and (b) via sampling. In particular, for $\ell \in \mathcal{L}$ and $\mathbf{x} \in \mathcal{X}$, we estimate partial derivatives via the partial summation:

$$\widehat{\frac{\partial U}{\partial \lambda_\ell(\mathbf{x})}} = T \sum_{n=0}^{N_1} P\big(N_\ell(\mathbf{x}) = n\big)\, \widehat{\Delta}_\ell(\mathbf{x}, n), \qquad (19)$$

where estimate $\widehat{\Delta}_\ell(\mathbf{x}, n)$ is constructed via sampling as follows. At each iteration, we generate $N_2$ samples $\mathbf{N}_\ell^{(1)}, \dots, \mathbf{N}_\ell^{(N_2)}$ of the random vector $\mathbf{N}_\ell$ according to the Poisson distribution in Eq. (14), parameterized by the current solution vector $\mathbf{y}_k$. We then compute the empirical average:

$$\widehat{\Delta}_\ell(\mathbf{x}, n) = \frac{1}{N_2} \sum_{j=1}^{N_2} \Big[ G_\ell\big(\mathbf{N}_\ell^{(j)}\big|_{N_\ell(\mathbf{x}) = n + 1}\big) - G_\ell\big(\mathbf{N}_\ell^{(j)}\big|_{N_\ell(\mathbf{x}) = n}\big) \Big], \qquad (20)$$

where $\mathbf{N}_\ell^{(j)}\big|_{N_\ell(\mathbf{x}) = n}$ is equal to vector $\mathbf{N}_\ell^{(j)}$ with its $\mathbf{x}$-th coordinate set to $n$.
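For intuition, a simplified variant of such an estimator can be built from the Poisson identity $\partial\, \mathbb{E}[G(\mathbf{N})] / \partial \lambda_i = T\, \mathbb{E}[G(\mathbf{N} + \mathbf{e}_i) - G(\mathbf{N})]$, replacing the expectation with an empirical average. Unlike Eq. (19), this sketch skips the pmf truncation, and all names are our own.

```python
import numpy as np

def grad_estimate(lam, G, T=1.0, n_samples=300, seed=0):
    """Sampling estimator of the gradient of U(lam) = E[G(N)], where
    N_i ~ Poisson(T * lam_i) independently, using
    dU/dlam_i = T * E[G(N + e_i) - G(N)]."""
    rng = np.random.default_rng(seed)
    m = len(lam)
    acc = np.zeros(m)
    for _ in range(n_samples):
        N = rng.poisson(T * lam)
        base = G(N)
        for i in range(m):
            N[i] += 1               # evaluate with coordinate i bumped by one
            acc[i] += G(N) - base
            N[i] -= 1
    return T * acc / n_samples

# Sanity check on a separable objective with known gradient:
# G(N) = sum_i N_i  =>  U(lam) = T * sum_i lam_i  =>  dU/dlam_i = T = 1.
ghat = grad_estimate(np.array([2.0, 3.0]), G=lambda N: float(N.sum()))
```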
Theoretical Guarantee. Extending the analysis of [14], and using Theorem 1, we show that the Frank-Wolfe variant combined with gradients estimated “well enough” yields a solution within a constant approximation factor from the optimal:
Theorem 2.
The proof can be found in Section V-D. Theorem 2 implies that, through an appropriate (but polynomial) selection of the total number of iterations $K$, the number of summation terms $N_1$, and the number of samples $N_2$, we can obtain a solution that is within $1 - 1/e - \epsilon$ of the optimal, for any desired $\epsilon > 0$. The proof crucially relies on (and exploits) the continuous DR-submodularity of objective $U$, in combination with an analysis of the quality of our gradient estimator, given by Eq. (19).
V-C Proof of Theorem 1
By the law of total expectation, we have:
Notably, , for which the following is true:
where the last inequality is true because is monotone-increasing (Lemma 1).
Next, we compute the second partial derivatives . It is easy to see that for , we have
For and , it holds that , where
and the last equality follows from the DR-submodularity of (Lemma 1).
For and , it holds that ,
(24) |
where the last inequality follows from the DR-submodularity of (Lemma 1). ∎
V-D Proof of Theorem 2
Our proof relies on a series of key lemmas; we state them below. Full proofs of all lemmas can be found in the appendix. We begin by associating the approximation guarantee of Algorithm 1 to the quality of gradient estimation :
Lemma 2.
The proof, found in Appendix -B, relies on the continuous DR-submodularity of , and follows [14]; we deviate from their proof to handle the additional issue that is not downward closed (an assumption in [14]).
Next, we turn our attention to characterizing the quality of our gradient estimator. To that end, we use the following subexponential tail bound:
Lemma 3 (Theorem 1 in [44]).
Let $N \sim \mathrm{Poisson}(\lambda)$, for $\lambda > 0$. Then, for any $u > 0$, we have

$$P(N \ge \lambda + u) \le e^{-\lambda\, h(u/\lambda)}, \qquad (27)$$

where $h$ is the function defined by $h(u) = (1 + u)\ln(1 + u) - u$.
The expression for $h$ implies that the Poisson tail decays slightly faster than a standard exponential random variable (by a logarithmic factor). This lemma allows us to characterize the effect of truncating Eq. (17) on estimation quality. In particular, for $N_1 \in \mathbb{N}$, let the truncated partial derivative be:

$$\widetilde{\frac{\partial U}{\partial \lambda_\ell(\mathbf{x})}} = T \sum_{n=0}^{N_1} P\big(N_\ell(\mathbf{x}) = n\big)\, \Delta_\ell(\mathbf{x}, n). \qquad (28)$$

Then, this is guaranteed to be within a constant factor from the true partial derivative:

Lemma 4.

For $\ell \in \mathcal{L}$ and $\mathbf{x} \in \mathcal{X}$, we have:

(29)
The proof can be found in Appendix -C.
Next, by estimating $\Delta_\ell(\mathbf{x}, n)$ via sampling (see (20)), we construct our final estimator given by (19). Combining Lemma 4 with a Chernoff bound [45] that controls the sampling error, we can bound the quality of our gradient estimator:
Lemma 5.
The proof is in Appendix -D.
Theorem 2 follows by combining Lemmas 2 and 5. In particular, by Lemma 5 and a union bound, we have that (30) is satisfied for all iterations with probability greater than . This, combined with Lemma 2, implies that
is satisfied with the same probability. This implies that for any and ,
with probability . From Eq. (27), the probability is an increasing function w.r.t. , and is an upper bound for . Letting
we have
where the last line holds because when is large enough, e.g., . Thus, .
We determine and by setting
and
Therefore, , and .
VI Extensions
Our model can be extended in many ways (e.g., to multiple types per learner). We discuss three non-trivial extensions below.
Heterogeneous Noisy Sources. Our model and analysis directly generalize to a heterogeneous (or heteroskedastic) noise setting, in which the noise level varies across sources. Formally, labels of type $t$ at source $s$ are generated via $y = \boldsymbol{\beta}_t^\top \mathbf{x} + \epsilon_{s,t}$, where the $\epsilon_{s,t}$ are zero-mean normal noise variables with variance $\sigma_{s,t}^2$. In this case, the estimator in (2) needs to be replaced by Generalized Least Squares [46], whereby every pair $(\mathbf{x}, y)$ of type $t$ generated by $s$ is replaced by $(\mathbf{x}/\sigma_{s,t}, y/\sigma_{s,t})$ prior to applying Eq. (2). This, in turn, changes the D-optimality criterion objective, so that $\mathbf{x}\mathbf{x}^\top$ is replaced by $\mathbf{x}\mathbf{x}^\top / \sigma_{s,t}^2$ for vectors coming from source $s$. In other words, data coming from noisy sources are valued less by the learner. This rescaling preserves the monotonicity and continuous DR-submodularity of our overall objective, and our guarantees hold, mutatis mutandis.
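The whitening step underlying this Generalized Least Squares reduction can be sketched as follows; this is an assumption-level toy of our own, not the paper's code.

```python
import numpy as np

def gls_rescale(X, y, sigmas):
    """Whiten heteroskedastic samples: a pair (x_i, y_i) with noise std sigmas[i]
    becomes (x_i / sigmas[i], y_i / sigmas[i]), which has unit noise variance,
    so the homoskedastic MAP estimator can then be applied. Accordingly, x x^T
    in the D-optimality objective becomes x x^T / sigma_i^2: noisier samples
    contribute less information."""
    w = 1.0 / np.asarray(sigmas, dtype=float)
    return X * w[:, None], y * w

# Two samples with noise stds 2.0 and 1.0: the first is downweighted by 1/2.
Xw, yw = gls_rescale(np.array([[2.0, 0.0], [0.0, 2.0]]),
                     np.array([1.0, 1.0]), [2.0, 1.0])
```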
Uncountable . We assumed in our analysis that data features are generated from a finite set , and that transmission rates per edge are parametrized by both the type and the features of the data pair transmitted. This a priori prevents us from considering an infinite set of experiments : this would lead to an infinite set of constraints in Problem (16). In practice, it would also make routing intractable, as routing decisions depend on both and .
We can, however, extend our analysis to a setting where experiments are infinite, or even uncountable. To do so, we can consider rates per edge of the form $\lambda_e(s, t)$, i.e., parameterized by type $t$ and source $s$ rather than features $\mathbf{x}$. In practice, this would mean that packets would be routed based on the source and type, without inspecting the features of the enclosed pairs, while constraints would be finite (depending on $|\mathcal{S}|$ and $|\mathcal{T}|$, not $|\mathcal{X}|$). Data generation at a source can then be modeled via a compound Poisson process, at the epochs of which the features are sampled independently from some probability distribution over $\mathbb{R}^d$. The objective would then be written as an expectation over not only arrivals at a learner from a source (which will again be Poisson) but also the distribution of features. Sampling from the latter would need to be used when estimating gradients; as long as Chernoff-type bounds can be used to characterize the estimation quality of such sampling (which would be the case if, e.g., features are Gaussian), our analysis would still hold, taking also the number of sampled features into account.
Arbitrary (Non-DAG) Topology. For notational convenience, we assumed that graph was a DAG, with sources and sinks corresponding to sets and respectively. Our analysis further extends to more general (i.e., non-DAG) graphs, provided that extra care is taken for flow constraints to prevent cycles. This can be accomplished, e.g., via source routing. Given an arbitrary graph, and arbitrary locations for data sources and learners, we can extend our setting as follows: (a) flows from a source to a learner could follow source-path routing, over one or more directed paths linking the two, and (b) flows could be indexed by (and remain constant along) a path, in addition to and , while also ensuring that (c) aggregate flow across all paths that pass through an edge does not violate capacity constraints. Such a formulation still yields a linear set of constraints, and our analysis still holds. In fact, in this case, the corresponding set is downward closed, so the proof of the corresponding Theorem 2 follows more directly from [14].
VII Numerical Evaluation
Graph | $|V|$ | $|E|$ | $|\mathcal{X}|$ | $|\mathcal{T}|$ | $|\mathcal{L}|$ | $|\mathcal{S}|$ | $U_{\text{FW}}$
synthetic topologies | |||||||
Erdős-Rényi (ER) | 100 | 1042 | 20 | 5 | 10 | 5 | 309.95 |
balanced-tree (BT) | 341 | 680 | 20 | 5 | 10 | 5 | 196.68 |
hypercube (HC) | 128 | 896 | 20 | 5 | 10 | 5 | 297.69 |
star | 100 | 198 | 20 | 5 | 10 | 5 | 211.69 |
grid | 100 | 360 | 20 | 5 | 10 | 5 | 260.12 |
small-world (SW) [47] | 100 | 491 | 20 | 5 | 10 | 5 | 272.76 |
real backbone networks [48] | |||||||
GEANT | 22 | 66 | 20 | 3 | 3 | 3 | 214.30 |
Abilene | 9 | 26 | 20 | 3 | 3 | 3 | 216.88 |
Deutsche Telekom (Dtelekom) | 68 | 546 | 20 | 3 | 3 | 3 | 232.52
To evaluate the proposed algorithm, we perform simulations over a number of network topologies and with several different network parameter settings, summarized in Table I.
VII-A Experiment Setting
We consider a finite feature set $\mathcal{X}$ of randomly generated feature vectors, and a set $\mathcal{T}$ of different Bayesian linear regression models to be learned. Labels of each type are generated with Gaussian noise, whose variance is chosen uniformly at random (u.a.r.) from 0.5 to 1. For each network, we u.a.r. select learners and data sources, and remove incoming edges of sources and outgoing edges of learners. Each learner $\ell$ has a target model $\boldsymbol{\beta}_{t_\ell}$, with a diagonal prior covariance generated as follows. First, we separate features into two classes: well-known and poorly-known. Then, we set the corresponding prior covariance (i.e., the diagonal elements in $\boldsymbol{\Sigma}_0^\ell$) to low (uniformly from 0 to 0.01) and high (uniformly from 100 to 200) values, for well-known and poorly-known features, respectively. Each link capacity is selected u.a.r. from 50 to 100, and each source generates data of each label type with a rate uniformly distributed over [2, 5].
Algorithms. We compare our proposed Frank-Wolfe based algorithm (we denote it by FW) with several baseline data transmission strategies derived in different ways:
• MaxSum: This maximizes the aggregate total useful incoming traffic rate of the learners, i.e., the objective is $\sum_{\ell \in \mathcal{L}} \sum_{\mathbf{x} \in \mathcal{X}} \lambda_\ell(\mathbf{x})$.
• MaxAlpha: This maximizes the aggregate $\alpha$-fair utilities [49] of the total useful incoming traffic at the learners, i.e., the objective is $\sum_{\ell \in \mathcal{L}} u_\alpha\big(\sum_{\mathbf{x} \in \mathcal{X}} \lambda_\ell(\mathbf{x})\big)$, where $u_\alpha$ is an $\alpha$-fair utility function.
We also compare with another algorithm for the proposed experimental design networks:

• PGA: This is projected gradient ascent, applied to objective (16a) using the same gradient estimation as FW. Note that projected gradient ascent finds a solution within $1/2$ of the optimal if the true gradients are accessible [28]; its theoretical guarantee with estimated gradients is out of the scope of this work.
[Fig. 1: (a) Normalized aggregate utility and (b) average norm of estimation error, for all algorithms over the topologies of Table I.]
Simulation Parameters. We run FW and PGA for $K$ iterations with a constant step size. In each iteration, we estimate the gradient according to Eq. (19) with truncation parameter $N_1$ and $N_2$ samples, where the Poisson distribution is parameterized by the current solution. We consider a fixed data acquisition time $T$. Since our objective function cannot be computed in closed form, we rely on sampling to evaluate it. We also evaluate model training quality by the average norm of estimation error, where the estimation error is the difference between the true model and the MAP estimator given by (2). We average over 1000 realizations of the label noise as well as of the number of data samples arriving at each learner.
[Fig. 2: Aggregate utility (a, c) and average norm of estimation error (b, d) in the Abilene network, under scaled source rates (a, b) and downsized bottleneck link capacities (c, d).]
VII-B Results
Performance over Different Topologies. We first compare the proposed algorithm (FW) with the baselines in terms of normalized aggregate utility and model estimation quality over several networks, shown in Figures 1(a) and 1(b), respectively. Utilities in Fig. 1(a) are normalized by the aggregate utility of FW, reported in Tab. I under $U_{\text{FW}}$. Learners in these networks have distinct target models to train. In all network topologies, FW outperforms MaxSum and MaxAlpha in both aggregate utility and average norm of estimation error. PGA, which is also based on our experimental design framework, performs well (second best) in most networks, except the balanced tree, in which it finds a bad stationary point.
Effect of Source Rates and Link Capacities. Next, we evaluate how algorithm performance is affected by varying source rates and link capacities. We focus on the Abilene network, which has 3 sources and 3 learners, two of which share the same target training model. We set the data acquisition time to , and labels are generated with Gaussian noise of variance 1. Finally, we again use diagonal prior covariances, again split between high-variance (selected uniformly between 1 and 3) and low-variance (selected uniformly between 0 and 0.1) features.
Figures 2(a) and 2(b) plot the aggregate utility and the average total norm of estimation error across learners, for different data source rates. The initial source rates are sampled u.a.r. from 2 to 5, and we scale them by different factors. As the source rates increase, the aggregate utility increases and the norm of estimation error decreases for all algorithms. FW is always the best in both figures. Moreover, the proposed experimental design framework can significantly improve training quality: algorithms based on our framework (FW and PGA) with source rates scaled by already outperform the other two algorithms (MaxSum and MaxAlpha) with source rates scaled by . The relative order of MaxSum and MaxAlpha is reversed compared with Figure 1, suggesting that an algorithm that accounts for fairness (i.e., $\alpha$-fair utilities) may perform better when learners compete for data.
Figures 2(c) and 2(d) show performance in the Abilene network under different capacities of several bottleneck links. The capacities are initially sampled u.a.r. from 50 to 100, and we divide them by different downscaling factors. The overall trend is that, as link capacities decrease, all algorithms achieve a smaller aggregate utility and a higher average norm of estimation error. The proposed algorithm remains the best across all bottleneck link capacities in both figures.
VIII Conclusion
We propose experimental design networks, a paradigm for determining data transmission strategies that maximize the quality of trained models in a distributed learning system. Our code and data are publicly available at https://github.com/neu-spiral/Networked-Learning. The underlying optimization problem can be approximated even though its objective function is non-concave.
Beyond the extensions we have already listed, our framework can be used to explore other experimental design objectives (e.g., A-optimality and E-optimality), as well as variants that include data source generation costs. Distributed and adaptive implementations of the rate allocation schemes we proposed are also interesting future directions. Incorporating non-linear learning tasks (e.g., deep neural networks) is another open avenue: though Bayesian posteriors are harder to compute in closed form in this case, techniques such as variational inference [50] can be used to approach the problem. Finally, an interesting extension of our model involves a multi-stage setting, in which learners receive data in one stage, update their posteriors, and use these as new priors in the next stage. Studying the dynamics of such a system, and how network design impacts them, is an interesting open problem.
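The multi-stage setting mentioned above amounts to chaining Bayesian linear regression updates, with each stage's posterior serving as the next stage's prior. A minimal sketch under illustrative assumptions (the true model, noise level, prior, and batch sizes are all made up for the example):

```python
import numpy as np

def posterior_update(mu0, Sigma0, X, y, sigma2=1.0):
    """One stage of Bayesian linear regression: fold a batch (X, y) into
    the prior N(mu0, Sigma0) and return the posterior N(mu, Sigma),
    which serves as the prior for the next stage."""
    Sigma0_inv = np.linalg.inv(Sigma0)
    Sigma_inv = Sigma0_inv + (X.T @ X) / sigma2    # precision accumulates
    Sigma = np.linalg.inv(Sigma_inv)
    mu = Sigma @ (Sigma0_inv @ mu0 + X.T @ y / sigma2)
    return mu, Sigma

# Two stages of data from a true model beta = [1, -1].
rng = np.random.default_rng(1)
beta = np.array([1.0, -1.0])
mu, Sigma = np.zeros(2), 10.0 * np.eye(2)
for _ in range(2):                       # each stage's posterior is the next prior
    X = rng.normal(size=(50, 2))
    y = X @ beta + rng.normal(scale=1.0, size=50)
    mu, Sigma = posterior_update(mu, Sigma, X, y)
print(np.round(mu, 2))  # posterior mean approaches the true model
```

The interesting dynamics arise when the rate allocation in each stage is re-optimized against the updated priors, which is exactly the open problem stated above.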
Acknowledgment
The authors gratefully acknowledge support from the National Science Foundation (grants 1718355, 2107062, and 2112471).
References
- [1] N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, “Mobile edge computing: A survey,” IEEE Internet of Things Journal, vol. 5, no. 1, pp. 450–465, 2017.
- [2] B. Yang, X. Cao, X. Li, Q. Zhang, and L. Qian, “Mobile-edge-computing-based hierarchical machine learning tasks distribution for IIoT,” IEEE Internet of Things Journal, vol. 7, no. 3, pp. 2169–2180, 2019.
- [3] V. Albino, U. Berardi, and R. M. Dangelico, “Smart cities: Definitions, dimensions, performance, and initiatives,” Journal of urban technology, vol. 22, no. 1, pp. 3–21, 2015.
- [4] M. Mohammadi and A. Al-Fuqaha, “Enabling cognitive smart cities using big data and machine learning: Approaches and challenges,” IEEE Communications Magazine, vol. 56, no. 2, pp. 94–101, 2018.
- [5] S. Boyd, S. P. Boyd, and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
- [6] Y. Deshpande and A. Montanari, “Linear bandits in high dimension and recommendation systems,” in 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2012, pp. 1750–1754.
- [7] B. Settles, Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.
- [8] N. Polyzotis, S. Roy, S. E. Whang, and M. Zinkevich, “Data lifecycle challenges in production machine learning: a survey,” ACM SIGMOD Record, vol. 47, no. 2, pp. 17–28, 2018.
- [9] T. Horel, S. Ioannidis, and S. Muthukrishnan, “Budget feasible mechanisms for experimental design,” in Latin American Symposium on Theoretical Informatics. Springer, 2014, pp. 719–730.
- [10] Y. Guo, J. Dy, D. Erdogmus, J. Kalpathy-Cramer, S. Ostmo, J. P. Campbell, M. F. Chiang, and S. Ioannidis, “Accelerated experimental design for pairwise comparisons,” in Proceedings of the 2019 SIAM International Conference on Data Mining. SIAM, 2019, pp. 432–440.
- [11] N. Gast, S. Ioannidis, P. Loiseau, and B. Roussillon, “Linear regression from strategic data sources,” ACM Transactions on Economics and Computation (TEAC), vol. 8, no. 2, pp. 1–24, 2020.
- [12] Y. Guo, P. Tian, J. Kalpathy-Cramer, S. Ostmo, J. P. Campbell, M. F. Chiang, D. Erdogmus, J. G. Dy, and S. Ioannidis, “Experimental design under the Bradley-Terry model,” in IJCAI, 2018, pp. 2198–2204.
- [13] P. Flaherty, A. Arkin, and M. I. Jordan, “Robust design of biological experiments,” in Advances in neural information processing systems, 2006, pp. 363–370.
- [14] A. A. Bian, B. Mirzasoleiman, J. Buhmann, and A. Krause, “Guaranteed non-convex optimization: Submodular maximization over continuous domains,” in Artificial Intelligence and Statistics. PMLR, 2017, pp. 111–120.
- [15] L. Tong, Y. Li, and W. Gao, “A hierarchical edge cloud architecture for mobile computing,” in IEEE INFOCOM 2016-IEEE International Conference on Computer Communications, 2016, pp. 1–9.
- [16] K. Poularakis, J. Llorca, A. M. Tulino, I. Taylor, and L. Tassiulas, “Joint service placement and request routing in multi-cell mobile edge computing networks,” in IEEE INFOCOM 2019-IEEE Conference on Computer Communications, 2019, pp. 10–18.
- [17] K. Kamran, E. Yeh, and Q. Ma, “Deco: Joint computation, caching and forwarding in data-centric computing networks,” in Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2019, pp. 111–120.
- [18] A. Lalitha, O. C. Kilinc, T. Javidi, and F. Koushanfar, “Peer-to-peer federated learning on graphs,” arXiv preprint arXiv:1901.11173, 2019.
- [19] G. Neglia, G. Calbi, D. Towsley, and G. Vardoyan, “The role of network topology for distributed machine learning,” in IEEE INFOCOM 2019-IEEE Conference on Computer Communications, 2019, pp. 2350–2358.
- [20] S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, “When edge meets learning: Adaptive control for resource-constrained distributed machine learning,” in IEEE INFOCOM 2018-IEEE Conference on Computer Communications, 2018, pp. 63–71.
- [21] K. Zhang, Z. Yang, H. Liu, T. Zhang, and T. Basar, “Fully decentralized multi-agent reinforcement learning with networked agents,” in International Conference on Machine Learning. PMLR, 2018, pp. 5872–5881.
- [22] S. Wang, Y. Ruan, Y. Tu, S. Wagle, C. G. Brinton, and C. Joe-Wong, “Network-aware optimization of distributed learning for fog computing,” IEEE/ACM Transactions on Networking, 2021.
- [23] F. Pukelsheim, Optimal design of experiments. Society for Industrial and Applied Mathematics, 2006.
- [24] X. Huan and Y. M. Marzouk, “Simulation-based optimal Bayesian experimental design for nonlinear systems,” Journal of Computational Physics, vol. 232, no. 1, pp. 288–317, 2013.
- [25] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, “An analysis of approximations for maximizing submodular set functions—i,” Mathematical programming, vol. 14, no. 1, pp. 265–294, 1978.
- [26] G. Calinescu, C. Chekuri, M. Pal, and J. Vondrák, “Maximizing a monotone submodular function subject to a matroid constraint,” SIAM Journal on Computing, vol. 40, no. 6, pp. 1740–1766, 2011.
- [27] T. Soma and Y. Yoshida, “A generalization of submodular cover via the diminishing return property on the integer lattice,” Advances in neural information processing systems, vol. 28, pp. 847–855, 2015.
- [28] H. Hassani, M. Soltanolkotabi, and A. Karbasi, “Gradient methods for submodular maximization,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 5843–5853.
- [29] A. A. Ageev and M. I. Sviridenko, “Pipage rounding: A new method of constructing algorithms with proven performance guarantee,” Journal of Combinatorial Optimization, vol. 8, no. 3, pp. 307–328, 2004.
- [30] C. Chekuri, J. Vondrák, and R. Zenklusen, “Dependent randomized rounding via exchange properties of combinatorial structures,” in 2010 IEEE 51st Annual Symposium on Foundations of Computer Science. IEEE, 2010, pp. 575–584.
- [31] T. Soma and Y. Yoshida, “Maximizing monotone submodular functions over the integer lattice,” Mathematical Programming, vol. 172, no. 1, pp. 539–563, 2018.
- [32] S. Ioannidis and E. Yeh, “Adaptive caching networks with optimality guarantees,” IEEE/ACM Transactions on Networking, vol. 26, no. 2, pp. 737–750, 2018.
- [33] K. Poularakis and L. Tassiulas, “On the complexity of optimal content placement in hierarchical caching networks,” IEEE Transactions on Communications, vol. 64, no. 5, pp. 2092–2103, 2016.
- [34] S. Ioannidis and E. Yeh, “Jointly optimal routing and caching for arbitrary network topologies,” IEEE Journal on Selected Areas in Communications, vol. 36, no. 6, pp. 1258–1275, 2018.
- [35] K. Kamran, A. Moharrer, S. Ioannidis, and E. Yeh, “Rate allocation and content placement in cache networks,” in IEEE INFOCOM, 2021.
- [36] T. Wu, P. Yang, H. Dai, W. Xu, and M. Xu, “Charging oriented sensor placement and flexible scheduling in rechargeable wsns,” in IEEE INFOCOM 2019-IEEE Conference on Computer Communications, 2019, pp. 73–81.
- [37] G. Sallam and B. Ji, “Joint placement and allocation of virtual network functions with budget and capacity constraints,” in IEEE INFOCOM 2019-IEEE Conference on Computer Communications, 2019, pp. 523–531.
- [38] Z. Zheng and N. B. Shroff, “Submodular utility maximization for deadline constrained data collection in sensor networks,” IEEE Transactions on Automatic Control, vol. 59, no. 9, pp. 2400–2412, 2014.
- [39] D. Yang, G. Xue, X. Fang, and J. Tang, “Crowdsourcing to smartphones: Incentive mechanism design for mobile phone sensing,” in Proceedings of the 18th annual international conference on Mobile computing and networking, 2012, pp. 173–184.
- [40] A. Krause and C. Guestrin, “Beyond convexity: Submodularity in machine learning,” ICML Tutorials, 2008.
- [41] G. James, D. Witten, T. Hastie, and R. Tibshirani, An introduction to statistical learning. Springer, 2013, vol. 112.
- [42] R. G. Gallager, Stochastic processes: theory for applications. Cambridge University Press, 2013.
- [43] F. P. Kelly, Reversibility and stochastic networks. Cambridge University Press, 2011.
- [44] C. L. Canonne, “A short note on Poisson tail bounds,” http://www.cs.columbia.edu/~ccanonne/files/misc/2017-poissonconcentration.pdf.
- [45] N. Alon and J. H. Spencer, The probabilistic method. John Wiley & Sons, 2004.
- [46] J. Friedman, T. Hastie, R. Tibshirani et al., The elements of statistical learning. Springer series in statistics New York, 2001, vol. 1, no. 10.
- [47] J. Kleinberg, “The small-world phenomenon: An algorithmic perspective,” Cornell University, Tech. Rep., 1999.
- [48] D. Rossi and G. Rossini, “Caching performance of content centric networks under multi-path routing (and more),” Technical report, Telecom ParisTech, pp. 1–6, 2011.
- [49] R. Srikant, The Mathematics of Internet Congestion Control. Birkhäuser Boston, MA: Springer Science & Business Media, 2012.
- [50] T. S. Jaakkola and M. I. Jordan, “A variational approach to Bayesian logistic regression models and their extensions,” in Sixth International Workshop on Artificial Intelligence and Statistics. PMLR, 1997, pp. 283–294.
- [51] K. B. Petersen and M. S. Pedersen, “The matrix cookbook (version: November 15, 2012),” 2012.
- [52] R. A. Horn and C. R. Johnson, Matrix analysis. Cambridge university press, 2012.
-A Proof of Lemma 1
Proof.
We extend the proof in Appendix A of [9] for the submodularity of D-optimal design over a set to the integer lattice: For and , we have
where , and the last equality follows from Eq. (24) in [51]. The monotonicity of follows because is positive semi-definite. Finally, since the matrix inverse is decreasing over the positive semi-definite order, we have , , which leads to . ∎
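The last equality in the proof invokes the matrix determinant lemma (Eq. (24) in [51]), $\det(A + xx^\top) = \det(A)\,(1 + x^\top A^{-1} x)$. A quick numeric sanity check of that identity, and of the monotonicity consequence, on a randomly generated positive definite matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrix determinant lemma: det(A + x x^T) = det(A) * (1 + x^T A^{-1} x).
d = 5
B = rng.normal(size=(d, d))
A = B @ B.T + np.eye(d)          # a positive definite matrix
x = rng.normal(size=d)

lhs = np.linalg.det(A + np.outer(x, x))
rhs = np.linalg.det(A) * (1.0 + x @ np.linalg.inv(A) @ x)
assert np.isclose(lhs, rhs)

# Consequence used for monotonicity: adding a sample never decreases
# the determinant, since 1 + x^T A^{-1} x >= 1 when A^{-1} is PSD.
assert lhs >= np.linalg.det(A)
```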
-B Proof of Lemma 2
Proof.
To start with, , as a convex combination of points in . Next, consider the point , in which is the solution at current iteration and is the optimal solution. Because is non-decreasing (Thm. 1), we have
(33)
A DR-submodular continuous function is concave along any non-negative direction, and any non-positive direction (see e.g., Prop. 4 in [14]), thus , where , is concave, hence,
(34)
Then,
(35)
where the second inequality is because the LHS maximizes the inner product, and the third inequality is because and is positive (Thm. 1). From the definition of the Hessian, we can show that is bounded by (because the 2-norm is smaller than the Frobenius norm [52]); thus is the Lipschitz constant of . Then, we have
After rearrangement, we have
since . By telescoping,
Finally, as and , we have
∎
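For intuition, a minimal sketch of the Frank-Wolfe variant analyzed above: each iterate moves a $1/K$ fraction toward the vertex returned by a linear maximization oracle, so the final point is a convex combination of feasible points and no projection is needed. The toy separable objective and capped-box polytope are illustrative, not the paper's network constraints.

```python
import numpy as np

def frank_wolfe(grad, lmo, d, iters=100):
    """Frank-Wolfe variant for monotone DR-submodular maximization:
    take a step of size 1/iters toward the LMO vertex at each iteration."""
    y = np.zeros(d)
    for _ in range(iters):
        v = lmo(grad(y))      # v = argmax_{v in P} <v, grad f(y)>
        y = y + v / iters     # fixed step size 1/K; y stays in P
    return y

# Toy instance: f(y) = sum_i log(1 + y_i) over {y in [0,1]^d : sum_i y_i <= B}.
d, B = 4, 2.0
grad = lambda y: 1.0 / (1.0 + y)

def lmo(g):
    # Greedy LP solution over the capped box: pour the B units of budget
    # into the coordinates with the largest gradient entries.
    v, budget = np.zeros_like(g), B
    for i in np.argsort(-g):
        v[i] = min(1.0, budget)
        budget -= v[i]
        if budget <= 0:
            break
    return v

y = frank_wolfe(grad, lmo, d)
assert np.isclose(y.sum(), B) and np.all(y <= 1.0 + 1e-9)
```

Because every LMO vertex exhausts the budget, the final iterate lies on the budget face of the polytope; by symmetry of the toy objective it spreads the budget roughly evenly across coordinates.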
-C Proof of Lemma 4
-D Proof of Lemma 5
Proof.
Our final estimator of the partial derivative is given by
where
We define
where
We have , because
for any , . By Chernoff bounds described by Theorem A.1.16 in [45], we have
Suppose we let , where is the step size. Then, we have
with probability greater than . By Lemma 4, we have . Thus, we have
(36)
We now use the superscript to represent the parameters for the th iteration: we find that maximizes . Let be the vector that maximizes instead and define where and . We have
where the first and last inequalities are due to (36), and the second inequality is because maximizes . The above inequality requires (36) to hold for every partial derivative; by the union bound, it holds with probability greater than . ∎
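The truncation level in the gradient estimator is chosen via Poisson tail bounds. The snippet below numerically checks a standard Chernoff-style Poisson upper tail bound (cf. [44]) against the exact tail, and shows that truncating the support a few standard deviations above the mean leaves negligible mass; the specific rate and truncation points are illustrative.

```python
import math

def poisson_sf(lam, k):
    """Exact P[N >= k] for N ~ Poisson(lam), via the complement's pmf."""
    cdf, pmf = 0.0, math.exp(-lam)
    for j in range(k):
        cdf += pmf
        pmf *= lam / (j + 1)   # pmf recursion: P[N=j+1] = P[N=j] * lam/(j+1)
    return 1.0 - cdf

# A standard Poisson upper tail bound (cf. [44]):
#   P[N >= lam + t] <= exp(-t^2 / (2 * (lam + t))),  t > 0.
lam = 20.0
for t in (5.0, 10.0, 20.0):
    exact = poisson_sf(lam, math.ceil(lam + t))
    bound = math.exp(-t * t / (2.0 * (lam + t)))
    assert exact <= bound      # the bound dominates the exact tail

# Truncating the support at lam + 20 (about 4.5 std devs) already
# captures all but a tiny fraction of the probability mass:
assert poisson_sf(lam, 40) < 1e-4
```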