Decentralized Network Topology Design for Task Offloading in Mobile Edge Computing
Abstract
The rise of delay-sensitive yet computing-intensive Internet of Things (IoT) applications poses challenges due to the limited processing power of IoT devices. Mobile Edge Computing (MEC) offers a promising solution to address these challenges by placing computing servers close to end users. Despite extensive research on MEC, optimizing network topology to improve computational efficiency remains underexplored. Recognizing the critical role of network topology, we introduce a novel decentralized network topology design strategy for task offloading (DNTD-TO) that jointly considers topology design and task allocation. Inspired by communication and sensor networks, DNTD-TO efficiently constructs three-layered network structures for task offloading and generates optimal task allocations for these structures. Comparisons with existing topology design methods demonstrate the promising performance of our approach.
Index Terms:
MEC, Task offloading, Network topology design
I Introduction
With the advancement of Internet of Things (IoT) technology, many delay-sensitive yet computing-intensive applications have emerged, such as autonomous driving, face recognition, and virtual reality [1]. However, IoT devices typically have limited computing power, making it difficult to meet the demands of these tasks. Centralized cloud computing is traditionally used to process such tasks, but offloading them to the remote cloud can cause significant transmission delays that degrade user Quality of Service (QoS). To address this issue, Mobile Edge Computing (MEC) solutions were introduced [2], which place servers closer to end users at the network edge, such as at base stations, to reduce transmission delays.
To better serve end users, extensive research has been conducted in the field of MEC, addressing various design aspects such as system deployment, task offloading, resource allocation, mobility management, and privacy and security [3]. Nevertheless, little attention has been given to optimizing network topology to enhance computational efficiency. Most existing studies primarily focus on offloading tasks from users to one or more nearby MEC servers within communication range [4, 5]. A few studies [6, 7] have explored offloading tasks to servers multiple hops away, but these designs did not consider the impact of network topology. In our prior work [8], we proposed a multi-layered task offloading framework and demonstrated that computational efficiency can be improved by leveraging layered network structures. We also showed that computational efficiency is influenced not only by the computing and communication characteristics of the servers but also by their network topology. In this paper, we aim to further investigate the joint design of layered network structure and task allocation.
Layered structures have been widely used in communication and practical sensor networks due to energy efficiency and network management simplicity [9]. In these networks, a base station is typically present alongside several clusters of sensors or communication devices, with each cluster comprising a Cluster Head (CH) and multiple Cluster Members (CMs). The CH collects data from its CMs and transmits it to the base station. Methods for selecting CHs and CMs can be broadly categorized into two types: cluster-based [10, 11, 12, 13] and grid-based [14, 15, 16, 17]. In cluster-based methods, CHs are selected directly based on certain criteria. In contrast, grid-based methods first divide the network into grids, and then select CHs within each grid. Despite the widespread use of layered structures in communication and sensor networks, their design for task offloading remains underdeveloped.
In this paper, we introduce a decentralized network topology design strategy for task offloading (DNTD-TO). We explore three-layered network structures, similar to those commonly used in communication and sensor networks, to facilitate computing. In this setup, tasks are offloaded from the root node (referred to as master) to servers in the second layer (CHs), which then distribute the tasks to their child nodes in the third layer (CMs). To select CHs and CMs, our strategy iterates through two nested phases: a local cluster formation phase, where each server within master’s communication range selects CMs in a decentralized manner, and a cluster selection phase where the master selects CHs and their associated CMs. The selection of CHs and CMs is based on servers’ task processing capacities and their potential to enhance computational efficiency. Optimal task allocation is integrated into every step of the selection process.
II System Modeling and Problem Formulation
Consider an MEC system with servers scattered in an open area, each communicating wirelessly only with nearby servers within a uniform communication range R. Suppose one of the servers, referred to as the master (e.g., a server located at or near a base station), receives computation task processing requests from end users with a total task size of s. To process the tasks efficiently, the master allocates them, assuming they are arbitrarily decomposable, to a subset of its neighbors, referred to as CHs. The CHs then decide whether to further offload these tasks to their own neighbors, referred to as CMs. Since simply offloading tasks to all neighbors may not yield optimal performance, this study investigates the joint optimization of network topology and task allocation.
II-A System Modeling
The MEC system is modeled as a graph G = (V, E), where V denotes the set of servers, with the master server labeled as 0, and E represents the set of edges. An edge between two servers, i and j, is established when they are within each other’s communication range, meaning their Euclidean distance, denoted as d_ij, satisfies d_ij ≤ R. The connectivity is described by the adjacency matrix A, where each element A_ij is equal to 1 if there is an edge between server i and server j, and 0 otherwise.
Additionally, define N_i = {j ∈ V | A_ij = 1} as the set of neighbors of server i that are within its communication range. By strategically selecting CHs from the set N_0 to receive tasks offloaded from the master, and assigning CMs from the set N_i to each CH i to handle tasks offloaded from CH i, the master, CHs, and CMs form a three-layer tree topology, denoted as T. We assume that each CM can belong to only one CH.
To describe data transmission between two servers, we adopt the model in [18]. Specifically, suppose the servers communicate using the Orthogonal Frequency Division Multiplexing protocol [19], where the bandwidth is equally divided when a server transmits data simultaneously to its connected servers via orthogonal channels. For simplicity of analysis, we assume the channels are free from interference. If a server i (e.g., a CH) transmits data to m neighboring servers (e.g., associated CMs) simultaneously, the data transmission rate for the link between server i and its j-th neighbor is given by r_ij = (B/m) log2(1 + γ_ij), where B (MHz) denotes the total bandwidth, and γ_ij represents the signal-to-noise ratio (SNR).
The computation model from [19] is used in this study to describe the task processing time. Let c represent the number of CPU cycles required to compute 1 bit of data, and f_i (MHz) denote the computation capacity of server i. Given a task of size x (Gbits), the time taken by server i to process the task is ρ_i x, where ρ_i (s/Gbit) indicates the time taken by server i to process one Gbit of data.
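To make the two models concrete, the following sketch computes the shared-bandwidth link rate and the per-Gbit processing time; the function names, parameter values, and unit conventions here are illustrative assumptions, not part of the system model.

```python
import math

def link_rate(B_mhz: float, m: int, snr: float) -> float:
    """Rate (Mbit/s) of one link when a server transmits to m neighbors
    simultaneously, splitting bandwidth B (MHz) equally; snr is linear."""
    return (B_mhz / m) * math.log2(1.0 + snr)

def unit_process_time(c_cycles_per_bit: float, f_mhz: float) -> float:
    """Time (s) for a server with capacity f (MHz) to process one Gbit."""
    return c_cycles_per_bit * 1e9 / (f_mhz * 1e6)

# Hypothetical numbers: 50 MHz split over 2 links, linear SNR = 3
r = link_rate(50.0, 2, 3.0)
rho = unit_process_time(10.0, 500.0)  # 10 cycles/bit, 500 MHz server
```

With these hypothetical values, the rate is (50/2)·log2(4) = 50 Mbit/s and the per-Gbit processing time is 20 s, illustrating how bandwidth sharing halves the per-link rate.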
II-B Problem Formulation
In this subsection, we present the mathematical formulation of the problem.
The master selects a set of CHs from the set N_0 for task offloading. To describe the selection of CHs, we introduce binary decision variables α_i, where i ∈ N_0. The variable α_i equals 1 if server i is selected as a CH, and 0 otherwise.
After receiving tasks from the master, the CHs select their CMs for further task offloading. To describe the selection of CMs, we introduce decision variables β_i ∈ {0,1}^|N_i|, where i ∈ N_0 and |·| denotes the cardinality of a set. β_i is a binary vector whose j-th element, β_ij, equals 1 if server j is selected to join the cluster headed by server i, and 0 otherwise. The resulting cluster with CH i, denoted as C_i, is given by C_i = {i} ∪ {j ∈ N_i | β_ij = 1}.
Additionally, we introduce continuous decision variables x_i ≥ 0 to represent the size of the tasks offloaded to server i, where i ∈ V. In the case when i = 0, x_0 indicates the size of the tasks processed at the master.
The time required for each server to receive its assigned tasks can then be expressed as:
(1) t_i^rx = x_i / r_0i if server i is a CH; t_i^rx = x_i / r_0h + x_i / r_hi if server i is a CM of CH h; and t_i^rx = 0 otherwise.
Notably, if server is not selected for task offloading, the associated transmission delay is 0.
The total time required for each server i to receive and process the assigned computation tasks is given by t_i = t_i^rx + ρ_i x_i, where i ∈ V. Here, we assume the generated results are small in size, making the transmission delay for sending them back to the master negligible, as is often assumed in existing studies (e.g., [20]). The total task completion time can then be written as T = max_{i ∈ V} t_i.
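For a fixed tree and allocation, the completion time can be evaluated directly as the maximum over servers of receive delay plus compute delay. The sketch below uses a hypothetical encoding (a `parents` map for the tree and a precomputed `rate` table for achieved link rates), not notation from the paper:

```python
def completion_time(parents, x, rate, rho):
    """Completion time max_i (receive delay + compute delay).
    parents[i]: parent of i in the tree (None for the master);
    rate[(p, i)]: achieved rate of link p -> i; rho[i]: per-unit compute time;
    x[i]: task size assigned to server i."""
    T = 0.0
    for i, xi in x.items():
        t_rx = 0.0
        node = i
        while parents[node] is not None:   # data relays along the path
            p = parents[node]
            t_rx += xi / rate[(p, node)]
            node = p
        T = max(T, t_rx + rho[i] * xi)
    return T
```

A CM's data traverses two hops (master to CH, then CH to CM), which the path walk above accounts for; the master itself incurs no receive delay.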
The objective of this study is to minimize the total task completion time by jointly optimizing the selection of CHs and CMs and the task allocation, which is formulated as follows:

(P)  min_{α, β, x} max_{i ∈ V} t_i,  s.t. α_i ∈ {0,1}, β_ij ∈ {0,1}; Σ_{i ∈ N_0} α_i β_ij ≤ 1, ∀ j; Σ_{i ∈ V} x_i = s, x_i ≥ 0.

The first group of constraints ensures that the decision variables take valid values. The second ensures that clusters do not overlap. The remaining constraints maintain the integrity of task sizes. It should be noted that solving this problem directly is challenging due to the nonlinear coupling between the topology variables (α, β) and the task allocation x in the delay terms t_i.
III Decentralized Network Topology Design for Task Offloading (DNTD-TO)
In this section, we introduce our approach, DNTD-TO, for solving the problem formulated above. This method consists of two nested phases: (1) the Local Cluster Formation (LCF) phase, and (2) the Cluster Selection phase.
III-A Local Cluster Formation (LCF)
In this phase, each server within the master’s communication range selects its CMs in a decentralized manner, forming a candidate cluster with itself as the CH. These candidate clusters will then be examined by the master in the cluster selection phase as detailed in the next subsection. Notably, this phase allows a server to join multiple clusters.
To select CMs, each server i employs a forward selection mechanism, picking CMs from its neighbors one by one until either no further performance gains can be achieved or all neighbors have been evaluated. In each iteration, server i adds the neighboring server that maximally reduces the task processing time, assuming a task of arbitrary size is assigned to server i. This selection is achieved by identifying the neighboring server with the highest processing capacity, defined via the time to receive and process a unit-sized task, and determining whether its addition would reduce the overall task processing time. This is challenging, as a neighboring server’s processing capacity depends on the number of servers in the cluster due to shared bandwidth. Additionally, determining the time savings from adding a server requires solving an optimization problem for task allocation. In the following, we first derive the processing capacity, denoted as a_j.
Initially, the cluster C_i, with server i as the CH, contains no CMs. Therefore, when a neighboring server is added to the cluster, this server can utilize the entire bandwidth resource. Then, for each server j within server i’s communication range, its processing capacity can be derived as a_j = ρ_j + 1/(B log2(1 + γ_ij)), where j belongs to the set of neighboring servers that have not been added to the cluster. Nevertheless, in subsequent iterations, as the cluster becomes nonempty, a neighboring server can no longer utilize the entire bandwidth resource, and its processing capacity is given by:
(2) a_j = ρ_j + (m + 1)/(B log2(1 + γ_ij)), where m denotes the number of CMs already in the cluster.
Notably, the processing capacity of the CH itself is always a_i = ρ_i due to local computing.
To determine whether adding a neighboring server would reduce the task processing time, we define a performance indicator that compares the minimum task processing time before and after the server is added. Specifically, consider cluster C_i at the k-th iteration. For a task of size x assigned to server i, the minimum task processing time and the optimal task allocation can be derived by solving the following optimization problem:

(P1)  min_{x_j} max_{j ∈ C_i} t_j,  s.t. Σ_{j ∈ C_i} x_j = x, x_j ≥ 0,
where max_{j ∈ C_i} t_j is the task processing time of the cluster and t_j = a_j x_j is the time required for server j to receive and process its assigned task x_j. The performance indicator is then defined as Δ = T^− − T^+, where T^− and T^+ represent the minimum task processing time obtained before and after adding server u to cluster C_i, respectively. Therefore, if Δ > 0, adding the server to the cluster will improve performance; otherwise, it will not.
To derive the formula for Δ, we solve the above optimization problem, which leads to the following lemma, with the proof provided in the Appendix.
Lemma 1.
Consider problem (P1). The optimal task allocation, denoted as x_j*, j ∈ C_i, satisfies a_j x_j* = a_l x_l*, ∀ j, l ∈ C_i, when x > 0. Consequently, the minimum task processing time is given by T* = x / Σ_{j ∈ C_i} (1/a_j), with x_j* = T*/a_j. In the special case where x = 0, we have T* = 0.
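Lemma 1 gives the optimum in closed form, so it can be computed without a solver. A minimal sketch, where the list `a` holds the per-unit receive-and-process times a_j of the cluster members:

```python
def optimal_allocation(a, x):
    """Closed-form optimum of the per-cluster problem: equalize a_j * x_j.
    a: per-unit receive-and-process times of cluster members; x: total task."""
    if x == 0:
        return 0.0, [0.0] * len(a)
    inv_sum = sum(1.0 / aj for aj in a)
    T = x / inv_sum                  # minimum task processing time
    alloc = [T / aj for aj in a]     # a_j * x_j = T for every member
    return T, alloc
```

For a = [1, 2, 4] and x = 7, the optimum equalizes a_j x_j, giving T* = 4 and the allocation (4, 2, 1): every member finishes at the same time.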
From the above lemma, we can derive that T^− = x / Σ_{j ∈ C_i} (1/a_j^−) and T^+ = x / Σ_{j ∈ C_i ∪ {u}} (1/a_j^+). Here, a_j^− and a_j^+ represent the processing capacities of server j before and after adding server u to cluster C_i, respectively, while x^− and x^+ denote the corresponding optimal task allocations. Therefore, we have Δ = T^− − T^+. In this equation, the processing capacities a_j^− and a_j^+ can be readily computed using (2): if j = i, then a_j^− = a_j^+ = ρ_i; otherwise, a_j^+ differs from a_j^− only through the additional bandwidth sharing introduced by server u. However, determining the optimal task allocations x^− and x^+ by solving (P1) at each iteration is time-consuming. To address this issue, we introduce an iterative method to efficiently compute the values of T^−, T^+, and Δ.
Initially, C_i contains no CMs. Hence, we can easily obtain that T^− = ρ_i x before adding server u, and T^+ = x / (1/ρ_i + 1/a_u^+) after adding server u, based on Lemma 1. Thus, Δ = T^− − T^+. In subsequent iterations, to derive T^+, we note the existence of the following relationships:
(3a) a_j^(k) x_j^(k) = T^(k), ∀ j ∈ C_i^(k),
(3b) Σ_{j ∈ C_i^(k)} x_j^(k) = x,
(3c) a_j^(k) = a_j^(k−1) + 1/(B log2(1 + γ_ij)), ∀ j ∈ C_i^(k−1), j ≠ i,

which yields

(4) T^(k) = x / Σ_{j ∈ C_i^(k)} (1/a_j^(k)).

Since 1/a_j^(k−1) = x_j^(k−1) / T^(k−1), a_j^(k) = a_j^(k−1) + 1/(B log2(1 + γ_ij)), and C_i^(k) = C_i^(k−1) ∪ {u}, we have

(5) T^(k) = x / ( Σ_{j ∈ C_i^(k−1)} 1/( T^(k−1)/x_j^(k−1) + 1/(B log2(1 + γ_ij)) ) + 1/a_u^(k) ).
Since x_j^(k−1) was obtained in the previous iteration, where the superscript (k−1) indicates the iteration index, (5) can be readily computed. Moreover, once T^(k) is obtained, we can derive x_j^(k) by x_j^(k) = T^(k)/a_j^(k), and Δ^(k) can be obtained by Δ^(k) = T^(k−1) − T^(k). Algorithm 1 summarizes the procedure.
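The LCF loop can be sketched as a greedy forward selection driven by Lemma 1. This is an illustrative reimplementation rather than Algorithm 1 verbatim; `rho` holds hypothetical per-unit compute times and `tau[j]` the per-unit transmission time of link (i, j) at full bandwidth, so a candidate's capacity is its compute time plus a bandwidth-shared transmission term.

```python
def lcf(rho_ch, neighbors, rho, tau):
    """Greedy local cluster formation for one candidate CH.
    rho_ch: CH's per-unit compute time; neighbors: candidate CM ids;
    rho[j]: per-unit compute time of j; tau[j]: per-unit transmit time
    of link CH -> j before bandwidth sharing."""
    cluster = []                      # selected CMs, in order of addition
    T = rho_ch                        # unit-task time with local computing only
    remaining = set(neighbors)
    while remaining:
        m = len(cluster)
        # capacity of each candidate if one more CM joins (bandwidth / (m+1))
        cap = {j: rho[j] + (m + 1) * tau[j] for j in remaining}
        best = min(cap, key=cap.get)  # smallest time = highest capacity
        caps = ([rho_ch]
                + [rho[j] + (m + 1) * tau[j] for j in cluster]
                + [cap[best]])
        T_new = 1.0 / sum(1.0 / a for a in caps)   # Lemma 1, unit task
        if T_new >= T:                # no further gain: stop
            break
        cluster.append(best)
        remaining.discard(best)
        T = T_new
    return cluster, T
```

Note how adding a member both contributes a new 1/a term and inflates the existing members' capacities through bandwidth sharing, which is exactly why the greedy loop can stop before exhausting the neighbor set.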
Remark 1.
III-B Cluster Selection
In this phase, the master examines the candidate clusters formed in the LCF phase, selects the CHs and their associated CMs, and resolves any overlaps between clusters.
To select CHs, the master evaluates each server within its communication range, following a procedure similar to that of LCF. Particularly, in each iteration, the master identifies the neighboring server with the highest processing capacity and adds it to the set of CHs, denoted as H, if doing so would reduce the task processing time. The iteration stops when no further reduction in processing time is achievable or when all neighboring servers of the master have been examined.
Unlike the processing capacity defined for individual servers in the LCF phase, the processing capacity of each candidate CH here is defined as the time required for it and its CMs, as a team, to receive and process a unit-sized task. Specifically, for server i ∈ N_0, the time to receive a unit-sized task from the master is 1/r_0i. Moreover, once server i receives this task, according to Lemma 1, the minimum time required for it and its CMs to process this task is T_i^u = 1 / Σ_{j ∈ C_i} (1/a_j), which is the output of Algorithm 2. Therefore, the processing capacity of server i in the cluster selection phase, denoted as b_i, is given by b_i = 1/r_0i + T_i^u. Notably, the processing capacity of the master is b_0 = ρ_0.
Moreover, to assess whether adding server u to the set of CHs, H, would improve performance, we use the same performance indicator Δ, which can be computed similarly as detailed in the previous subsection. Particularly, at the k-th iteration, we have:

(7) Δ^(k) = T^(k−1) − T^(k),

where b_j^(k−1) and b_j^(k) denote the processing capacities of candidate CH j before and after adding server u to the CH set, and we have

(8) b_j^(k) = b_j^(k−1) + 1/(B log2(1 + γ_0j)), ∀ j ∈ H^(k−1),

when the master’s bandwidth is divided among one more CH. x^(k−1) and x^(k) denote the optimal task allocations before and after adding server u. Similarly, the updating equation for T^(k) mirrors (5), and we have:

(9) T^(k) = s / ( x_0^(k−1)/T^(k−1) + Σ_{j ∈ H^(k−1)} 1/( T^(k−1)/x_j^(k−1) + 1/(B log2(1 + γ_0j)) ) + 1/b_u^(k) ).
To resolve any overlaps between clusters, the selected CH and its CMs are “removed” from the network at the end of each iteration and do not participate in subsequent iterations. At the start of each new iteration, the unselected neighbors of the master undergo the LCF phase again to update their CMs, ensuring that clusters remain distinct and non-overlapping. Algorithm 3 outlines the core procedure of our approach, which constructs the three-layer tree topology and determines the associated optimal task allocation.
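The cluster selection phase runs the same greedy test at the master, but over team capacities. An illustrative sketch under the same assumptions as before, where `team_unit_time[j]` stands for the unit-task processing time of candidate CH j's cluster (the quantity produced by the LCF phase) and `sigma[j]` for the per-unit master-to-j transmission time at full bandwidth; both names are hypothetical:

```python
def select_chs(rho_master, candidates, team_unit_time, sigma):
    """Greedy CH selection at the master.
    candidates: neighbor ids of the master; team_unit_time[j]: unit-task
    time of j's candidate cluster; sigma[j]: per-unit transmit time of
    link master -> j before bandwidth sharing."""
    chs = []
    T = rho_master                    # master processes everything locally
    remaining = set(candidates)
    while remaining:
        k = len(chs)
        # team capacity of each candidate if one more CH is added
        b = {j: team_unit_time[j] + (k + 1) * sigma[j] for j in remaining}
        best = min(b, key=b.get)
        caps = ([rho_master]
                + [team_unit_time[j] + (k + 1) * sigma[j] for j in chs]
                + [b[best]])
        T_new = 1.0 / sum(1.0 / c for c in caps)
        if T_new >= T:                # no further gain: stop
            break
        chs.append(best)
        remaining.discard(best)
        T = T_new
    return chs, T
```

In the full procedure each accepted CH and its CMs would also be removed from the network and the remaining candidates re-run LCF, which this compact sketch omits.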
Remark 2.
The task allocation generated by our approach in Algorithm 3 is optimal for the constructed topology.
IV Simulation studies
In this section, we conduct simulation studies to evaluate the performance of our approach.
IV-A Experiment Setup
We randomly generate and distribute servers within a square area, with the exception of the master, which is fixed at a preset location. The computing power of each server is sampled from a uniform distribution (in MHz). The SNR is sampled from a uniform distribution (in dBm). The task size is given in Gbits, and the bandwidth is set to 50 MHz. To evaluate the performance of our method, we compare it with the following benchmarks:
• Unequal Cluster (Unequal) [14]: This method selects CHs by comparing servers’ computing power; if two CHs are within each other’s communication range, the server with higher computing power is selected as the CH. CMs join the nearest CH.
• Leach-C [21]: A centralized approach that calculates each server’s probability of being a CH based on computing power (originally, energy level was used), with the top servers selected as CHs. CMs join the CH with the highest received signal strength (originally, minimal communication energy was used).
• LBAS [22]: CHs are selected using an iterative approach that considers distance, number of neighbors, and computing power, with higher-scoring servers selected as CHs. A similar procedure is applied to select CMs.
• Dijkstra’s [8]: A tree topology is constructed by identifying the shortest routes with the least transmission delay from the master to each server using Dijkstra’s algorithm. Servers more than two hops away are pruned.
Notably, these benchmarks only generate the tree topology; the associated optimal task allocation is derived using the same approach as ours.
IV-B Experiment Results
In the first experiment, we compare the performance of different approaches across networks of varying sizes and topologies. Specifically, we examine a small-scale network and a large-scale network. For each network size, we randomly generate 10 different topologies by varying server locations, with each server assigned the same communication range. As shown in Fig. 1, our approach outperforms all benchmarks in both small-scale and large-scale networks. The performance advantage is more pronounced in large-scale networks, where additional servers can slow down processing if included without careful selection. Comparing the two cases, we can also see that the task completion time generally decreases as the number of servers increases, since more servers participate in the computation.
In the second experiment, we compare the performance of different approaches under varying server communication ranges. Specifically, we consider servers distributed over the simulation area, as illustrated in Fig. 2. The communication range R is configured to be the same for each server, and we vary R over a range of values. From Fig. 2, we can see that our approach outperforms all benchmarks and continues to improve as the communication range increases. This improvement occurs because, as R grows, network connectivity strengthens, since more servers fall within each other’s communication range. This expanded connectivity enables more servers to contribute to task processing, which, when properly selected, further reduces the task completion time.
V Conclusion and Future Work
In this paper, we investigated the design of network topology to improve computational efficiency for task offloading in MEC. The proposed approach, DNTD-TO, draws inspiration from communication and sensor networks and builds three-layered network structures for task offloading in an iterative, decentralized manner. Additionally, it generates the optimal task allocation for the designed topology. To evaluate its performance, we conducted various comparison studies. The simulation results show that DNTD-TO significantly outperforms existing topology design methods. In the future, we will explore network structures with more than three layers and consider networks with mobile servers.
Acknowledgment
This work was supported by the National Science Foundation under Grants CAREER-2048266 and CCRI-1730675.
[Proof of Lemma 1] It is straightforward that when x = 0, we have T* = 0. When x > 0, we can solve the problem by relaxing it to a linear programming problem as follows:

min_{x_j, T} T,  s.t. a_j x_j ≤ T, ∀ j ∈ C_i; Σ_{j ∈ C_i} x_j = x; x_j ≥ 0.
To solve the relaxed problem, the Lagrangian multiplier method [8] can be used, with the Lagrangian function given by L = T + Σ_{j ∈ C_i} λ_j (a_j x_j − T) + μ (x − Σ_{j ∈ C_i} x_j), where λ_j ≥ 0 and μ are Lagrangian multipliers. As the problem fulfills Slater’s condition [23], we can resort to the Karush-Kuhn-Tucker (KKT) conditions [24] to solve it:
(10a) ∂L/∂T = 1 − Σ_{j ∈ C_i} λ_j = 0,
(10b) ∂L/∂x_j = λ_j a_j − μ = 0, ∀ j ∈ C_i,
(10c) λ_j (a_j x_j − T) = 0, ∀ j ∈ C_i.
Since x > 0, the optimal value satisfies T > 0. Suppose that the optimal allocation assigns x_m = 0 to some server m ∈ C_i. Then, for server m, (10a)-(10c) can be rewritten as:

(11a) Σ_{j ∈ C_i} λ_j = 1,
(11b) λ_m a_m = μ,
(11c) −λ_m T = 0.

Since T > 0, we can derive from (11c) that λ_m = 0, from (11b) that μ = 0, and then from (10b) that λ_j = μ/a_j = 0 for all j ∈ C_i. However, (11a) requires Σ_{j ∈ C_i} λ_j = 1. This produces a conflicting value for Σ_j λ_j. Therefore, the optimal task allocation has to satisfy x_j > 0 and a_j x_j = T for all j ∈ C_i, which yields T* = x / Σ_{j ∈ C_i} (1/a_j) and x_j* = T*/a_j.
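As a numerical sanity check of the lemma (not part of the proof), a brute-force search over allocations should never beat the closed form T* = x / Σ_j (1/a_j): every feasible allocation's makespan stays at or above it. A sketch for three servers with hypothetical capacities:

```python
from itertools import product

def closed_form_T(a, x):
    """Minimum makespan from Lemma 1's closed form."""
    return x / sum(1.0 / aj for aj in a)

def grid_search_T(a, x, steps=200):
    """Brute-force min over allocations (x1, x2, x - x1 - x2), 3 servers."""
    best = float("inf")
    grid = [x * k / steps for k in range(steps + 1)]
    for x1, x2 in product(grid, repeat=2):
        x3 = x - x1 - x2
        if x3 < 0:
            continue
        best = min(best, max(a[0] * x1, a[1] * x2, a[2] * x3))
    return best

a, x = [1.0, 2.0, 4.0], 7.0
# the grid minimum approaches the closed form from above as steps grows
```

Because every grid point is a feasible allocation, the search can only overestimate the optimum; refining the grid tightens the gap toward T* = 4 in this example.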
References
- [1] X. Yang, Z. Chen, K. Li, Y. Sun, N. Liu, W. Xie, and Y. Zhao, “Communication-constrained mobile edge computing systems for wireless virtual reality: Scheduling and tradeoff,” IEEE Access, vol. 6, pp. 16 665–16 677, 2018.
- [2] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, “Mobile edge computing—a key technology towards 5g,” ETSI white paper, vol. 11, no. 11, pp. 1–16, 2015.
- [3] J. Yang, A. A. Shah, and D. Pezaros, “A survey of energy optimization approaches for computational task offloading and resource allocation in mec networks,” Electronics, vol. 12, no. 17, p. 3548, 2023.
- [4] J. Linderoth, M. Yoder et al., “Metacomputing and the master-worker paradigm,” Mathematics and Computer Science Division, Argonne National Laboratory, Tech. Rep. ANL/MCS-P792–0200, 2000.
- [5] B. Wang, J. Xie, K. Lu, Y. Wan, and S. Fu, “Learning and batch-processing based coded computation with mobility awareness for networked airborne computing,” IEEE Transactions on Vehicular Technology, vol. 72, no. 5, pp. 6503–6517, 2023.
- [6] H. Qi, M. Liwang, X. Wang, L. Li, W. Gong, J. Jin, and Z. Jiao, “Bridge the present and future: A cross-layer matching game in dynamic cloud-aided mobile edge networks,” IEEE Transactions on Mobile Computing, 2024.
- [7] F. Liu, J. Huang, and X. Wang, “Joint task offloading and resource allocation for device-edge-cloud collaboration with subtask dependencies,” IEEE Transactions on Cloud Computing, vol. 11, no. 3, pp. 3027–3039, 2023.
- [8] K. Ma and J. Xie, “Joint task allocation and scheduling for multi-hop distributed computing,” in ICC 2024 - IEEE International Conference on Communications, 2024, pp. 2664–2669.
- [9] R. Priyadarshi, “Energy-efficient routing in wireless sensor networks: a meta-heuristic and artificial intelligence-based approach: a comprehensive review,” Archives of Computational Methods in Engineering, vol. 31, no. 4, pp. 2109–2137, 2024.
- [10] W. R. Heinzelman, A. Chandrakasan, and H. Balakrishnan, “Energy-efficient communication protocol for wireless microsensor networks,” in Proceedings of the 33rd annual Hawaii international conference on system sciences. IEEE, 2000, pp. 10–pp.
- [11] W. Heinzelman, A. Chandrakasan, and H. Balakrishnan, “An application-specific protocol architecture for wireless microsensor networks,” IEEE Transactions on Wireless Communications, vol. 1, no. 4, pp. 660–670, 2002.
- [12] G. Koltsidas and F.-N. Pavlidou, “A game theoretical approach to clustering of ad-hoc and sensor networks,” Telecommunication Systems, vol. 47, pp. 81–93, 2011.
- [13] D. Xie, Q. Sun, Q. Zhou, Y. Qiu, and X. Yuan, “An efficient clustering protocol for wireless sensor networks based on localized game theoretical approach,” International Journal of Distributed Sensor Networks, vol. 9, no. 8, p. 476313, 2013.
- [14] G. Chen, C. Li, M. Ye, and J. Wu, “An unequal cluster-based routing protocol in wireless sensor networks,” Wireless Networks, vol. 15, pp. 193–207, 2009.
- [15] R. Logambigai, S. Ganapathy, and A. Kannan, “Energy–efficient grid–based routing algorithm using intelligent fuzzy rules for wireless sensor networks,” Computers & Electrical Engineering, vol. 68, pp. 62–75, 2018.
- [16] Y.-K. Chiang, N.-C. Wang, and C.-H. Hsieh, “A cycle-based data aggregation scheme for grid-based wireless sensor networks,” Sensors, vol. 14, no. 5, pp. 8447–8464, 2014.
- [17] H. Farman, H. Javed, J. Ahmad, B. Jan, and M. Zeeshan, “Grid-based hybrid network deployment approach for energy efficient wireless sensor networks,” Journal of Sensors, vol. 2016, no. 1, p. 2326917, 2016.
- [18] Y. Wang, Z.-Y. Ru, K. Wang, and P.-Q. Huang, “Joint deployment and task scheduling optimization for large-scale mobile users in multi-uav-enabled mobile edge computing,” IEEE transactions on cybernetics, vol. 50, no. 9, pp. 3984–3997, 2019.
- [19] C. You, K. Huang, H. Chae, and B.-H. Kim, “Energy-efficient resource allocation for mobile-edge computation offloading,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1397–1411, 2016.
- [20] F. Zhou, Y. Wu, R. Q. Hu, and Y. Qian, “Computation rate maximization in uav-enabled wireless-powered mobile-edge computing systems,” IEEE Journal on Selected Areas in Communications, vol. 36, no. 9, pp. 1927–1941, 2018.
- [21] W. B. Heinzelman, A. P. Chandrakasan, and H. Balakrishnan, “An application-specific protocol architecture for wireless microsensor networks,” IEEE Transactions on wireless communications, vol. 1, no. 4, pp. 660–670, 2002.
- [22] W. Osamy, B. Alwasel, A. Salim, A. M. Khedr, and A. Aziz, “LBAS: Load balancing aware clustering scheme for IoT-based heterogeneous wireless sensor networks,” IEEE Sensors Journal, 2024.
- [23] A. Auslender and M. Teboulle, “Lagrangian duality and related multiplier methods for variational inequality problems,” SIAM Journal on Optimization, vol. 10, no. 4, pp. 1097–1115, 2000.
- [24] Z.-Q. Luo and W. Yu, “An introduction to convex optimization for communications and signal processing,” IEEE Journal on Selected Areas in Communications, vol. 24, no. 8, pp. 1426–1438, 2006.