Abstract
With the emergence of new services such as driverless vehicles and virtual reality, mobile communication networks face problems such as heavy load and insufficient computing resources. The development of cloud, edge, and mobile edge computing provides a good solution to this problem. This paper proposes a user energy efficiency fairness task offloading algorithm for cloud-edge networks. First, a cloud-edge cooperation model is constructed. The model ensures the efficient use of user energy and jointly addresses the task offloading decision and resource allocation optimization problem. Using generalized fractional programming theory and introducing relaxation and auxiliary variables, the optimization problem is transformed into an equivalent convex problem. Next, the centralized energy efficiency fairness (CEEF) algorithm and an alternating direction method of multipliers (ADMM)-based energy efficiency fairness algorithm are implemented to obtain an optimal solution. Finally, the convergence of the CEEF- and ADMM-based energy efficiency fairness algorithms is verified through simulation. Compared with a noncooperative algorithm, the performance of the proposed method is improved by 30.76%. The proposed algorithm is verified to ensure the fairness of user energy efficiency.
1. Introduction
With the rapid advances in mobile communication, several new applications such as virtual/augmented reality, face recognition, and automated driving have been developed. These applications demand ultra-low delay and high reliability from mobile networks [1–3]. Although these new applications enhance the convenience of performing tasks, the amount of data and the energy consumed to transmit it have increased dramatically, placing a heavy processing burden on network communication equipment. Cloud-edge networks enable comprehensive collaboration of communication and computing resources, which efficiently mitigates the burden on communication equipment [4–10]. Therefore, research on cloud-edge networks is of great significance, and a considerable amount of work has been conducted in recent years.
The combination of cloud-edge networks and mobile edge computing (MEC) technology [11–15] can fulfill the networking requirements of communication, computing, and business processing, and it provides effective support for networks offering novel applications. By applying MEC technology, users in the cloud-edge network can offload computing tasks with high computing requirements and energy consumption to edge nodes. Because edge computing nodes are very close to users, data transmission pressure and link congestion can be effectively alleviated. Simultaneously, the network bandwidth demand and the transmission delay in the data computing and storage processes can be significantly reduced [16–18].
Considerable advancements have been made in mobile edge computing for cloud-edge network applications. Previous research formed an edge computing system from edge nodes and mobile devices carrying computing tasks [19]; considering whether the computing frequency of the edge access point is adjustable, different methods were used to obtain the offloading decision. A user-cooperative cloud-edge MEC model was also proposed, whose primary objective is to minimize the task processing delay when making the optimal task offloading decision [20]. Stochastic optimization theory was used to make the optimal offloading decision for the computation offloading problem [21]. Data caching in the MEC model was considered in [22], where the offloading decision minimizes the average task processing delay. These studies focused only on optimizing the task offloading decision and did not study the resource allocation problem.
In the cloud-edge network, the dense deployment of edge servers consumes considerable resources, and many studies have addressed resource allocation. For the single-cell scenario, a task offloading and resource allocation scheme was proposed to minimize the energy consumption of a wirelessly powered cloud-edge computing system [23]. Considering the user delay constraint, an access control and computing resource allocation algorithm was proposed to minimize terminal energy consumption [24]. A fairness-aware communication and computing resource allocation scheme was proposed that takes minimizing the maximum loss among users as the optimization objective [25]. For the multicell scenario, considering that a user can offload tasks to multiple edge nodes, the task allocation of the terminal was optimized to minimize the weighted sum of delay and terminal energy consumption [26]. The offloading decision, communication resource allocation (uplink and downlink bandwidth), and computing resource allocation of each user were optimized to minimize the weighted sum of energy consumption and delay of all users under user delay constraints [27]. The joint scheduling of tasks and resources for multiple edge servers in the cloud network was also analyzed [28]. Although these works consider resource management in the cloud-edge network and load balancing between edge nodes, the coupling between the user offloading decision and resource allocation is ignored, so a joint treatment of the two is still lacking.
In the cloud-edge network, the interaction between computing and communication makes the resource optimization problem very complex and difficult to solve. In addition, the network contains multiple edge nodes with large differences in communication capability, computing power, and load. It is therefore important that the nodes cooperate with each other to improve network efficiency and performance and to realize efficient utilization and sharing of resources. Accordingly, this study develops a cooperative cloud-edge network model that takes the max-min user energy efficiency as the objective function and jointly considers user offloading decisions and resource allocation.
The primary contributions of this paper are summarized as follows:
(i) A collaborative cloud-edge network model is built, and the fairness of user energy efficiency is analyzed based on the max-min criterion.
(ii) The user energy efficiency fairness optimization problem formulated in the cloud-edge network model is a mixed-integer nonconvex fractional programming problem, which is difficult to solve. Using generalized fractional programming theory, relaxation variables, and an equivalent replacement of the optimization problem, we convert it into a convex optimization problem. CEEF- and ADMM-based energy efficiency fairness algorithms are proposed to solve it.
(iii) Through simulation, the performance of the proposed CEEF- and ADMM-based energy efficiency fairness algorithms is evaluated, and their ability to guarantee user energy efficiency fairness is verified.
2. System Model
In this section, a system model for the cloud-edge network, including the network, communication, and computation models, is presented.
2.1. Network Model
As shown in Figure 1, the cloud-edge network model consists of I user terminals, M base stations, and a remote cloud server. Each base station in this network is equipped with an MEC server. The MEC server and user terminal sets are denoted by $\mathcal{M} = \{1, 2, \ldots, M\}$ and $\mathcal{I} = \{1, 2, \ldots, I\}$, respectively.

Each user terminal $i \in \mathcal{I}$ has a computing task to process, described by $T_i = (d_i, c_i)$, where $d_i$ represents the data size of the task and $c_i$ represents the number of CPU cycles required to complete it. The user terminal can choose to process the task locally or offload it to an MEC server or the cloud server. When the user terminal sends the task to an MEC server, the MEC server can process the task itself, forward it to another MEC server with richer computing resources, or further offload the task to the cloud server for processing. Table 1 summarizes the notations used in this paper and their definitions.
Binary offloading variables $x_i^{l}, x_{i,m}^{e}, x_i^{c} \in \{0, 1\}$ are defined, where $x_i^{l} = 1$ indicates that the task of user terminal $i$ is processed locally and $x_i^{l} = 0$ indicates that it is not. Similarly, $x_{i,m}^{e} = 1$ indicates that the task of user terminal $i$ is processed by MEC server $m$ and $x_{i,m}^{e} = 0$ indicates that it is not processed on MEC server $m$. Additionally, $x_i^{c} = 1$ indicates that the task of user terminal $i$ is processed by the cloud server and $x_i^{c} = 0$ indicates that it is not. The offloading decision made by the user terminal for its computing task needs to meet the following constraint:

$$x_i^{l} + \sum_{m \in \mathcal{M}} x_{i,m}^{e} + x_i^{c} = 1, \quad \forall i \in \mathcal{I}.$$
This constraint means that the computing task of each user terminal can only be processed in one of three ways: local processing, MEC server processing, or cloud server processing. This implies that, for a user terminal $i$, exactly one of the binary variables $x_i^{l}, x_{i,1}^{e}, \ldots, x_{i,M}^{e}, x_i^{c}$ is equal to 1.
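As a small illustration, one way to represent such a decision in code is a one-hot vector per user together with a check of the constraint above; the layout [local, MEC 1..M, cloud] and the function name are assumptions made for this sketch.

def make_decision(num_mec, choice):
    """Return a one-hot offloading decision [x_local, x_mec_1..M, x_cloud].
    choice: 'local', ('mec', m) with 1 <= m <= num_mec, or 'cloud'."""
    x = [0] * (num_mec + 2)
    if choice == "local":
        x[0] = 1
    elif choice == "cloud":
        x[-1] = 1
    else:
        x[choice[1]] = 1          # ('mec', m) -> position m
    assert sum(x) == 1            # exactly one processing node is selected
    return x

print(make_decision(3, ("mec", 2)))   # -> [0, 0, 1, 0, 0]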
2.2. Communication Model
In the cloud-edge network model, when user terminal $i$ sends a task to MEC server $m$, the channel gain is expressed as

$$h_{i,m} = g_{i,m} \, d_{i,m}^{-\alpha},$$

where $g_{i,m}$ is the channel power gain coefficient when user terminal $i$ sends tasks to MEC server $m$, $d_{i,m}$ is the distance between user terminal $i$ and MEC server $m$, and $\alpha$ is the path loss factor.
Assuming that the user moves only slightly during the offloading computation, $h_{i,m}$ can be considered constant. The uplink transmission rate between user terminal $i$ and MEC server $m$ is

$$r_{i,m} = B \log_2 \left( 1 + \frac{p_i h_{i,m}}{\sigma^2} \right),$$

where $B$ is the available spectrum bandwidth, $p_i$ is the uplink transmission power of user terminal $i$, and $\sigma^2$ is the noise power.
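For illustration, the following Python sketch evaluates the channel gain and uplink rate defined above; only the 15 MHz bandwidth comes from the simulation setup in Section 4, while the gain coefficient, path loss factor, and noise power values are illustrative assumptions.

import math

def uplink_rate(bandwidth_hz, tx_power_w, distance_m, noise_power_w,
                gain_coeff=1e-3, path_loss_factor=3.0):
    """Shannon uplink rate r = B * log2(1 + p * h / sigma^2) with h = g * d^(-alpha)."""
    channel_gain = gain_coeff * distance_m ** (-path_loss_factor)
    snr = tx_power_w * channel_gain / noise_power_w
    return bandwidth_hz * math.log2(1.0 + snr)   # bits per second

# Example: 15 MHz bandwidth, 0.2 W transmit power, 100 m link, 1e-10 W noise
print(uplink_rate(15e6, 0.2, 100.0, 1e-10))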
2.3. Computation Model
The local computing capacity of user terminal $i$ is represented by $f_i^{l}$, and the power consumed during local computing is represented by $p_i^{l}$. Therefore, when user terminal $i$ processes the task locally, the calculation delay can be expressed as

$$t_i^{l} = \frac{c_i}{f_i^{l}}.$$
The energy consumed can be expressed as

$$e_i^{l} = p_i^{l} t_i^{l} = \frac{p_i^{l} c_i}{f_i^{l}}.$$
When user terminal $i$ sends the task to MEC server $m$ for processing, the computing capacity of MEC server $m$ is represented by $F_m$, and the computing resources allocated to user terminal $i$ are represented by $f_{i,m}$. When the user terminal sends a task, the transmission delay can be expressed as

$$t_{i,m}^{tr} = \frac{d_i}{r_{i,m}}.$$
The energy consumed during transmission by user terminal $i$ can be expressed as

$$e_{i,m}^{tr} = p_i t_{i,m}^{tr} = \frac{p_i d_i}{r_{i,m}}.$$
Accordingly, the calculation delay of MEC server $m$ for task processing can be expressed as

$$t_{i,m}^{exe} = \frac{c_i}{f_{i,m}}.$$
When user terminal $i$ offloads the task to MEC server $m$, server $m$ can choose to process the task itself or send it to another MEC server with richer computing resources. Let $\tau_{m,n}$ denote the average round-trip time of task forwarding between MEC server $m$ and another MEC server $n$; when $m = n$, $\tau_{m,n} = 0$. Therefore, when user terminal $i$ transmits a task to MEC server $n$ through MEC server $m$ for processing, the total delay is divided into three parts: the transmission delay between user terminal $i$ and MEC server $m$, the forwarding delay between MEC servers $m$ and $n$, and the calculation delay after MEC server $n$ receives the task. The total delay can therefore be expressed as

$$t_{i,m,n}^{e} = t_{i,m}^{tr} + \tau_{m,n} + \frac{c_i}{f_{i,n}}.$$
When user terminal $i$ offloads a task to the cloud server for processing, the cloud computing capacity allocated to it is represented by $f_i^{c}$, the average round-trip time of task transmission between MEC server $m$ and the cloud server is represented by $\tau_m^{c}$, and the processing delay of the task in the cloud server can be expressed as

$$t_i^{c,exe} = \frac{c_i}{f_i^{c}}.$$
The size of the returned data after processing is considerably smaller than that before processing; hence, the transmission delay of the result is negligible. When the cloud server processes the task of user terminal $i$, the total delay of the entire process can be expressed as

$$t_i^{c} = t_{i,m}^{tr} + \tau_m^{c} + \frac{c_i}{f_i^{c}}.$$
To sum up, with $m$ denoting the MEC server to which user terminal $i$ uploads its task, the total delay of user terminal $i$ in task processing can be expressed as

$$t_i = x_i^{l} t_i^{l} + \sum_{n \in \mathcal{M}} x_{i,n}^{e} t_{i,m,n}^{e} + x_i^{c} t_i^{c}.$$
The total energy consumption of user terminal $i$ can be expressed as

$$e_i = x_i^{l} e_i^{l} + \left( \sum_{n \in \mathcal{M}} x_{i,n}^{e} + x_i^{c} \right) e_{i,m}^{tr}.$$
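To make the delay and energy model concrete, the following Python sketch computes the per-user delay and user-side energy for the three processing options described above. The parameter values, function names, and the assumption that the user only spends transmission energy when offloading follow the model as reconstructed here and are illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float      # d_i: task data size in bits
    cpu_cycles: float     # c_i: CPU cycles required

def local_cost(task, f_local, p_local):
    """Delay and energy when the task is processed locally."""
    t = task.cpu_cycles / f_local
    return t, p_local * t

def mec_cost(task, rate, p_tx, f_alloc, forward_rtt=0.0):
    """Delay and user energy when the task is offloaded to an MEC server
    (optionally forwarded to another MEC server with round-trip time forward_rtt)."""
    t_tx = task.data_bits / rate
    t = t_tx + forward_rtt + task.cpu_cycles / f_alloc
    return t, p_tx * t_tx          # the user only spends energy on transmission

def cloud_cost(task, rate, p_tx, f_cloud, cloud_rtt):
    """Delay and user energy when the task is offloaded to the cloud server."""
    t_tx = task.data_bits / rate
    t = t_tx + cloud_rtt + task.cpu_cycles / f_cloud
    return t, p_tx * t_tx

# Example with values close to the simulation setup in Section 4
task = Task(data_bits=0.2e6, cpu_cycles=0.3e9)
print(local_cost(task, f_local=0.6e9, p_local=0.5))
print(mec_cost(task, rate=20e6, p_tx=0.2, f_alloc=1e9, forward_rtt=0.01))
print(cloud_cost(task, rate=20e6, p_tx=0.2, f_cloud=10e9, cloud_rtt=0.05))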
3. User Energy Efficiency Fairness Resource Allocation Algorithm
First, this section formulates the joint resource allocation and task offloading optimization problem based on the max-min criterion and then solves it using the CEEF and ADMM-based algorithms, respectively.
3.1. Problem Formulation
The max-min criterion is an effective means to ensure fairness among all users. Hence, it is used to construct the joint resource allocation and task offloading optimization problem. The offloading decision vector of user terminal $i$ is expressed as $\mathbf{x}_i = [x_i^{l}, x_{i,1}^{e}, \ldots, x_{i,M}^{e}, x_i^{c}]$, and the joint optimization problem is expressed as follows:
Equation (14) is the objective function of joint resource allocation and task offloading, equation (15) is the delay requirement of the user terminal in the task offloading process, equation (16) states that the computing resources allocated by a base station cannot exceed its maximum computing capacity, and equations (17) and (18) indicate that only one node can be selected to compute each task.
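Although the displayed equations (14)–(18) are omitted from the text, the structure described above corresponds to a max-min problem of the following schematic form, where $\eta_i$ denotes user $i$'s energy efficiency and $T_i^{\max}$ is a placeholder symbol for the delay requirement (both introduced here only for illustration):

$$\max_{\{\mathbf{x}_i\},\,\{f_{i,m}\},\,\{f_i^{c}\}} \; \min_{i \in \mathcal{I}} \; \eta_i \quad \text{s.t.} \quad t_i \le T_i^{\max} \;\; \forall i, \quad \sum_{i \in \mathcal{I}} f_{i,m} \le F_m \;\; \forall m, \quad x_i^{l} + \sum_{m \in \mathcal{M}} x_{i,m}^{e} + x_i^{c} = 1, \quad x_i^{l}, x_{i,m}^{e}, x_i^{c} \in \{0,1\}.$$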
3.2. Solutions Using the CEEF Algorithm
The user energy efficiency fairness optimization problem based on the max-min criterion is a mixed-integer nonconvex fractional programming problem. First, it is transformed into an equivalent mixed-integer nonconvex optimization problem in subtractive form using generalized fractional programming theory. Let the variable Q represent the optimal value of the optimization problem, that is, the maximum of the minimum user energy efficiency. Using generalized fractional programming theory and introducing a relaxation variable, the problem can be transformed into an equivalent subtractive form.
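The subtractive transformation is the classic generalized fractional programming (Dinkelbach-type) idea: for a fixed Q, maximize the minimum of (benefit − Q × energy) over users and update Q until this optimum reaches zero. A minimal Python sketch of that outer loop is shown below with a toy inner solver standing in for the convex subproblem; the function names and numbers are illustrative assumptions, not the paper's exact procedure.

def dinkelbach_max_min_ee(users, solve_inner, tol=1e-6, max_iter=50):
    """Generalized fractional programming (Dinkelbach-style) outer loop for
    max-min energy efficiency.  `solve_inner(q)` must return, for fixed q,
    decision variables maximizing min_i (benefit_i - q * energy_i) together
    with the per-user benefits and energies at that solution."""
    q = 0.0
    for _ in range(max_iter):
        decision, benefits, energies = solve_inner(q)
        value = min(b - q * e for b, e in zip(benefits, energies))
        if abs(value) < tol:          # optimality condition F(q) ~= 0
            break
        q = min(b / e for b, e in zip(benefits, energies))  # update q to current min EE
    return decision, q

# Toy inner solver: two users, fixed benefits/energies (placeholder for the convex subproblem)
def toy_inner(q):
    benefits, energies = [4.0, 6.0], [1.0, 2.0]
    return None, benefits, energies

print(dinkelbach_max_min_ee([0, 1], toy_inner))   # converges to q = min-user EE = 3.0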
As $x_i^{l}$, $x_{i,m}^{e}$, and $x_i^{c}$ are binary variables, they can be relaxed to continuous variables in $[0, 1]$. The optimization problem is further modified as follows:
The objective function in equation (26) is linear, and equations (28)–(31) are linear constraints. Substituting equation (27), the total delay of the user terminal can be written in an expanded form.
The resulting product terms couple the relaxed offloading variables with the allocated computing resources. Auxiliary variables are therefore introduced to decouple them, and the problem is reformulated accordingly, where the introduced constant is a small positive number close to 0. To solve this problem, the CEEF algorithm is proposed; its steps are shown in Algorithm 1.
Algorithm 1: The CEEF-based energy efficiency fairness algorithm.
As the offloading decision variables are relaxed into continuous variables, they need to be mapped back to binary values after the optimal solution is obtained by applying a recovery (rounding) scheme, and all the other variables are recovered in the same way.
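For illustration, a common way to recover a binary one-hot decision from its relaxed values is to round the largest entry to 1 (an argmax rule). The sketch below implements that rule under an assumed variable layout; it is a typical choice, not necessarily the exact scheme of the omitted equation.

def recover_binary_decision(relaxed):
    """relaxed: per-user list of relaxed offloading values
    [local, mec_1, ..., mec_M, cloud] in [0, 1].  Returns a one-hot
    binary decision by rounding the largest entry to 1 (argmax rule)."""
    best = max(range(len(relaxed)), key=lambda k: relaxed[k])
    return [1 if k == best else 0 for k in range(len(relaxed))]

print(recover_binary_decision([0.10, 0.65, 0.15, 0.10]))   # -> [0, 1, 0, 0]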
3.3. Solutions Using the ADMM Algorithm
The ADMM algorithm is a standard method to solve large-scale optimization problems. Combined with CEEF, it can effectively achieve user energy efficiency fairness.
First, the three sets of optimization variables are copied and redefined as corresponding local variables to form the following consensus formulation:
Equation (33) can be equivalently modified as follows:
Therefore, the augmented Lagrangian function of equation (43) can be expressed in a form where three sets of Lagrange multipliers are associated with the copied variables and the augmented Lagrange parameter is a constant; this constant controls the convergence of the ADMM iterations. To facilitate the solution for the target variables, the Lagrange multiplier terms are simplified, and the augmented Lagrangian function is equivalently transformed into the following:
When ADMM is used to solve the problem, the variables are updated through multiple iterations. Let the superscript $t$ denote the value of a variable at the $t$th iteration. The first step is to update the local variables, which is expressed as follows:
This problem can be decomposed into M parallel subproblems, each of which is solved separately using convex optimization.
The second step is updating the global variable using the following formula:
The above problem is an unconstrained convex quadratic problem. Taking the derivative with respect to the global variables and setting it to zero yields the following equation:
The optimal solution for the global variables is then obtained as follows:
The third step is to update the Lagrange multipliers as follows:
The fourth step verifies the iteration termination condition, in which the two thresholds represent the iteration stopping tolerances for the feasibility conditions.
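The four steps above follow the standard consensus-ADMM template. The sketch below shows that generic template on a toy quadratic problem; the objective, variable names, and penalty value are illustrative assumptions, while the paper's actual subproblems are the convex problems defined above.

def consensus_admm(a, rho=1.0, eps_pri=1e-6, eps_dual=1e-6, max_iter=200):
    """Generic consensus ADMM on a toy problem:
       minimize sum_i (x_i - a_i)^2   subject to x_i = z for all i.
    Step 1: parallel local x-updates; Step 2: global z-update;
    Step 3: dual (multiplier) update; Step 4: residual-based stopping test."""
    n = len(a)
    x = [0.0] * n          # local variables
    u = [0.0] * n          # scaled dual variables (multipliers / rho)
    z = 0.0                # global (consensus) variable
    for _ in range(max_iter):
        # Step 1: local updates, each subproblem solvable independently
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        # Step 2: global update (closed form: average of x_i + u_i)
        z_old, z = z, sum(x[i] + u[i] for i in range(n)) / n
        # Step 3: dual updates
        u = [u[i] + x[i] - z for i in range(n)]
        # Step 4: stopping criterion on primal and dual residuals
        r_pri = sum((x[i] - z) ** 2 for i in range(n)) ** 0.5
        r_dual = rho * abs(z - z_old) * n ** 0.5
        if r_pri < eps_pri and r_dual < eps_dual:
            break
    return x, z

print(consensus_admm([1.0, 2.0, 6.0]))   # z converges to the average, 3.0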
The ADMM-based energy efficiency fairness algorithm described above is summarized in Algorithm 2.
Algorithm 2: The ADMM-based energy efficiency fairness algorithm.
As in the CEEF algorithm, the relaxed offloading decision variables are mapped back to binary values after the optimal solution is obtained, using the same recovery scheme, and all the other variables are recovered in the same way.
Complexity Analysis: when the CEEF algorithm is used to solve the problem centrally, its complexity is determined by the size of the convex problem, that is, by the numbers of user terminals and MEC servers. For the proposed ADMM-based distributed algorithm, the per-iteration computational cost consists of the local variable update, the global variable update, and the Lagrange multiplier update, so the computational complexity of each iteration is the sum of these three parts.
4. Simulation Results
In this section, the proposed algorithms are compared through simulation experiments. The simulation scenario consists of 3 base stations and 27 randomly placed users. The radius covered by each base station is 500 m, and the users are randomly distributed in the overlapping area covered by the base stations. The transmission link bandwidth in the system is 15 MHz, the noise power is dBW, and the path loss is modeled as a function of d, the distance between the MEC server and the user terminal. The computing capacities of the user terminal, base station, and cloud server are 0.6 Gcycles/s, 5 Gcycles/s, and 10 Gcycles/s, respectively. For performance comparison, a noncooperative scheme [29] is simulated in addition to the CEEF and ADMM algorithms. In the noncooperative scheme, computing tasks can be processed by the user terminal, an MEC server, or the cloud server, but cooperation between MEC servers is not considered: computing tasks cannot be forwarded to other MEC servers, and the max-min user energy efficiency cannot be achieved by optimizing resource allocation.
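For reference, the stated simulation parameters can be gathered into a small configuration object like the sketch below; the noise power value is not given in the text above, so that field is left as an unset placeholder.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SimulationConfig:
    num_base_stations: int = 3
    num_users: int = 27
    cell_radius_m: float = 500.0
    bandwidth_hz: float = 15e6               # 15 MHz transmission link bandwidth
    noise_power_dbw: Optional[float] = None  # value not stated above; placeholder
    user_tx_power_w: tuple = (0.2, 0.5)      # transmit powers used in Figures 3-5
    f_local_cycles: float = 0.6e9            # user terminal: 0.6 Gcycles/s
    f_mec_cycles: float = 5e9                # base station MEC server: 5 Gcycles/s
    f_cloud_cycles: float = 10e9             # cloud server: 10 Gcycles/s

print(SimulationConfig())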
Figure 2 shows the convergence processes of the CEEF, ADMM, and noncooperative algorithms. The abscissa represents the number of iterations, and the ordinate represents the energy efficiency. The simulation shows that, as the number of iterations increases, the three schemes each stabilize at a fixed value, which verifies the convergence of the algorithms. The performance of the CEEF and ADMM algorithms is 30.76% higher than that of the noncooperative scheme. This is because computing tasks in the noncooperative scheme cannot be forwarded among MEC servers, which increases energy consumption and decreases energy efficiency.

Figure 3 shows the impact of the task's required computation on user energy efficiency under the CEEF, ADMM, and noncooperative algorithms when the transmission power of the user terminal is 0.2 W and 0.5 W. The abscissa represents the computation required by the task, and the ordinate represents the energy efficiency. The energy efficiency of the three schemes gradually decreases as the required computation increases, because more computation consumes more energy. The figure also shows that the energy efficiency trends of the CEEF and ADMM algorithms are consistent and higher than that of the noncooperative scheme. We can hence conclude that the cooperative schemes outperform the noncooperative scheme under varied computational loads.

Figure 4 shows the impact of the task data size on user energy efficiency under the CEEF, ADMM, and noncooperative algorithms when the transmission power of the user terminal is 0.2 W and 0.5 W. The abscissa represents the data size of the task, and the ordinate represents the energy efficiency. As the data size of the task increases, the energy efficiency of the three algorithms gradually decreases, because the system consumes more energy to process larger tasks. The figure also shows that the energy efficiency trends of the CEEF and ADMM algorithms are consistent and higher than that of the noncooperative scheme. We can hence conclude that the CEEF and ADMM algorithms outperform the noncooperative scheme for varied data sizes.

Figure 5 shows the impact of the user terminal's transmission power on user energy efficiency under the CEEF, ADMM, and noncooperative algorithms when the task data size is 0.2 Mb and 0.5 Mb. The abscissa represents the transmission power, and the ordinate represents the energy efficiency. As the transmission power increases, the energy efficiency of the three schemes gradually decreases, because the energy consumed for transmission grows faster than the transmission rate. The figure also shows that the energy efficiency trends of the CEEF and ADMM algorithms are consistent and higher than that of the noncooperative scheme. We can conclude that the CEEF and ADMM algorithms perform better than the noncooperative algorithm as the user transmission power varies.

Figure 6 compares the cases with and without fairness consideration. The two cases differ only in the optimization objective: without fairness, the objective is to maximize the total energy efficiency, that is, the ratio of the total user rate to the total user energy consumption. The figure shows that when fairness is not considered, the energy efficiency gap between the best and worst users is relatively large. We can conclude that the max-min user energy efficiency objective considered in this paper better ensures fair access to resources for all users.
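In symbols, and assuming the per-user energy efficiency is the corresponding per-user ratio of rate $r_i$ to energy consumption $e_i$, the two objectives compared in Figure 6 can be written schematically as

$$\text{without fairness:}\quad \max \frac{\sum_{i \in \mathcal{I}} r_i}{\sum_{i \in \mathcal{I}} e_i}, \qquad \text{with fairness (this paper):}\quad \max \min_{i \in \mathcal{I}} \frac{r_i}{e_i}.$$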

5. Conclusion
This paper develops a cloud-edge network model and analyzes the fairness of user energy efficiency based on the max-min criterion. The user energy efficiency fairness optimization problem formulated in this model is a mixed-integer nonconvex fractional programming problem, which is difficult to solve. Through generalized fractional programming theory, relaxation variables, and equivalent replacement, the problem is transformed into a convex optimization problem. Accordingly, CEEF- and ADMM-based energy efficiency fairness algorithms are proposed to solve it. Simulations are performed to evaluate the proposed algorithms and verify that they can guarantee user energy efficiency fairness.
Data Availability
Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported in part by a Subproject of the National Key Research and Development Plan in 2020 (Grant No. 2020YFC1511704), the Beijing Science and Technology Project (Grant No. Z211100004421009), Beijing Information Science & Technology University (Grant Nos. 2020KYNH212 and 2021CGZH302), and in part by the National Natural Science Foundation of China (Grant No. 61971048).