Abstract
With the development of the mobile Internet, smart mobile terminals have become an indispensable tool in people's lives, and mobile applications are becoming ever more powerful. This research discusses a dynamic resource allocation strategy for the mobile edge cloud computing environment. The physical resource layer in the network model is responsible for providing the concrete resources that are actually available, such as hardware, computing, and storage resources; it mainly comprises base stations, mobile edge computing servers, spectrum, power, and the other basic communication components of the system supplied by different infrastructure providers (InPs). The functions of the virtual machine monitor include resource virtualization and resource management. As an important component of wireless network virtualization, virtual machine monitors are usually deployed in physical base stations to provide physical resources and to manage the connections between virtual base stations. In the service cache model, a service is an application requested by users that runs on the mobile edge computing server at the base station or in the cloud. Computing task scheduling in the mobile edge environment can be cast as a wireless interaction model that captures user throughput in cellular network interaction. The physical layer channel access strategy (CDMA) allows all mobile users to share the same spectrum resources efficiently at the same time. When the preference coefficient for task energy consumption varies between 0.35–0.55 and 0.65–1, the range over which RAOM achieves the highest maximum system efficiency accounts for 55% of the entire range. This research contributes to the reasonable allocation of resources, and the mobile edge computing model improves fairness among users at a lower transmission cost.
1. Introduction
As cloud computing has prospered and developed, its inherent problems have gradually emerged. The first is data privacy: protecting the private data stored with cloud service providers from illegal use requires not only technological but also legal improvement. The second is data security: some of the data stored with a cloud service provider is a key business secret, related to the survival and development of the enterprise, and data security therefore affects the application and popularization of cloud computing in enterprises. The most important issue is network delay: the network connection between a cloud computing node and the enterprise or user is not always stable, and too many users requesting to communicate with the cloud server at the same time will cause channel congestion, increase the waiting time of the task queue, and reduce users' work efficiency and experience.
Mobile edge cloud computing is the natural product of this development trend. It narrows the physical distance between mobile terminals and servers by migrating servers from centralized data centers to the decentralized edge of the mobile network. On the one hand, this reduces the pressure on the backbone network and the transmission delay of the network; on the other hand, it shares the heavy load of centralized servers. The approach has therefore received widespread attention from industry and academia. With the vigorous development of 5G, the Internet of Things, artificial intelligence, big data, and other technologies, the role of mobile edge computing has become more prominent, helping to overcome development bottlenecks in these fields and providing strong support for their development.
Mobile edge computing (MEC) is currently being standardized as a new paradigm that is expected to enrich future broadband communication networks. Vallati C believes that, with the help of MEC, the capabilities of traditional networks can be enhanced by placing cloud-computing-like functions in the radio access network, in MEC servers close to the end user. His research enables applications and services to be implemented near the terminal, but it lacks test data [1]. Kang studied the current state of cloud processing methods and investigated computation partition strategies that make effective use of cycles on both the cloud and mobile devices to achieve low latency, low energy consumption, and high data center throughput. His research uses 8 smart applications and the latest deep neural network technology, but it still lacks data support [2]. Chen W believes that mobile edge cloud computing (MECC) has become an attractive solution for enhancing the computing and storage capacity of mobile devices (MDs) by leveraging the available resources at the edge of the network. He considers computation offloading on a mobile edge cloud composed of a group of wireless devices (WDs), each equipped with an energy harvesting device that can collect renewable energy from the environment. He first formulated the multiuser, multitask computation offloading problem for the green MECC and then used the Lyapunov optimization method to determine the energy harvesting strategy (how much energy each WD needs to collect) and the task offloading schedule (which offloading requests are admitted into the mobile edge cloud, which WD each admitted request is assigned to, and how much workload is processed at the assigned WD). His research put forward the concept of the MD but did not describe its performance [3]. Rimal et al. researched the performance improvement of a centralized cloud and an integrated fiber-wireless (FiWi) access network supporting MEC. They proposed a novel unified resource management solution that incorporates both centralized cloud and MEC computation offloading activities into the underlying FiWi dynamic bandwidth allocation process. By using time division multiple access, both MEC and cloud traffic are scheduled outside the transmission time slots of FiWi traffic. They developed an analytical framework for the packet delay and response time efficiency of cloud and broadband access traffic. However, the integrated fiber-wireless scheme they proposed has no specific performance test indicators, and its presentation lacks logical rigor [4].
In the network model of this paper, the physical resource layer is responsible for providing the concrete resources that are actually available, such as hardware, computing, and storage resources; it mainly comprises base stations, mobile edge computing servers, spectrum, power, and the other basic communication components of the system supplied by different infrastructure providers (InPs). The functions of the virtual machine monitor include resource virtualization and resource management. As an important component of wireless network virtualization, virtual machine monitors are usually deployed in physical base stations to provide physical resources and to manage the connections between virtual base stations. In the service cache model, a service is an application requested by users that runs on the mobile edge computing server at the base station or in the cloud. Computing task scheduling in the mobile edge environment can be cast as a wireless interaction model that captures user throughput in cellular network interaction. The physical layer channel access strategy (CDMA) allows all mobile users to share the same spectrum resources efficiently at the same time.
2. Dynamic Resource Allocation Strategy
2.1. Cloud Computing
The advent of the Internet of Things and 5G applications requires integrating centralized cloud computing and the emerging mobile edge computing (MEC) with existing network infrastructure, so that storage, processing, and caching functions are enhanced not only in a centralized but also in a distributed manner, supporting both delay-tolerant and mission-critical applications [5, 6]. Reducing total power consumption and network latency are the most pressing issues for large-scale mobile cloud computing (MCC) systems and for their ability to meet service level agreements (SLAs) [7]. Such a system uses cloud computing infrastructure to offload some of the user's heavy computing tasks to the cloud data center [8]. However, the delay caused by this offloading process has led to the use of servers (called "cloudlets") placed physically near the user, creating the so-called mobile edge computing (MEC). Cloudlet-based infrastructure faces challenges, such as the limited capability of a cloudlet system to satisfy different types of requests from users over a broad geographic area. To meet users' needs for different types of services over a wide geographical area, cloudlets cooperate with each other by passing user requests from one cloudlet to another. This cooperation affects both power consumption and latency [9, 10].
2.2. Mobile Edge Cloud Computing
In edge cloud computing, mobile users need to schedule tasks to the MEC server through the wireless channel provided by the mobile operator [11, 12].
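Because all offloading users share this channel, the achievable uplink rate is interference-coupled. A standard form for it in this kind of CDMA model, stated here for reference with generic symbols rather than the paper's own, is

$$r_n(a) = B \log_2\!\left(1 + \frac{p_n g_n}{\sigma^2 + \sum_{m \in N \setminus \{n\}:\, a_m = 1} p_m g_m}\right),$$

where $B$ is the channel bandwidth, $p_n$ and $g_n$ are user $n$'s transmit power and channel gain to the base station, $\sigma^2$ is the background noise power, and the sum collects the interference from all other users who also choose to offload ($a_m = 1$).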
This research also ignores the time and energy consumed in returning results from the MEC cloud to the user: most scenarios and applications in mobile edge computing, such as the Internet of Vehicles, facial recognition, and virtual reality, have large input data, but the result set that the MEC returns to the user after computing on that input is of a much lower magnitude than the input data, so this part of the cost is not included in the total cost calculation [13]. When a user determines its own game scheduling strategy, the scheduling strategy set of all users excluding user n is denoted as follows.
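In standard game-theoretic notation, this profile is written

$$a_{-n} = (a_1, \ldots, a_{n-1}, a_{n+1}, \ldots, a_N),$$

so that the full strategy profile can be decomposed as $a = (a_n, a_{-n})$ when user $n$'s choice is analyzed in isolation.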
We can obtain mathematical expressions for the computing cost in the two cases of local and edge cloud users [14, 15].
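A weighted time-energy cost model consistent with this description, with generic symbols $\gamma$, $t$, and $e$ rather than the paper's own, is

$$Z_n^{\mathrm{local}} = \gamma_n^{T}\, t_n^{\mathrm{local}} + \gamma_n^{E}\, e_n^{\mathrm{local}}, \qquad
Z_n^{\mathrm{edge}}(a) = \gamma_n^{T}\bigl(t_n^{\mathrm{tx}}(a) + t_n^{\mathrm{exec}}\bigr) + \gamma_n^{E}\, e_n^{\mathrm{tx}}(a),$$

where $\gamma_n^{T}$ and $\gamma_n^{E}$ are the user's time and energy preference weights (the $M_1$ and $M_2$ of Section 4.1), and the transmission time and energy, $t_n^{\mathrm{tx}}(a) = D_n / r_n(a)$ and $e_n^{\mathrm{tx}}(a) = p_n\, t_n^{\mathrm{tx}}(a)$, depend on the full strategy profile $a$ through the interference-coupled rate $r_n(a)$ above.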
This set is the set of strategy profiles at which all users are at the Nash equilibrium point of the edge cloud computing task scheduling problem; any user changing its own strategy within this set will suffer a decline in its own profit [16].
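Formally, a strategy profile $a^{*} = (a_1^{*}, \ldots, a_N^{*})$ is a Nash equilibrium of the scheduling game when no user can reduce its own cost by a unilateral deviation:

$$Z_n\bigl(a_n^{*}, a_{-n}^{*}\bigr) \le Z_n\bigl(a_n, a_{-n}^{*}\bigr) \quad \text{for every strategy } a_n \text{ and every user } n \in N.$$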
At the Nash equilibrium point, the interests of the user group are guaranteed while the relative interests of individual users are improved. If a user whose strategy in the Nash equilibrium set is to schedule its task to the cloud could improve its personal payoff by switching to local operation, that would contradict the nature of the Nash equilibrium; it can therefore be inferred that the users who choose to schedule tasks to the MEC in the multiuser computing scheduling game of this section are all effective edge cloud users [17, 18]. The Nash equilibrium has the property that individual users remain stable: under Nash equilibrium, users reach mutually satisfactory game solutions, so no user in the game will change its strategy for its own interest. This property is very important for the model in this research [19].
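For illustration only, the following minimal Python sketch shows how best-response updates of this kind settle at a Nash equilibrium under the weighted time-energy cost model sketched above; it is not the paper's algorithm, and all parameter values and helper names are hypothetical.

```python
import math
import random

random.seed(1)

N = 15                       # users with tasks to schedule
B = 5e6                      # channel bandwidth (Hz)
SIGMA2 = 1e-10               # background noise power (W)
F_LOCAL = 1e9                # local CPU speed (cycles/s)
F_EDGE = 10e9                # total MEC CPU speed, shared by offloaders
G_T, G_E = 0.5, 0.5          # time/energy preference weights (M1, M2)

# Per-user task data: input bits, CPU cycles, tx power (W), channel gain.
D = [random.uniform(50e3, 5e6) * 8 for _ in range(N)]
C = [d * 100 for d in D]
P = [random.uniform(0.05, 0.10) for _ in range(N)]
G = [random.uniform(1e-7, 1e-6) for _ in range(N)]

def local_cost(n):
    t = C[n] / F_LOCAL
    e = 1e-27 * F_LOCAL ** 2 * C[n]   # energy = k * f^2 * cycles (assumed model)
    return G_T * t + G_E * e

def edge_cost(n, offload):
    # CDMA-style sharing: every other offloading user interferes with user n.
    interference = sum(P[m] * G[m] for m in offload if m != n)
    rate = B * math.log2(1 + P[n] * G[n] / (SIGMA2 + interference))
    t_tx = D[n] / rate
    t_exec = C[n] / (F_EDGE / len(offload))   # edge CPU split among offloaders
    return G_T * (t_tx + t_exec) + G_E * P[n] * t_tx

# Best-response dynamics: starting from all-local, each user switches to
# whichever side is cheaper given everyone else's current choice.
offload = set()
for _ in range(100):
    changed = False
    for n in range(N):
        prefers_edge = edge_cost(n, offload | {n}) < local_cost(n)
        if prefers_edge != (n in offload):
            offload ^= {n}       # toggle user n between edge and local
            changed = True
    if not changed:
        break

print(f"effective edge users at equilibrium: {sorted(offload)}")
```

In a potential game of the kind analyzed in this section, each profitable unilateral switch also decreases the potential function, which is why such a loop terminates at a Nash equilibrium rather than oscillating.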
2.3. Computing Task Transmission
First, the downlink transmission signal of the control node CU is formed by superimposing the precoded signals intended for the individual cooperating nodes [20, 21].
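In standard ZFBF notation, stated here for reference since the paper's own symbols were not preserved, this superposition can be written

$$x = W s = \sum_{i=1}^{K} w_i s_i,$$

where $s_i$ is the data symbol intended for the $i$-th cooperating node, $w_i$ is the $i$-th column of the precoding matrix $W$, and $K$ is the number of cooperating nodes in the group.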
To describe the transmission model in more detail, we first discuss the signal received by a cooperating node.
Through further analysis, it is found that, after decomposing the diagonal matrix, the original expression can be written in the following form [14].
The ZFBF matrix W is generated according to the CSI conditions of all cooperating nodes, so the ZFBF matrix applied at the control node can be obtained from that CSI [22, 23].
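A standard zero-forcing construction consistent with this description, given here as an assumed form, is the right pseudo-inverse of the aggregate downlink channel matrix $H$ from the control node to the cooperating nodes:

$$W = H^{H}\left(H H^{H}\right)^{-1},$$

so that $HW = I$ and each cooperating node receives its own data stream free of intragroup interference.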
Among them, the channel coefficients are obtained from the downlink CSI feedback of each cooperating node.
After a series of processing steps at the cooperating node, the corresponding received signal can be expressed accordingly.
The corresponding SINR can then be expressed accordingly.
After that, Shannon's formula yields the maximum downlink transmission rate that each cooperating node can achieve.
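In its standard form,

$$r_i = B \log_2\!\left(1 + \mathrm{SINR}_i\right),$$

where $B$ is the channel bandwidth and $\mathrm{SINR}_i$ is the signal-to-interference-plus-noise ratio at the $i$-th cooperating node obtained above.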
It is assumed that the uplink transmission uses the same channel as the downlink transmission [24]. The uplink signal transmitted by a cooperating node to the control node CU follows accordingly.
Among them, the power term represents the transmit power of the uplink transmission. During uplink transmission, interference between cooperative groups is ignored. After the control node CU decodes the information transmitted by the cooperating nodes through successive interference cancellation (SIC), the corresponding SINR can be expressed in the following form.
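A standard SIC decoding expression consistent with this description, with generic symbols since the original equation was not preserved, decodes the cooperating nodes in order and cancels each decoded signal before decoding the next:

$$\mathrm{SINR}_i^{\mathrm{ul}} = \frac{p_i^{\mathrm{ul}}\,\lvert h_i\rvert^{2}}{\sigma^{2} + \sum_{j>i} p_j^{\mathrm{ul}}\,\lvert h_j\rvert^{2}},$$

where $p_i^{\mathrm{ul}}$ is the uplink transmit power of the $i$-th cooperating node, $h_i$ its uplink channel, and the signals of nodes already decoded ($j < i$) have been cancelled and no longer appear as interference.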
The architecture of mobile cloud computing is shown in Figure 1.

3. Dynamic Resource Allocation Strategy Experiment
3.1. Network Model Design
3.1.1. Physical Resource Layer
The physical resource layer is responsible for providing the concrete resources that are actually available, such as hardware, computing, and storage resources; it mainly comprises base stations, mobile edge computing servers, spectrum, power, and the other basic communication components of the system supplied by different infrastructure providers (InPs) [25]. In this framework, there are M InPs providing access services in total. Each cellular network deploys one base station, managed by one InP. Each base station is assumed to have a mobile edge computing server deployed, and each server has both caching and computing capacity. For the m-th mobile edge computing server, its storage space is denoted Cm (in bits), which can be used to cache service data (such as a library or database used by a certain type of service), and its maximum CPU computing capability is denoted Fm (in Hertz, i.e., CPU cycles per second), which is used for the computation offloading of mobile users. As for spectrum, the licensed spectrum owned by different InPs is mutually orthogonal, so there is no interference between base stations of different InPs.
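To make this bookkeeping concrete, a minimal Python sketch of the physical resource layer is given below; all names and values are hypothetical, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MecServer:
    """MEC server co-located with the base station of one InP."""
    cache_bits: float                 # storage space C_m (bits) for service data
    cpu_hz: float                     # maximum CPU capability F_m (cycles/s)
    cached_services: set = field(default_factory=set)

@dataclass
class InP:
    """Infrastructure provider: one base station with one MEC server,
    on licensed spectrum orthogonal to every other InP's spectrum."""
    spectrum_hz: float
    server: MecServer

# Example: M = 3 InPs, each deploying one base station and MEC server.
inps = [InP(spectrum_hz=20e6,
            server=MecServer(cache_bits=8e9, cpu_hz=10e9))
        for _ in range(3)]
```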
This research considers N = {1, 2, …, N} mobile device users, each with a computation-intensive task to complete. They are randomly distributed around an eNodeB, and the eNodeB can schedule users' computing tasks to the MEC server for computation. Referring to previous research results on mobile cloud computing and mobile communication networks, this paper designs mobile user task scheduling in a quasi-static scenario [26]. In this scenario, while the tasks of all users with scheduling requirements are being offloaded to the MEC server (a process usually lasting a few hundred milliseconds), the offloading strategy is kept unchanged. On the one hand, this design does not prevent users from changing their strategies at a later stage; on the other hand, it protects the interests of other users and the stability of the system.
3.1.2. Wireless Network Virtualization
The functions of the virtual machine monitor include resource virtualization and resource management. As an important component of wireless network virtualization, virtual machine monitors are usually deployed in physical base stations to provide physical resources and to manage the connections between virtual base stations [27]. Through wireless network virtualization, the physical network can be virtualized into multiple dedicated networks managed by mobile virtual network operators (MVNOs). The resource management function of the virtual machine monitor is implemented by the virtual network controller (VNC) and the virtual resource manager (VRM). The VNC is mainly responsible for communicating to the InPs the user information collected by the MVNOs (such as quality-of-service requirements and computing task information) and for feeding the results of resource allocation back to the users. The VRM is responsible for dynamically allocating virtual resources to the MVNOs' clients to maximize resource utilization or MVNO efficiency. In summary, each MVNO needs to maintain a virtualized network composed of InP substrate networks, and each user is served by a base station of an InP.
3.2. Service Cache Model Design
A service here is an application requested by users that runs on the mobile edge computing server at the base station or in the cloud. This research assumes that running a certain type of service requires caching certain types of data, for example, the required databases and libraries. Suppose that, in the time interval of interest, the core network can provide K types of services. To serve a user's request, the base station needs to obtain the service-related data and store it. Because the computing and storage resources of mobile edge computing servers are limited, each mobile edge computing server can only provide a limited number of services, and the set of services provided by the core network changes over time.
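Because the storage Cm is finite, deciding which of the K service types to cache is a budgeted selection problem. The following minimal greedy sketch illustrates one such selection using a popularity-per-bit heuristic; this heuristic is our assumption for illustration, not the paper's algorithm.

```python
def greedy_cache(services, capacity_bits):
    """Pick services by request rate per cached bit until storage C_m is full.

    services: list of (service_id, data_size_bits, request_rate) tuples.
    """
    ranked = sorted(services, key=lambda s: s[2] / s[1], reverse=True)
    cached, used = [], 0.0
    for sid, size, _rate in ranked:
        if used + size <= capacity_bits:
            cached.append(sid)
            used += size
    return cached

# Example: 5 service types competing for a 4 Gbit cache.
svc = [(k, 1e9 + 2e8 * k, 10 - k) for k in range(5)]
print(greedy_cache(svc, 4e9))   # -> the three most cache-efficient services
```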
3.3. Communication Model Design
Two links are mainly studied in the computation offloading problem: the wireless link from the user to the base station and the wired link from the base station to the cloud in the core network. For the wireless link, this study uses a finite-state Markov channel (FSMC) model based on fading characteristics. The FSMC model has a wide range of applications in wireless networks.
Computing task scheduling in the mobile edge environment can be cast as a wireless interaction model that captures user throughput in cellular network interaction. The physical layer channel access strategy (CDMA) allows all mobile users to share the same spectrum resources efficiently at the same time. The channel is divided into nonoverlapping intervals by partitioning the range of a channel-related parameter, and each interval of the selected parameter represents a state in the FSMC model. The parameter used in the FSMC can be the SNR at the receiving end, the amplitude of the received signal, or the harvested energy; here, the SNR is selected as the parameter that defines the model. The SNR at the receiving end is divided into L levels, and each level is associated with a state of the Markov chain. Under the block fading channel assumption, the SNR at the receiving end is constant within a period but changes between periods according to the Markov transition probabilities.
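A minimal Python sketch of simulating such an FSMC channel is shown below; the SNR levels and transition probabilities here are hypothetical, whereas in practice they would be derived from the fading statistics.

```python
import random

def fsmc_snr_trace(levels_db, trans, steps, state=0):
    """Simulate a finite-state Markov channel.

    levels_db: representative SNR (dB) of each of the L states.
    trans:     L x L row-stochastic transition matrix; block fading means
               the SNR is constant within a period and jumps between periods.
    """
    trace = []
    for _ in range(steps):
        trace.append(levels_db[state])
        state = random.choices(range(len(levels_db)),
                               weights=trans[state])[0]
    return trace

# L = 3 SNR levels with a birth-death style transition structure.
levels = [5.0, 15.0, 25.0]
T = [[0.7, 0.3, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]
print(fsmc_snr_trace(levels, T, steps=10))
```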
3.4. Calculation Model Design
3.4.1. Mobile Edge Server-Side Execution
The computing task performed by the edge server mainly involves three processes: transmitting data from the user to the base station, performing the computing task on the mobile edge computing server, and transmitting the result back from the base station [28]. Computing resources are shared by users who make the same offloading decision at the same time and are dynamically allocated to users depending on the type of service.
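Under this three-process decomposition, and neglecting the downlink return as in Section 2.2, a common form for the edge-side delay and the user-side energy, with symbols assumed here, is

$$t_n^{\mathrm{edge}} = \frac{D_n}{r_n} + \frac{C_n}{f_n^{\mathrm{alloc}}}, \qquad e_n^{\mathrm{edge}} = p_n \frac{D_n}{r_n},$$

where $D_n$ is the task input size, $r_n$ the uplink rate, $C_n$ the CPU cycles required by the task, and $f_n^{\mathrm{alloc}}$ the share of the MEC server's CPU dynamically allocated to user $n$.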
3.4.2. Cloud Execution
Cloud execution includes five stages: from user to base station, from base station to cloud, execution in the cloud, from cloud back to base station, and from base station back to user. It is assumed that the cloud always has sufficient computing and caching resources; therefore, the execution time in the cloud is often ignored. In addition, the transmissions from the base station to the user and from the cloud to the base station are ignored, because these can proceed in parallel with the uplink data transmission, and because the results of a mobile edge computing task are often much smaller than the uploaded data.
In the simulation environment settings, the network scenario considered in this paper contains 4 base stations, each base station contains 3 channels, and a total of 12 users can perform computation offloading in one selection round. The coverage radius of a base station is 50 meters. The bandwidth of each channel is 5 MHz, and each channel can serve only one user in a selected time slot. The number of users is between 12 and 50, randomly distributed near the base stations. Each user's transmit power is randomly generated between 50 and 100 mW. The size of the upload task is randomly generated between 50 and 5000 kB. The total computing resource on the cloud server is 10 GHz. Channel loss and noise effects are also considered in the simulation. The specific parameters are shown in Table 1.
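These settings can be collected into a small configuration sketch (Python); the structure and names below are ours, while the values are those quoted above.

```python
import random

random.seed(42)

# Settings quoted from the simulation environment (Table 1).
CONFIG = {
    "num_base_stations": 4,
    "channels_per_bs": 3,            # 12 users can offload per selection round
    "bs_coverage_m": 50,
    "channel_bandwidth_hz": 5e6,     # one user per channel per selected slot
    "cloud_cpu_hz": 10e9,            # total computing resource on the cloud
}

def random_user():
    """Draw one user from the stated distributions."""
    return {
        "tx_power_w": random.uniform(0.05, 0.10),          # 50-100 mW
        "task_size_bits": random.uniform(50e3, 5e6) * 8,   # 50-5000 kB
    }

# Number of users is between 12 and 50, randomly placed near the base stations.
users = [random_user() for _ in range(random.randint(12, 50))]
print(len(users), "users; example:", users[0])
```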
4. Dynamic Resource Allocation Strategy Analysis
4.1. Computational Overhead and Effective Edge Users
This section simulates the basic research parameters of an edge computing environment in a cellular network scenario; that is, a number of mobile device users with computing tasks to be processed are randomly distributed around an eNodeB and its MEC server. The specific network environment and hardware and software parameters are shown in Table 2.
In the experiments, the two performance factors of edge cloud computing derived above are simulated, namely, the computing cost and the number of effective edge users. Cases with 15 and 30 users are designed, respectively. To simplify the calculation, the system weights for the time and energy components are set as (M1 = 1, M2 = 0), (M1 = 0, M2 = 1), and (M1 = 0.5, M2 = 0.5), together with a random allocation mode. Figure 2 shows the dynamic changes of individual computing costs for 30 users, and Figure 3 shows those for 15 users. In Figures 2 and 3, each curve of a different color represents one user's total cost change. Users select the local computing method in the initialization phase, and because their time and energy consumption weights differ, the initial computing cost is 1.5, 2.0, or a value between 1.5 and 2. In the subsequent iterations, when the edge computing overhead exceeds the local computing overhead due to increased multiuser channel interference, a user switches back to the local computing method; in the end, however, the whole user group maintains the convergence property of the potential function and converges to the system's stable point, which is the Nash equilibrium point.


4.2. Algorithm Comparison Analysis
Next, the number of mobile users in the experiment is set to 20, 25, …, 50 to run the algorithm proposed in this research. The comparison of the number of effective edge computing users between the case where all users choose to schedule tasks to the edge cloud and the distributed computing task scheduling algorithm designed in this chapter is shown in Figure 4. It can be seen that the simultaneous selection of edge computing by many users causes the wireless cellular channels to interfere with each other and become congested, which increases the overhead of user task scheduling beyond the local computing overhead. At the same time, as the number of users increases, the wireless channel load increases and the growth of the number of effective edge cloud users slows down; at this point, the number of wireless channels needs to be increased to improve system performance. The comparison of the total system-wide overhead in the three cases, where all users choose local computing, all choose edge computing, or the distributed computing scheduling algorithm is used, is shown in Figure 5. As the number of users increases, channel congestion and interuser interference grow so rapidly that, once there are more than 35 users in the system, the system overhead when all users choose the edge computing strategy exceeds the local computing overhead. Therefore, the design of an edge cloud computing environment must configure the hardware of the eNodeB and the MEC server according to the number of edge users in order to meet the needs of mobile users. The system simulation parameters for different numbers of mobile users are shown in Table 3.


As the number of cooperative groups changes, the total transmission rate of the nodes in our proposed scheme and in the two comparison schemes also changes, as shown in Figure 6. It can be seen from Figure 6 that, as the number of cooperative groups gradually increases, the total transmission rate of all three schemes shows an increasing trend. This is mainly due to the increase in the number of cooperating nodes, which lets more nodes transmit wirelessly and participate in computing assistance. Therefore, if channel conditions allow, an increase in the number of cooperation groups benefits the computation of the cooperation process. At the same time, we find that the gap in total transmission rate between the proposed scheme and the other two schemes gradually increases as the number of cooperative groups grows. The main reason for this phenomenon is that the zero-forcing coding technique is used judiciously in our proposed scheme, which increases the transmission rate of the downlink, reduces the transmission delay, and reduces the overall energy consumption. The channel loss and noise influence values are shown in Table 4.

When the size of the computing task follows a uniform distribution, the comparison of the cumulative distribution function (CDF) of energy consumption under different numbers of cooperative groups and different minimum SINR requirements is shown in Figure 7. It can be seen from Figure 7 that, as the number of cooperative groups increases, the energy consumption of our proposed optimization scheme gradually increases. Specifically, a larger number of collaborative groups lets more collaborative nodes participate in the computing collaboration process; although this further reduces the overall delay of the computation offloading process, it also increases the total energy consumption of the collaboration process. In addition, when the number of cooperation groups is fixed, the overall energy consumption increases as the minimum SINR requirement increases. The main reason for this phenomenon is that, when the minimum SINR requirement rises, both the control node and the cooperating nodes must raise their wireless transmission power to meet it, which also increases the energy consumption of the wireless transmission process [29]. From a practical point of view, an increase in the minimum SINR requirement also gives users a better experience. Overall, selecting an appropriate number of collaboration groups for computation offloading according to the user's maximum tolerable delay can ensure the user experience while reducing overall energy consumption. The independent simulation results are shown in Table 5.

4.3. System Effectiveness Analysis
When the wireless interference between the macro base station and the small cell base stations is considered, the variation of the maximum system performance with the number of users is shown in Figure 8. The trend of the curve is completely different, which shows that the wireless interference between the macro base station and the small cell base stations is indeed too large to be ignored and that the extension of the model is necessary. When the number of users is less than 80, the performance of the three algorithms is similar. As the number of users continues to increase, the impact of interference on system performance gradually becomes prominent. When the number of users is greater than 80, RAOM has superior anti-interference performance compared with the other two algorithms and efficiently completes the computation and load sharing. When the preference coefficient for task energy consumption varies between 0.35–0.55 and 0.65–1, the maximum system efficiency achieved by RAOM is slightly higher than that achieved by the other two algorithms; this superior range accounts for 55% of the entire range. As the preference coefficient changes, the slope of the RAOM curve changes fastest, so it can better adapt to the preferences of different users in practical situations.

The relationship between task execution energy consumption and task input data volume is shown in Figure 9. The simulation scenario in this section is a cellular MEC system supporting dense networking, consisting of multiple small base stations and multiple users. The simulation area is 1000 m × 1000 m, and the users and small base stations are randomly distributed within it. The related parameters used in the simulation are shown in Table 6. To capture the differences in task requirements, user equipment, small cell access capabilities, and MEC server service capabilities, this chapter randomly draws values from the corresponding ranges for the task input data volume, the amount of computing resources required to complete the task, the user computing capability, the base station bandwidth and access capacity, and the MEC server computing and service capacity. It can be seen from the figure that, as the number of iterations of the algorithm increases, the task execution energy consumption converges within a small number of iterations. Comparing the task execution energy consumption under different channel noises shows that it decreases as the channel noise decreases: lower channel noise raises the user transmission rate, which in turn reduces the task execution delay and the task execution energy consumption. Both the no-offloading algorithm and the random offloading algorithm are noniterative, so their task execution energy consumption does not change with the number of iterations.

5. Conclusion
Two links are mainly studied in the computation offloading problem: the wireless link from the user to the base station and the wired link from the base station to the cloud in the core network. For the wireless link, this study uses a finite-state Markov channel model based on fading characteristics; the FSMC model has a wide range of applications in wireless networks. Computing task scheduling in the mobile edge environment can be cast as a wireless interaction model that captures user throughput in cellular network interaction. The physical layer channel access strategy (CDMA) allows all mobile users to share the same spectrum resources efficiently at the same time. The channel is divided into nonoverlapping intervals by partitioning the range of a channel-related parameter, and each interval of the selected parameter represents a state in the FSMC model.
The parameter used in the FSMC can be the SNR at the receiving end, the amplitude of the received signal, or the harvested energy; the SNR is selected as the parameter that defines the model. The SNR at the receiving end is divided into L levels, and each level is associated with a state of the Markov chain. Under the block fading assumption, the SNR at the receiving end is constant within a period but changes between periods according to the Markov transition probabilities. The computing task performed by the edge server mainly involves three processes: transmitting data from the user to the base station, performing the computing task on the mobile edge computing server, and transmitting the result back from the base station. Computing resources are shared by users who make the same offloading decision at the same time and are dynamically allocated to users depending on the type of service.
Cloud execution includes five stages: from user to base station, from base station to cloud, execution in the cloud, from cloud back to base station, and from base station back to user. It is assumed that the cloud always has sufficient computing and caching resources; therefore, the execution time in the cloud is often ignored. In addition, the transmissions from the base station to the user and from the cloud to the base station are ignored, because they can proceed in parallel with the data transmission process and because the results of a mobile edge computing task are often much smaller than the uploaded data.
Data Availability
No data were used to support this study.
Conflicts of Interest
The author declares no conflicts of interest.
Acknowledgments
This work was supported by the Characteristic Innovation Projects in Ordinary Colleges and Universities in Guangdong Province: Research on the Sharing of Mechanism of Practical Teaching Resources in a Cloud Environment (2016WTSCX128).