Abstract

As the size of cloud data centers increases, the number of virtual machines (VMs) grows rapidly. Application requests are served by VMs located on physical machines (PMs). The rapid growth of Internet services has created an imbalance in network resources: some hosts have high bandwidth usage and can cause network congestion, which degrades overall network performance. Load balancing is therefore an important feature of cloud computing that needs to be optimized. This research proposes a 3-tier architecture consisting of a Cloud layer, a Fog layer, and a Consumer layer. The cloud serves the whole world, while the fog analyzes services at the local edge of the network; the fog stores data temporarily and forwards it to the cloud. In the consumer layer, the world is divided into six regions on the basis of the six continents. Region 0 is taken as North America, for which two fogs and two clusters of buildings are considered. Microgrids (MGs) are used to supply energy to consumers. In this research, a real-time VM migration algorithm for balancing the fog load is proposed. Load balancing algorithms focus on effective resource utilization, maximum throughput, and optimal response time. Compared with the closest data center (CDC) policy, the real-time VM migration algorithm achieves 18% better cost results, and combined with optimized response time (ORT) it improves response time by 11% compared with dynamically reconfigure with load (DRL). Real-time VM migration always seeks the best solution to minimize cost, at the expense of increased processing time.

1. Introduction

A demand side management (DSM) system based on information and communication technology (ICT) is one of the important functions of the smart grid (SG) [1]. It performs bidirectional communication to obtain user information and distribute energy among users according to their needs. A huge number of monitoring devices have been developed, deployed, and utilized in DSM. Many new concepts have been introduced in smart grids, such as charging/discharging of electric vehicles (EVs), smart meters, and smart home appliances [1]. As the number of smart devices grows, huge storage space and a high level of security are required. To solve these problems, the concept of cloud computing was introduced. In recent years, the demand for cloud computing has increased rapidly. Cloud computing provides Internet services that can be accessed from anywhere in the world, and it offers low storage cost, high speed, high performance, and flexibility. Cloud data centers typically consist of many physical machines (PMs). Virtualization technology allows cloud service providers to offer users the convenience of virtual machines (VMs) and resource sharing [2]. Intelligently packing VMs onto PMs is a research theme that saves energy and minimizes operating costs. A cloud can be public, hybrid, or private; examples of cloud computing services include Netflix, Skype, e-mail, and Microsoft Office 365. However, cloud computing suffers from latency and security issues, and fog computing was introduced to address them.

Cisco introduced the concept of fog computing in 2014. Fog computing is very useful for providing services with minimal latency at the network edge. It decreases the load on the cloud and provides uninterrupted communication with users. Communication between the fog and the user can take place over a specific communication medium such as WiFi. Fog computing provides users with local services.

End users communicate directly with the fog, and their requests are fulfilled by multiple applications running on VMs. Communication between end users and the fog requires network resources; high network resource utilization causes communication delays and can even lead to network congestion. Network resources can be balanced in two ways: VM consolidation and intelligent task assignment. VM consolidation includes VM migration and VM placement. VM placement assigns VMs intelligently based on the processing power of each node, but cloud data centers are complex and users cannot rely on the initial placement [3]. VM migration changes the position of VMs according to bandwidth utilization and is an optimal approach to balancing the load on network resources [3]. Intelligent task assignment, by contrast, greatly increases the operational cost [3]. In this research, we present a live VM migration algorithm to balance the load on network resources. VM-to-PM packing is also performed to reduce the number of active PMs. A 3-layer architecture is presented in this paper, consisting of a cloud layer, a fog layer, and a consumer layer. Cloud and fog offer the same services, namely infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS); they differ in distance from the user, processing, size, and number of users. The distance between the consumer layer and the cloud is thousands of kilometers, whereas the fog sits at the edge of the network.

1.1. Problem Statement

Cloud data centers contain a large number of PMs [2]. Virtualization technology allows cloud service providers to share resources and offer virtual machine capabilities to their customers [2]. Cloud service providers may want to pack multiple VMs into a small number of PMs to minimize operational costs and save energy [2]. The authors of [2] suggested a way to solve the VM-to-PM packing problem based on shadow routing; however, when a PM is fully packed, there is a chance of congestion, and congestion causes additional delay. Cloud data centers are growing rapidly, and the rapid growth of Internet services has imbalanced the load on network resources [3]. The bandwidth usage of some PMs may be too high, resulting in network congestion [3]. Fu et al. [3] propose a layered VM migration algorithm to effectively balance the network load. Because the regions are separated from each other, the cost of migrating between regions is high, so an intraregion migration algorithm is proposed to minimize the cost of interregion migration. However, the intraregion migration algorithm is still expensive because it migrates VMs too often. Reference [4] proposed a VM migration algorithm using Markov decision processes and Q-learning, applied to the Round Robin, inverse ant system, max-min ant system, and ant system algorithms. In this paper, a cost-aware intraregion VM migration algorithm is proposed, and the VM-to-PM packing problem is also considered. Reference [5] proposed a VM migration process responsible for minimizing migrations, which leads to a reduction in response time.

1.2. Related Work

In reference [1], the authors proposed a method based on shadow routing to solve the VM autoscaling and VM-to-PM packing problems. VMs are packed intelligently to reduce the number of managed PMs, save energy, and minimize operating costs; however, congestion can occur if there are many VMs on one PM. In references [6, 7], the authors proposed IGGA, ACO, and firefly optimization algorithms to minimize the migration cost and to reduce high energy consumption; however, the migration cost and the load on the data center remain high. Reference [8] predicts VM migration using a chaotic Drosophila Rider neural network, and uses VM migration and an optimal switching strategy to optimize power. If the predicted load state is overloaded, the HHSMO optimization algorithm is used for VM migration; if it is underloaded, HHSMO is used to switch servers ON/OFF, which improves the energy efficiency of the system. Reference [9] proposed a model that analyzes the current state of running tasks according to QoS predictions produced by an ARIMA model optimized with a Kalman filter. Based on this QoS analysis, a scheduling strategy is computed by combining particle swarm optimization (PSO) and the gravity search algorithm (GSA), which reduces resource consumption and the SLA violation rate. However, minimizing the number of VM migrations is not by itself a solution for reducing energy consumption, and it affects system performance. In reference [10], the authors proposed a scheduling algorithm that assigns four priority levels to vehicle charging and discharging. The proposed method optimizes plug-in latency, provides a communication architecture between the SG and the cloud platform, and optimizes the load during peak hours; however, the pricing policies used for discharging vehicles are not efficient for users.
In reference [11], a virtual data center migration (VDC-M) algorithm has been proposed to decrease high energy consumption and wasted network resources. The article considers correlated VMs as a whole rather than individually; it remaps VDC-M requests, calculates migration paths, and allocates bandwidth resources to migrated VMs. However, it consumes a lot of processing time. Munshi and Mohamed proposed the Hypergraph Partition Algorithm (HPA) to minimize the cost of server consolidation [12]. The proposed method reduces network overhead by 50%; however, migrating one VM at a time adds cost. Table 1 summarizes the related work of the paper.

A system to handle big data based on the lambda architecture is proposed by Munshi and Mohamed [12]. The main objective of the proposed method is to perform real-time operations and parallel batch processing. The data of all connected smart devices is stored in a Hadoop big data lake. The system can handle a large amount of data, but there is a limit on the data, and the number of devices is increasing day by day. This means that more advanced research and more robust algorithms are required to keep pace with the dynamic nature of intelligent devices.

Arora et al. proposed various security algorithms that address data threats and preserve data while using the Internet [13]. The authors deploy encryption and decryption algorithms to eliminate the fear of data loss and to achieve data segregation. They also present a comparison of the different algorithms based on their features.

Nepal et al. proposed an algorithm that provides long-term preservation features such as flexibility and business continuity, decreasing risk by recovering from disasters [14]. Since these features are not available on physical machines, the authors introduced them on virtual machines, with all data stored on cloud resources. Riahi and Krichen presented a multi-objective genetic algorithm [15]. Its primary purpose is to solve issues found in virtual machine placement, such as overuse of physical devices and depletion of cloud resources. They used Bernoulli simulations to show that their approach adapts correctly, and they reported positive results from a company where the algorithm was applied: costs were reduced and resources were fully optimized. A weakness of the work is that it has no means of handling big data accurately; this problem needs solving now that data is the most critical asset for every business and organization. Khalid et al. proposed a home energy management system that resolves workload issues through harmonization [16]. In practice it performs very well, energy is consumed efficiently, and the cost of electricity is reduced; a significant flaw, however, is that it does not consider user comfort, which needs improvement. Chekired and Khoukhi worked on energy consumption and proposed a scheduling algorithm in which electric vehicles use power based on priorities [1]. A key benefit of the algorithm is that it takes both the charging and noncharging states of the vehicles into consideration. Users are divided into two types, calendar users and random users, and the main objective is to supply power to the public during overload and underload hours.
Four priority levels are introduced to resolve the overload and underload issues. Priorities 2 and 4 denote the discharging of calendar and random users, respectively; when demand is less than or equal to energy production, calendar users get priority 1 and random users get priority 3 for charging. When demand exceeds production, priorities 1 and 3 denote the discharging of vehicles, while 2 and 4 are used for charging. An alarming aspect of the algorithm is that damaged batteries are disposed of in a way that pollutes the environment. Jensi and Jiji proposed a swarm optimization algorithm specifically designed to solve global optimization problems [17]. The authors present a framework proficient at solving optimization issues and compare it with two existing algorithms using 21 standard benchmark functions. However, the algorithm has limitations that need to be resolved: it works only with single-objective tasks and suffers from premature convergence. Faheem et al. proposed a new model dedicated to in-house infrastructure that is location independent [18]. Their model highlights security issues and future challenges, aiming to inform users about the security risks linked to cloud storage and data accuracy, and they present a comparative analysis to show the compatibility of their paradigm. However, open issues remain that require further attention.

A shadow routing approach is proposed by Gou et al. [19]. They introduce packing of virtual machines onto physical machines to avoid over-provisioning of cloud resources, with the aim of saving energy and cost. The VM placement problem is also solved via autoscaling. Because the presented algorithm is adaptive, no optimization is needed from the outset, and its practicality is significant. Nevertheless, further research and more robust algorithms are required to resolve VM placement issues. Fan et al. propose a dynamic virtual machine consolidation scheme [20], an energy-aware system that decreases power usage by virtual machines. The authors address both VM placement and VM migration, using well-researched placement mechanisms and migration based on top CPU consumption to maintain stability in cloud environments. The approach proves its worth by decreasing power consumption and balancing the cloud load, but there is still a need for an algorithm that can handle critical data centers without increasing migration expenditure. Mirjalili et al. offered two novel components in a multi-objective grey wolf optimization algorithm [21]. They use fixed-size archives to preserve nondominated solutions, and they tested the proposed algorithm on ten standard benchmark functions to show its adaptiveness. It works with two- and three-objective functions, but a drawback of the algorithm is that it fails to work accurately with four or more objectives. Song et al. proposed a model for efficient energy consumption [22]. They work with a metric called EE (energy efficiency), calculated from the CPU's frequency and usage, and they implemented numerous tests to validate the approach.
They also confirmed, both theoretically and in practice, that the EE approach is accurate for cloud systems. A mechanism specifically designed to manage demand-side power optimization is proposed by Naz et al. [23]. It preserves stability between the production and the demand of energy; the proposed architecture handles communication between users and the production unit, and renewable resources are deployed to produce power. They also used the grey wolf optimization algorithm to balance overload and underload of energy consumption. The algorithm successfully transfers on-peak load to off-peak hours so that demand and energy production reach equilibrium. A shortcoming, however, is that shifting the burden to underload hours can itself overload them, so the problem persists; the algorithm needs enhancements to resolve this issue and operate at a stable level.

An advancement of the grey wolf optimization algorithm is proposed by Wu et al. [24]. They modify the existing GWO algorithm to overcome issues such as local minima. The authors verified the algorithm on 29 benchmark tests, and it proved excellent for single-objective tasks. Still, real-time optimization problems are more demanding and need more than one objective function to resolve global issues, so the modified version of the algorithm still requires further work.

An efficient algorithm to solve virtual machine placement issues is proposed by Liu et al. [25], inspired by the existing ant colony optimization algorithm. The authors' primary objective is to bring to market an algorithm that resolves VM placement issues and decreases the use of physical machines and energy, leading to cost reduction. However, it does not help manage resources, so the algorithm is still in its infancy and more research is required.

A layered virtual machine migration algorithm is proposed by Fu et al. [3]. The regions are connected and can share each other's burden. Two algorithms are used for allocating cloud resources: the main objective of the proposed system is to determine whether a region is overloaded or underloaded and then return it to the normal phase, and one algorithm is dedicated to interregion communication.

Furthermore, it is a good initiative for reducing congestion, so that there are no delays. A weakness of the proposed system, however, is that migrating even one task costs a great deal; algorithms are therefore needed that work with interregion migration at a reasonable cost.

Two pricing schemes are proposed by Javaid et al. [26], which use IT and communication with the conventional grid to make it efficient. One pricing scheme is for short-term users and the other for long-term users: in the short-term scheme, users pay as they go, while in the other, users pay for the instances in their possession.

In order to balance the demand of power consumers against the supply side of the power grid, a three-tier "Cloud-Fog-Consumer" architecture is established in this paper. Based on the live VM migration algorithm and three service broker policies, the migration cost and response time are reduced.

2. Research Methodology

A bidirectional communication architecture is proposed for the efficient management of resources in residential areas. It has three layers, the consumer layer, the fog layer, and the cloud layer, as shown in Figure 1. The cloud layer is mainly responsible for dispatching, communication, and data processing, and it includes the service provider, the cloud environment, and the utility. In the consumer layer, residential areas are considered. The world is divided into six regions on the basis of the six continents [27-36]. In this research, we consider region 0, North America, because this region has 80 million users [31]. Two fogs serve two clusters of buildings to effectively meet the needs of the users. Each cluster is assumed to have 10 buildings, and each building has 50-80 homes. A smart meter is connected to each house, and a controller to each cluster, for communication.
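The simulated topology described above (region 0 with two fogs, each serving one cluster of 10 buildings with 50 to 80 homes apiece) can be sketched as a small data structure. The names and dictionary layout below are illustrative assumptions, not the authors' implementation:

```python
import random

# Illustrative model of the consumer-layer topology: six regions by
# continent; region 0 (North America) gets two fogs, one per cluster.
random.seed(0)

REGIONS = ["North America", "South America", "Europe", "Africa", "Asia", "Oceania"]

def build_region(region_id, n_fogs=2, buildings_per_cluster=10):
    clusters = []
    for fog_id in range(n_fogs):
        # each building houses a random number of homes in the stated 50-80 range
        buildings = [random.randint(50, 80) for _ in range(buildings_per_cluster)]
        clusters.append({"fog_id": fog_id, "homes_per_building": buildings})
    return {"region": REGIONS[region_id], "clusters": clusters}

region0 = build_region(0)
total_homes = sum(sum(c["homes_per_building"]) for c in region0["clusters"])
```

With 2 clusters of 10 buildings, `total_homes` necessarily falls between 1,000 and 1,600, which matches the scale the scenario implies.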

Each fog contains 2 PMs. Each PM has memory, storage, and a number of processors, along with processor speed and available bandwidth, as shown in Table 2. Virtualization technology enables service providers to give users VMs to use instead of PMs. 60 VMs are considered in each fog. Some PMs carry a load; if the load on a PM is greater than its defined bandwidth, its VMs migrate to a normal PM. Label 0 represents an overloaded region and label 1 a normal region. Data stored in the fog is temporary; to make it permanent, the fog forwards it to the cloud. A centralized cloud platform is considered: the cloud stores data persistently and provides the utility grid facilities to meet the needs of consumers. Each cluster has a controller for communication because users cannot communicate directly with the fog.
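The labeling rule above (0 for overloaded, 1 for normal, with VMs migrating off an overloaded PM) can be sketched as follows. The field names and the choice of the most-spare-bandwidth PM as migration target are illustrative assumptions:

```python
OVERLOADED, NORMAL = 0, 1  # labels as defined in the text

def label(pm):
    # A PM whose bandwidth usage exceeds its defined limit is overloaded.
    return OVERLOADED if pm["bw_used"] > pm["bw_limit"] else NORMAL

def pick_target(pms):
    # Choose the normal PM with the most spare bandwidth as migration target
    # (a reasonable heuristic; the paper does not fix the tie-breaking rule).
    normal = [p for p in pms if label(p) == NORMAL]
    return max(normal, key=lambda p: p["bw_limit"] - p["bw_used"], default=None)

pms = [
    {"id": 0, "bw_limit": 1000, "bw_used": 1200},  # exceeds limit -> overloaded
    {"id": 1, "bw_limit": 1000, "bw_used": 300},   # under limit -> normal
]
target = pick_target(pms)  # VMs on PM 0 would migrate to PM 1
```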

Energy demand is expressed as EDEM. The fog communicates with the MGs near the clusters of buildings. MGs use renewable energy resources; each MG has its own power generation resources and a small amount of electricity. The MGs are indexed as MG = 1, ..., M, and the total energy generated by the MGs is expressed as Egen. Each MG sends back an acknowledgment of the energy it has available.

If Egen >= EDEM, the MG will fulfill the consumers' need.

Otherwise, the fog communicates with the cloud to provide the macro grid facility. The macro grid is on layer 3 (the cloud layer) and produces a large amount of electricity from sources such as wind turbines, fossil fuels, and water turbines. The workflow is shown in Figure 2. The key symbols used in the paper are given in Table 3.

Figure 2 shows the workflow of the overall system. The first connection node is the grid station, the second the cloud server, and the third the fog controller; lastly, there are the consumers and cities.

3. Live VM Migration Algorithm

As the VM load changes continuously, the physical host load may become too high, reducing service quality, or too low, underutilizing resources. Therefore, it is necessary to migrate virtual machines periodically in order to improve service quality and the resource utilization rate. The VM migration algorithm jointly considers the CPU and memory usage of a VM and performs data migration to keep the load stable. The main idea of this algorithm is to migrate VMs that exceed a certain bandwidth. There is a fixed number of PMs, and their bandwidth is defined by the fog. Multiple VMs are installed on each PM to save energy and minimize operating costs. The VMs accept requests from consumers; as consumer demand increases rapidly, the fog becomes overloaded. Live VM migration is used to balance the fog load: a load balancing algorithm allocates the workload within the fog, and overloaded VMs are migrated to normal PMs to avoid congestion. The main focus of live VM migration is to minimize migration costs and use network resources effectively. The pseudocode for live VM migration [16] is shown below (Algorithm 1).

(1)input: HostList, VMList
(2)CurrentTime
(3)LinkSpeed
(4)VMMigrationTime
(5)VMMigrationListTime
(6)for i := 0 to HostList do
(7) host := HostLargeSize in HostList
(8) while host > 0 do
(9)  VM := VMLargeSize in VMList
(10)  for j := 1 to VMList do
(11)   if VM > host then
(12)    VM := VM++ in VMList
(13)   else
(14)    host := host − VM(size)
(15)    VM is in migration
(16)   end if
(17)  end for
(18) end while
(19)end for
(20)VMMigrationListTime := CurrentTime + (VM/LinkSpeed)
(21)VM := VM++ in VMList
(22)host := host++ in HostList
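Algorithm 1 can be read as a largest-first packing of migratable VMs onto hosts, with each placed VM's migration finishing at CurrentTime + size/LinkSpeed (line 20). A minimal executable sketch of that reading is given below; the field names (`free`, `size`) and the interpretation of HostLargeSize/VMLargeSize as descending-size ordering are our assumptions, not the authors' implementation:

```python
def live_vm_migration(hosts, vms, current_time=0.0, link_speed=100.0):
    """One reading of Algorithm 1: visit hosts largest-first, place each
    VM (largest-first) that still fits, and schedule its migration to
    complete at current_time + size / link_speed."""
    hosts = sorted(hosts, key=lambda h: -h["free"])
    pending = sorted(vms, key=lambda v: -v["size"])
    schedule = []                         # (vm_id, host_id, finish_time)
    for host in hosts:
        for vm in list(pending):          # iterate over a copy while removing
            if vm["size"] <= host["free"]:
                host["free"] -= vm["size"]           # line (14): host := host - VM(size)
                finish = current_time + vm["size"] / link_speed  # line (20)
                schedule.append((vm["id"], host["id"], finish))
                pending.remove(vm)        # VM is in migration, line (15)
    return schedule, pending              # migrations and unplaced VMs

hosts = [{"id": 0, "free": 100}, {"id": 1, "free": 60}]
vms = [{"id": "a", "size": 70}, {"id": "b", "size": 50}, {"id": "c", "size": 40}]
plan, leftover = live_vm_migration(hosts, vms, current_time=0.0, link_speed=10.0)
# plan -> [("a", 0, 7.0), ("b", 1, 5.0)]; VM "c" stays pending
```

Unplaced VMs remain in the pending list for the next periodic pass, which matches the periodic-migration framing above.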

4. Service Broker Policies

These policies map incoming traffic from consumers to the available fogs. The three service broker policies are described below.

4.1. Closest Data Center

The closest data center (CDC) policy holds an index table of all fogs. It selects the fog with the smallest delay, i.e., the one closest to the cluster, within the same region. If the distances to several fogs are the same, a fog is selected at random.

4.2. Optimize Response Time Policy

The optimized response time (ORT) policy keeps an index table of all fogs. ORT checks the history of every fog and then selects the fog with the best response time in the same region.

4.3. Dynamically Reconfigure with Load

Dynamically reconfigure with load (DRL) is a combination of the closest data center and optimized response time policies. It selects the fog that is closest to the cluster and has the best response time. It is also responsible for scalability and can increase or decrease the number of VMs accordingly.
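The three policies can be sketched as selection functions over a fog index table. The record fields, and in particular the simple latency-plus-response-time sum used for DRL, are illustrative assumptions rather than the CloudAnalyst internals:

```python
def closest_data_center(fogs, region):
    # CDC: smallest network latency among same-region fogs.
    same = [f for f in fogs if f["region"] == region]
    return min(same, key=lambda f: f["latency_ms"])

def optimize_response_time(fogs, region):
    # ORT: best historical response time among same-region fogs.
    same = [f for f in fogs if f["region"] == region]
    return min(same, key=lambda f: f["avg_rt_ms"])

def dynamically_reconfigure_with_load(fogs, region):
    # DRL: combine proximity and response time (here, an unweighted sum).
    same = [f for f in fogs if f["region"] == region]
    return min(same, key=lambda f: f["latency_ms"] + f["avg_rt_ms"])

fogs = [
    {"id": 0, "region": 0, "latency_ms": 5,  "avg_rt_ms": 90},  # near, slow history
    {"id": 1, "region": 0, "latency_ms": 20, "avg_rt_ms": 40},  # farther, fast history
]
```

On this toy table CDC picks fog 0 (nearest), while ORT and DRL both pick fog 1 (better history), illustrating how the policies can disagree.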

5. Simulation Results and Discussion

In this paper, we used the Cloud Analyst simulation tool. A live VM migration algorithm has been proposed to balance the overall load on the network, to utilize network resources effectively, and to minimize the overall migration cost. The experiments present results using the live VM migration algorithm together with the three service broker policies. Live VM migration always seeks the best solution to minimize cost, at the expense of increased processing time.

5.1. Cost Comparison

Figure 3 shows the VM cost, MG cost, total data transmission cost, and total cost when using the live VM migration algorithm with the three service broker policies. The total cost with DRL is not good, as the strategy is still under consideration; therefore, its result is not very accurate.

5.2. Response Time

Response time (RT) is the time taken by a VM to execute a request, from the initialization of the process to the receipt of a response. The response time of the live VM migration algorithm with the three service broker policies is shown in Figure 4. The response time using the live VM migration algorithm with ORT is optimal because ORT selects the fog with the best response time. Equation (3) gives the total response time and equation (4) the total delay, where Tlatency represents the signal delay time.
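Equations (3) and (4) do not survive in this copy of the text. A hedged reconstruction, following the definitions above (RT spans request initialization to response receipt, and delay includes the signal latency Tlatency) and common CloudAnalyst-style formulations, is sketched below; it is an assumption, not the authors' exact equation:

```python
def response_time(t_arrival, t_finish, t_latency):
    # Assumed form of eq. (3): service duration plus signal latency Tlatency.
    return (t_finish - t_arrival) + t_latency

def total_delay(data_size_mb, bandwidth_mbps, t_latency):
    # Assumed form of eq. (4): transfer delay plus signal latency.
    return data_size_mb / bandwidth_mbps + t_latency

rt = response_time(0.0, 0.35, 0.05)       # 0.35 s of service + 0.05 s latency
delay = total_delay(100, 1000, 0.05)      # 100 MB over 1000 Mb/s-scale link
```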

5.3. Processing Time

The total time taken to process a request is called the processing time (PT). Figure 5 shows the processing times for the three service broker policies. The fog is chosen based on the best response time, so the processing time using the live VM migration algorithm with ORT is optimal. Equation (5) is used to calculate the processing time and equation (6) the total bandwidth available to each user.
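As with equations (3) and (4), equations (5) and (6) are not reproduced in this copy. The sketch below is a hedged reconstruction under stated assumptions: processing time as request length over VM speed, and per-user bandwidth as the total bandwidth shared equally among concurrent user requests. Neither formula should be read as the authors' exact definition:

```python
def processing_time(request_length_mi, vm_mips):
    # Assumed form of eq. (5): million instructions in the request
    # divided by the serving VM's speed in MIPS.
    return request_length_mi / vm_mips

def bandwidth_per_user(total_bandwidth, n_user_requests):
    # Assumed form of eq. (6): total link bandwidth divided equally
    # among the concurrent user requests.
    return total_bandwidth / n_user_requests

pt = processing_time(500, 1000)        # 500 MI on a 1000-MIPS VM
bw = bandwidth_per_user(1000, 50)      # 1000 units shared by 50 requests
```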


Figure 6 shows the values of the different parameters used for the SG and the VMs: storage, memory, available bandwidth, and the number of processors.

To handle SG queries, the designed methodology analyzes six areas, each controlled by 2 fog nodes. Each area contains 100 SGs, each with a maximum of 1,000 houses. Each house requests power via its SG; the request is sent to a fog node, which then allocates energy to that particular residence via the SG based on its consumption and energy data. Each area has fully electric power distribution stations capable of charging or discharging a total of 1,000 units in 60 minutes. The energy provider (company) is connected to the web and to the SGs. The web server holds all of the data about the grid's power production and is linked to the fog nodes at each location. Each cloud server contains data on energy usage in its associated area and makes decisions in real time to meet users' electricity needs. The cloud hosts interact with a remote server to designate an energy company that can meet the needs of SGs that are out of power. The suggested energy efficiency improvement system is intended to reduce cloud server costs and delay; delay is reduced by minimizing the overall response time between the cloud and the SGs.

Figure 7 shows the price and cost, including grid cloud transmission (GCT), of the broker services provided by the power suppliers to energy consumers. In Figure 8, DRL and GCT show the cost allowance of the service broker policies for their clients and consumers.

The average response times of the 12 different regions are shown in Figure 9, which identifies the PSS, Cluster by Active VM, Round Robin, and Throttled graphs.

The response time of the live VM migration algorithm with the three service broker policies is shown in Figures 9 and 10. The response time using the live VM migration algorithm with ORT is optimal because ORT selects the fog with the best response time.

The total processing time and the overall response time of all 12 regions (12 fogs) are shown in Figure 10, which reports both the average response time and the fog processing time; the average response time is greater than the fog processing time.

Figure 11 shows the VM cost, MG cost, total data transmission cost, and total cost when using the live VM migration algorithm with the three service broker policies. The total cost with DRL is not good, as the strategy is still under consideration; therefore, the result is not very accurate.

Figure 12 compares the response time and processing time across different numbers of regions: the fog processing time for the 12 fog regions is lower than the fog processing time for 6 regions, and the overall response time is smaller than the response time of half the regions (6 regions, or fog-6).

6. Conclusion and Future Work

Cloud computing is rapidly gaining popularity. Its main purpose is to provide efficient services to customers, and its many resources need to be effectively managed. VM consolidation is becoming more common, and with it network resources can be managed effectively; it comprises VM placement and VM migration, with migration having the stronger effect of the two. In this paper, we propose an integrated cloud-fog environment with a two-way communication architecture: consumers send demands to the fog, and MGs provide energy to meet the demand. We propose live VM migration to effectively balance the load of VMs in the fog. The migration cost is 18% better than with CDC, and the response time with the live VM migration algorithm and ORT is 11% better than with dynamically reconfigure with load (DRL). However, processing time increases because the live VM migration algorithm always seeks the optimal solution to minimize cost. In future work, cluster-based VM migration will be explored for more efficient results.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that this article is free of conflicts of interest.

Acknowledgments

This research was funded by the National Natural Science Foundation of China “Research on Theory and Method of Userside Integrated Energy Optimization Based on Multi-Agent Game” (U1766210).