Abstract
A fuzzy cloud resource scheduling model with time-cost constraints is built using triangular fuzzy numbers to represent uncertain task execution time. Task scheduling maps tasks to virtual machines so as to reduce the total time and cost spent on a project. A hybrid particle swarm optimization approach (RIOPSO) is used to schedule cloud resources. The approach uses orthogonal initialization to increase the quality of the initial particle exploration, rerandomization to regulate the particle search range, and real-time updating of inertia weights to control particle speed. The proposed problem model and optimization approach are evaluated using random simulation data generated on the CloudSim simulation platform. Experiments show that the approach converges quickly and produces schedules with shorter overall execution time and lower cost.
1. Introduction
Cloud resource scheduling is the core content of cloud computing. In order to minimize the completion time, literature [1] established a corresponding cloud computing scheduling model. Reference [2] enables cloud service providers to obtain maximum benefits while guaranteeing service quality during scheduling. Reference [3] established a real-time scheduling system in order to minimize energy consumption. These studies all assumed a deterministic execution time. However, because task execution is unpredictable, the execution time can only be estimated before a task completes, so the execution time of a task is inherently uncertain. This paper therefore considers the case of time uncertainty. When solving practical problems, mathematical models can be divided into three categories.
These are deterministic mathematical models, stochastic mathematical models, and fuzzy mathematical models; the fuzzy model is required to handle uncertain problems whose quantities can only be estimated. Reference [4] looked at the temporal complexity of illness detection using a fuzzy genetic system, while [5] studied task scheduling in the cloud using a fuzzy neural network method. In this research, triangular fuzzy numbers [6] are used to represent the unknown task execution time, and a fuzzy cloud resource scheduling model is developed. Cloud resource scheduling is NP-complete [7, 8], and no efficient polynomial-time method is known, so researchers frequently employ intelligent optimization techniques to tackle it, such as the genetic algorithm (GA) [9], particle swarm optimization (PSO) [10], and ant colony optimization (ACO) [11]. Because PSO involves no crossover or mutation operations, it handles cloud resource scheduling faster than GA; compared with ACO, PSO obtains an initial solution more easily and optimizes the scheduling problem more effectively. Because PSO also computes quickly in a distributed environment, this research adopts a particle swarm based optimization technique for the cloud resource scheduling problem. However, the convergence accuracy of plain PSO is low when searching for the optimal solution; it easily slips into a local optimum, and premature convergence is common. Reference [12] discusses fuzzy rules and statistical analysis, assessing the Fuzzy Unordered Rule Induction Algorithm (FURIA) for detecting ASD behaviors on the cloud.
In view of the aforementioned defects and disadvantages, this paper proposes a hybrid particle swarm optimization approach (rerandomization inertia weight orthogonal initialization particle swarm optimization, RIOPSO).
Using the rerandomization strategy, the particle swarm can explore the solution space extensively and avoid premature convergence [13]. Real-time updating of the inertia weight [14] governs the speed of particles during the search to prevent them from sliding into a local optimum. An orthogonal matrix is also used to initialize the particle swarm [15] so that it starts from an ordered set of initial solutions, which makes the search of the solution space more efficient. These three optimization strategies together improve the quality of the solutions produced by the PSO algorithm during the search, as well as the particles' search ability, culminating in the optimal solution. Related security issues concerning data privacy and consistency in the cloud are discussed in [16].
2. Related Work
2.1. Multiobjective Problem
The multiobjective problem (MOP) typically involves conflicts between multiple objectives, so no single solution can make every objective reach its optimal state simultaneously [17]. Solution algorithms for this problem fall into the three following categories:
(a) Multiobjective evolutionary algorithms based on Pareto dominance [18]. As the number of objective functions increases, the optimal solution set can become too large, which leads to message overflow when solving multiobjective problems with this method [19].
(b) Multiobjective evolutionary algorithms based on performance indicators [20]. When a performance index serves as the reference information for the algorithm's evolution, computing the index is too complicated, resulting in long running times [21].
(c) Decomposition-based multiobjective evolutionary algorithms [22]. The multiobjective problem is transformed into the collaborative solution of multiple single objectives, and the idea of decomposition simplifies complex multiobjective problems. Such algorithms offer high solution efficiency and better solution-set performance. The multiobjective evolutionary algorithm based on decomposition (MOEA/D) proposed by Pratap and Zaidi [23] is particularly effective.
Compared with the other two algorithms, the multiobjective evolutionary algorithm based on decomposition has stronger search ability for dealing with combinatorial optimization problems. The optimization goals of cloud resource scheduling include minimizing the total completion time, minimizing resource consumption, and satisfying QoS. Therefore, this paper uses the decomposition idea of the MOEA/D algorithm and uses the compromise model to decompose the weight of the objective evaluation function under the constraints of time and cost. According to the weight ratio, the objective function of time and cost is optimized synchronously.
2.2. Algorithms and Performance Indicators
In order to verify that the RIOPSO algorithm proposed in this paper can guarantee convergence and diversity when solving the multiobjective cloud resource scheduling problem, it is compared with four mainstream algorithms: NSGA-I [24], NSGA-II [25], NSGA-III [9], and MOEA/D [26].
Given the many multiobjective optimization algorithms proposed, evaluating their pros and cons has become an important research direction in itself. When a multiobjective optimization algorithm solves multiobjective problems, performance indicators can be used to quantify its performance. Commonly used indicators fall into three categories: accuracy metrics, diversity metrics, and comprehensive metrics. When the above algorithms solve the multiobjective cloud resource scheduling problem, two comprehensive metrics, the inverted generational distance (IGD) [27] and the hypervolume (HV) [28], and one accuracy metric, the coverage metric (C-Metric) [29], are used to quantify algorithm performance, and the algorithms are compared and evaluated through these quantified results. This paper uses a fuzzy mathematical model to formulate the multiobjective cloud resource scheduling problem with uncertain execution time, and an optimization algorithm is proposed to find the best scheduling scheme efficiently.
3. Fuzzy Cloud Resource Scheduling Problem
3.1. Cloud Resource Scheduling Model
In cloud resource scheduling, tasks must be assigned to virtual machines by a feasible scheduling algorithm; each task can only be completed on one virtual machine, whereas a virtual machine can execute numerous tasks as needed. Suppose there are m tasks and n virtual machines, denoted TASK = {Task1, Task2, …, Taskm} and VM = {Vm1, Vm2, …, Vmn}, where n is the number of virtual machines and m is the number of tasks.
Since there is a one-to-many relationship between virtual machines and tasks in cloud resource scheduling, the mapping between them can be represented directly: each task can only be completed on one virtual machine, and each virtual machine can execute numerous tasks. Some fundamental concepts used in the cloud resource scheduling model follow. A scheduling scheme is denoted Rk. The execution time of task i on virtual machine j is represented by Timeij, and its calculation is as follows:
Definition 1. Cost(Rk) represents the total execution cost of the scheduling scheme Rk, i.e., the sum of the costs consumed by all task executions. The calculation equations are as follows.

In the above equations, Costmin and Costmax are the minimum and maximum costs required for task execution. If only the time factor or only the cost factor is considered, the criterion is too one-sided, and no solution satisfying both cloud resource providers and cloud resource consumers can be obtained. References [30, 31] note that cloud resource providers want to provide services at a lower cost, while cloud resource consumers want their tasks performed in less time. Therefore, to satisfy both parties, a time-cost constraint is introduced: the time factor (4) and the cost factor (5) convert the multiobjective problem into a single-objective problem, as in equation (6). The evaluation function follows.

In that equation, t represents the time factor and c the cost factor. By changing the values of t and c, this paper sets the proportions of the time and cost factors in the scheduling scheme and thereby controls the impact of time and cost on the evaluation function. The cloud resource scheduling model under the time-cost constraint is given next. According to the scheduling model P, each scheduling scheme is evaluated; the scheme with the minimum evaluation function value is the optimal scheduling scheme.
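Since the equations themselves are not reproduced in this excerpt, the following Python sketch shows how such a weighted time-cost evaluation might be computed; the function names and the min-max normalization against the time and cost bounds are illustrative assumptions, not the paper's exact formulas.

```python
def normalize(value, vmin, vmax):
    """Scale a raw time or cost into [0, 1] against its known bounds."""
    return (value - vmin) / (vmax - vmin)

def evaluate(total_time, total_cost, bounds, t=0.5, c=0.5):
    """Weighted time-cost evaluation of a scheduling scheme: smaller is better.

    t and c are the time and cost factors from equation (6); bounds holds
    the (min, max) pairs for total time and total cost of the problem.
    """
    (tmin, tmax), (cmin, cmax) = bounds
    return t * normalize(total_time, tmin, tmax) + c * normalize(total_cost, cmin, cmax)
```

With t = c = 0.5 (the values used later in Section 5), a scheme halfway between both bounds evaluates to 0.5.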
3.2. Fuzzy Cloud Resource Scheduling Model
According to equation (1), what is obtained is the determined time when the task is executed on the virtual machine. However, due to the uncertainty of task execution in the actual execution process, this paper uses triangular fuzzy numbers to represent the uncertainty of task execution time. The fuzzy range of task execution time is given by triangular fuzzy numbers, and the upper and lower ranges of the fluctuation range of task execution time on the virtual machine are determined by equations (8) and (9). Fuzzy lower bound on task execution time is
Fuzzy upper bound on task execution time is
In the above equation, Rand() represents a random number in (0, 1), and α1 and α2 represent the fuzzy coefficients of the upper and lower bounds, respectively. After blurring the execution time of the task, it is expressed as ProTime∼ = (tL, tM, tR), where tL represents the optimistic time of the task execution, tM represents the most probable time of the task execution, and tR represents the pessimistic time of the task execution.
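One plausible reading of equations (8) and (9), which are not reproduced here, is that the optimistic and pessimistic times are drawn uniformly between the most probable time and its bounds scaled by the fuzzy coefficients; the exact placement of Rand() below is an assumption for illustration.

```python
import random

def fuzzify(t_most_likely, alpha1=0.9, alpha2=1.2):
    """Blur a crisp execution time into a triangular fuzzy number (tL, tM, tR).

    tL (optimistic time) is drawn between alpha1 * tM and tM, and
    tR (pessimistic time) between tM and alpha2 * tM.
    """
    t_l = t_most_likely * (alpha1 + (1 - alpha1) * random.random())
    t_r = t_most_likely * (1 + (alpha2 - 1) * random.random())
    return (t_l, t_most_likely, t_r)
```

With α1 = 0.9 and α2 = 1.2 (the values used in Section 5.3), a 10-second task fuzzifies to a triple with tL in [9, 10] and tR in [10, 12].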
According to the linear characteristics and decomposability of triangular fuzzy numbers, the corresponding evaluation function value is calculated using the fuzzy execution time, and equation (7) is converted into the following equation:
Reference [32] proposed a discriminant rule to determine which fuzzy number has better performance by calculating the mean and standard deviation of triangular fuzzy numbers: a fuzzy number ranks higher if it has a higher mean and a lower standard deviation. With this method, the deterministic cloud resource scheduling model can be transformed into a fuzzy cloud resource scheduling model with uncertain task execution time. The scheduling goal becomes the calculation of the mean and standard deviation of the evaluation function; then, using the addition and multiplication properties of triangular fuzzy numbers, the cloud resource scheduling model under uncertain task execution time is obtained, as shown in equation (11): min{eval(Rk)} = min{P̃} = min{(PL, PM, PR)}.
In the above equation, Pη is the average value of the fuzzy numbers P͂, Pμ is the standard deviation, and ∂ is the weighting factor for uncertainty.
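The mean and standard deviation of a triangular fuzzy number can be computed under the uniform-distribution criterion of Lee and Li, which may or may not be the exact rule of reference [32]; since the evaluation function here is minimized, the comparison sketched below prefers the smaller mean and breaks ties on spread.

```python
import math

def tfn_mean(f):
    """Mean of a triangular fuzzy number (a, b, c) under a uniform-weight criterion."""
    a, b, c = f
    return (a + b + c) / 3

def tfn_std(f):
    """Standard deviation of a triangular fuzzy number under the same criterion."""
    a, b, c = f
    return math.sqrt((a * a + b * b + c * c - a * b - a * c - b * c) / 18)

def better(f1, f2):
    """Prefer the fuzzy evaluation with the smaller mean; break ties on spread."""
    if tfn_mean(f1) != tfn_mean(f2):
        return f1 if tfn_mean(f1) < tfn_mean(f2) else f2
    return f1 if tfn_std(f1) < tfn_std(f2) else f2
```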
4. RIOPSO Algorithm
4.1. Analysis of PSO Algorithm
The PSO algorithm is an evolutionary technique that models bird foraging behaviour. It iteratively tracks the global optimum together with each particle's individual optimum to arrive at the final result. To tackle the cloud resource scheduling problem with uncertain execution times, this work employs particle swarm optimization, using the value of the evaluation function to determine the best scheduling strategy. The parameters employed in the PSO method are defined in Table 1. The particles imitate the birds' foraging activity and are optimized in a G-dimensional space. The particle swarm contains N particles, and the position of the ith particle is
The speed is
The particle’s position is updated by updating the velocity in Table 1 in the iterative process.
In the process of optimization, the particle swarm finds the individual optimum and the global optimum by comparing the value of the evaluation function and then iteratively updates the velocity and position. In the kth iteration, the individual optimum value of the ith particle is pBesti, and the global optimal value is gBestk. The velocity update equation of particle i is
The displacement update equation of the particle is
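The velocity and displacement updates described above correspond to the canonical PSO equations; a minimal sketch (continuous form, before any discretization to VM indices) might look like:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=0.5, c2=0.5):
    """One canonical PSO update of velocity and position.

    Velocity: inertia term plus stochastic attraction toward the particle's
    own best (pbest) and the swarm's best (gbest). Position: x + v.
    """
    new_v = [
        w * v[d]
        + c1 * random.random() * (pbest[d] - x[d])
        + c2 * random.random() * (gbest[d] - x[d])
        for d in range(len(x))
    ]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

A particle sitting exactly at both its individual and global optimum with zero velocity stays put, as expected from the update equations.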
To use the PSO algorithm to solve the cloud resource scheduling problem, the particles first correspond to the tasks and virtual machines in cloud resource scheduling. The dimension of the particle solution space is the number of tasks, and the value range of each dimension is the number of virtual machines allocated; that is, the value in each dimension identifies a virtual machine. In each iterative update, the particle's position evolves and a new particle is generated, yielding a new correspondence between tasks and virtual machines, i.e., a new scheduling plan. For a cloud resource scheduling problem with 8 tasks and 3 virtual machines, the corresponding particles have 8 dimensions, and each dimension can take the value 0, 1, or 2. For the correspondence between tasks and virtual machines, Table 2 shows one possible representation of particles.
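The task-to-VM encoding in the example above can be decoded into a schedule as follows; the particle vector shown is one possible 8-task assignment in the style of Table 2, not its exact contents.

```python
def decode(particle, n_vms=3):
    """Map a particle (one VM index per task dimension) to a schedule.

    Returns a dict from VM index to the list of task indices assigned to it.
    """
    schedule = {vm: [] for vm in range(n_vms)}
    for task, vm in enumerate(particle):
        schedule[vm].append(task)
    return schedule

# one possible particle for 8 tasks over VMs 0, 1, 2
particle = [0, 2, 1, 0, 2, 2, 1, 0]
```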
4.2. Orthogonal Initialization
In the PSO algorithm, the optimization process of particles needs to be carried out through iteration. The initial state of the particle swarm has a direct impact on the subsequent optimization process. In the initial stage of particle search, the more orderly the initial solution is, the better it is for the subsequent particle iterative search. When the population is initialized, it is necessary to ensure that the particles are evenly distributed in the solution space as much as possible, which makes it necessary to satisfy the solution that the particles have all directions in the initialization stage. In the PSO algorithm, the random initialization of the population does not guarantee that the particles can be uniformly distributed in the solution space, and it is possible that the particles are concentrated in one area, or the similarity of the particles is too high, which is not conducive to the subsequent optimization process. In this paper, an orthogonal matrix is used to initialize the population, so that the entire population is uniformly distributed in the feasible solution space.
When the system has Ele elements and each element has Lev levels, a total of Lev^Ele combinations will be generated. If all combinations are tested, then when Lev and Ele are large, many similar combinations will be produced. Constructing an orthogonal matrix therefore screens out a uniformly distributed subset for initializing the population.
The particles in the solution space are tested with far fewer combinations, giving a smaller particle population that nevertheless achieves good coverage. Here, row denotes the total number of selected level combinations, and row is much smaller than Lev^Ele.
For the cloud resource scheduling problem where the number of tasks is 4 and the number of virtual machines is 3, orthogonal initialization of the particle population serves as an example. If each task were tried on each virtual machine, 3^4 = 81 experiments would be required; with an orthogonal initialization design, only 9 experiments are needed to find representative solutions. Moreover, as the number of virtual machines and tasks grows, orthogonal initialization retains the characteristics of being "evenly dispersed, neat, and comparable." Table 3 shows the resulting allocation scheme for 3 virtual machines and 4 tasks: using the orthogonal matrix to initialize the population, only 9 initial solutions are needed to be uniformly distributed in the solution space. Each row represents a scheduling scheme, the number of columns equals the number of tasks, and each entry is the number of the virtual machine selected to execute that task. To build an orthogonal matrix, first determine the basic columns, then construct the nonbasic columns from the basic columns, and finally merge the two.
The number of columns of the orthogonal matrix to be constructed equals the total number of tasks to be executed, as mentioned in Algorithm 1. The basic column J satisfies (16); the pseudocode for creating the orthogonal matrix follows.
After the basic and nonbasic columns are created, a complete orthogonal matrix has been constructed; to obtain a matrix suitable for particle swarm initialization, however, the appropriate columns must still be selected from it. Algorithm 2 produces the final matrix.
This constructs the orthogonal matrix used to initialize the population.
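As an illustrative instance of the basic-column/nonbasic-column construction (not the paper's exact pseudocode for Algorithms 1 and 2), the L9(3^4) orthogonal array matching the Table 3 scenario of 4 tasks and 3 virtual machines can be built as:

```python
def l9_orthogonal_array():
    """Build the L9(3^4) orthogonal array: 9 rows, 4 columns, 3 levels.

    Columns 1-2 are the basic columns (enumerating all 9 level pairs);
    columns 3-4 are nonbasic columns derived from them so that every pair
    of columns still contains each level combination exactly once.
    """
    rows = []
    for a in range(3):          # basic column 1
        for b in range(3):      # basic column 2
            rows.append([a, b, (a + b) % 3, (a + 2 * b) % 3])
    return rows
```

Each of the 9 rows is a candidate scheduling scheme: a VM index (0-2) for each of the 4 tasks, evenly spread over the 81-point solution space.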
4.3. Rerandomization
Because the PSO algorithm easily falls into a local optimum, a rerandomization method is used that lets particles jump out of local optima, so that they can explore a larger range of the solution space with higher optimization efficiency. The particles are iteratively updated by computing a variance to ensure that they can obtain better solutions: equation (17) gives the variance value suitable for the particle swarm, and the swarm is then iteratively updated according to it.
In equation (17), k represents the current iteration number, A represents the effective initial value of rerandomization, and S represents the slope, which controls the search range of particles. In the search process, the first part is called large-scale search, that is, extensive search. At this time, the slope of the variance curve is large, so that particles can search randomly in the search space far from the global optimal particle gBest. The second part is called small-range search, that is, fine search. At this time, the slope of the variance curve is small, and the particles are randomly searched around the global optimal particle gBest. The combination of the two parts can make the final particle converge to the optimal solution, so that the particle can search in the global scope, jump out of the local optimum, and search for the optimal solution. M is the number of iterations corresponding to the midpoint of the slope of the variance curve, and the two parts are divided by the midpoint M, which determines the search time of the broad search and the fine search.
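Equation (17) is not reproduced here; one curve consistent with the description of a large-slope broad-search phase followed by a small-slope fine-search phase divided at midpoint M is a sigmoid, sketched below under that assumption.

```python
import math

def rerand_variance(k, A=1.0, S=0.1, M=50):
    """Sigmoid-shaped variance schedule for rerandomization.

    Early iterations (k << M) keep the variance near A, the effective
    initial value, for a broad random search far from gBest; late
    iterations (k >> M) shrink it toward zero for a fine search around
    gBest. S controls the slope and M the midpoint dividing the phases.
    """
    return A / (1.0 + math.exp(S * (k - M)))
```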
4.4. Updating Inertia Weights
Whether a particle can converge to the current optimal state during the search depends on its convergence speed, and the inertia weight is the essential element regulating it in the PSO algorithm: the convergence rate is controlled by altering the particle's inertia weight in each iteration. When the inertia weight is large, the search range expands and the convergence rate slows, making an accurate solution hard to obtain; when it is small, the search range shrinks and the convergence rate accelerates, making it easy to fall into a local extremum. Exploiting this property, this study updates the inertia weight in real time and combines it with rerandomization, allowing the particle to explore the complete solution space while its convergence rate is managed. The inertia weight is changed in each iteration based on the change in the particle's evaluation function: when the evaluation function value becomes smaller after an iteration, the particle's inertia weight increases or remains unchanged; when it becomes larger, the inertia weight decreases. Equation (18) is the corresponding equation, by which the ith particle's inertia weight is updated in real time.
In the above equation, ωi(k) represents the inertia weight value of the current ith particle, with value range (0, 1); S represents the range of the expected evaluation function value; and ΔJi(k) represents the difference between the particle's current evaluation value and its previous one. The velocity after rerandomization is then updated with the inertia weight, i.e., calculated by incorporating equations (17) and (18) into equation (14).
Equation (14) is changed into
The particle’s position update equation is
In equation (19), variance(k) is the variance in rerandomization, and its value is determined by equation (17). ωi(k) is the inertia weight value of each particle in each update, and the value is determined by equation (18).
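Equation (18) is likewise not reproduced; the following sketch merely follows the qualitative rule stated above (raise or hold the weight when the evaluation improves, lower it otherwise), with the step size, the scaling by S, and the clamping to (0, 1) as illustrative assumptions.

```python
def update_inertia(w, delta_j, S, step=0.1):
    """Adjust one particle's inertia weight from the change in its evaluation.

    delta_j = J(k) - J(k-1). An improving particle (delta_j < 0) keeps or
    raises its weight to sustain exploration; a worsening one lowers it to
    speed convergence. S scales the expected range of evaluation values,
    and the result is clamped to stay inside (0, 1).
    """
    adjust = step * min(1.0, abs(delta_j) / S)
    w = w + adjust if delta_j < 0 else w - adjust
    return min(0.99, max(0.01, w))
```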
5. Simulation Experiments
5.1. Data Generation and Parameter Selection
Simulation tests are carried out on the cloud computing simulation platform CloudSim in order to examine and compare the performance of the models and methods proposed in this research. The experimental outcomes are influenced by both time and cost; the time factor t and the cost factor c in equation (6) are both set to 0.5. The dataset is created using the data generation approach described in literature [33], with task sizes and virtual machine processing speeds ranging over [3000, 130000] and [300, 1300], respectively. The execution time and cost of a task on a virtual machine are calculated using the method in Section 2.1, based on the size of the task and the virtual machine's processing speed; the velocity, position, and inertia weight of the particle swarm in the RIOPSO algorithm are updated repeatedly in response to changes in the evaluation function value. The values of the individual learning factor C1 and the group learning factor C2 influence the experimental results. When C1 = 1 and C2 = 0, the current particle learns only from its individual optimal value pBest, and the global optimal particle gBest has no effect on its update; when C1 = 0 and C2 = 1, the current particle learns only from the global optimal value gBest, and its individual experience has no effect. Both C1 and C2 are therefore set to 0.5 in this research. Table 4 shows the parameter settings used in this experiment.
Except for the different algorithm enhancement methodologies employed in this article, the same dataset and experimental settings were used to address the cloud resource scheduling problem in the same model and environment.
5.2. Evaluation and Analysis of Algorithm Performance Index
The algorithms NSGA-I, NSGA-II, NSGA-III [10], and MOEA/D and the RIOPSO algorithm proposed in this paper are used to solve the problem of cloud resource scheduling.
(1) Algorithm NSGA-I. In contrast to the genetic algorithm, the nondominated sorting genetic algorithm stratifies individuals by their dominance relationship before applying the selection operator.
(2) Algorithm NSGA-II. A fast nondominated multiobjective optimization method with an elite-retention strategy; it uses a crowding-degree comparison approach and is a Pareto-optimal optimization technique.
(3) Algorithm NSGA-III. A reference-point selection operation is introduced to replace the crowding-distance selection operation of the NSGA-II algorithm.
(4) Algorithm MOEA/D. A multiobjective evolutionary method based on decomposition.
(5) The IGD, HV, and C-Metric indicators are calculated to measure the performance of each method when the above techniques solve the cloud resource scheduling problem:
(a) IGD is a comprehensive performance index that calculates the minimum distance between each solution on the Pareto front and the solutions obtained by the algorithm. The smaller the IGD value, the more uniform the obtained solution distribution and the better the convergence.
(b) HV measures the size of the solution space covered by the nondominated solutions. The larger the HV value, the higher the quality of the algorithm's solutions.
(c) The C-Metric is calculated from the dominance relation between two Pareto solution sets and uses two Pareto fronts to judge convergence performance. C(A, B) = 1 indicates that A dominates all individuals in B, whereas C(A, B) = 0 indicates that A dominates no individual in B. The performance comparison results of the five algorithms are shown in Table 5; the performance measures are reported as mean values.
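The IGD index described in item (a) can be computed as the mean distance from each reference-front point to its nearest obtained solution; a minimal sketch:

```python
import math

def igd(reference_front, obtained_set):
    """Inverted generational distance: mean distance from each point on the
    reference Pareto front to its nearest obtained solution.

    Smaller values mean the obtained set covers the front more closely
    and uniformly.
    """
    total = sum(
        min(math.dist(r, s) for s in obtained_set)
        for r in reference_front
    )
    return total / len(reference_front)
```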
Table 5 shows that, in the IGD comparison, the RIOPSO algorithm achieves the smallest IGD value; the corresponding improvement in optimization is shown in Table 6. Its solution set is uniformly distributed, which lets the population converge better to the approximate Pareto front, and its comprehensive performance is better. The hypervolume (HV) index shows that the RIOPSO algorithm covers the largest target space between the nondominated solutions and the reference point and also has the best solution performance of the five algorithms. From the perspective of C-Metric coverage, the RIOPSO algorithm is dominated less than the other algorithms. This is because the orthogonal initialization strategy gives RIOPSO a higher-quality solution set at the initial stage, so it converges to the optimum with a better solution set. In summary, the improved strategies of the algorithm in this paper improve the quality of the solution and the overall performance when solving the cloud resource scheduling problem.
5.3. Model Comparison
Because task execution in real-life cloud resource scheduling is asynchronous and uncertain, this paper uses the triangular fuzzy method in the problem model so that the results are closer to reality. The following compares the experimental results for deterministic and uncertain execution times. When each deterministic execution time is represented by a triangular fuzzy number, the left and right ranges of the fuzzified execution time must be determined; the parameters in equations (8) and (9) are set to α1 = 0.9 and α2 = 1.2. Schemes of the same task size are compared by the mean value of the evaluation function. Figure 1 shows the comparison between the deterministic model and the fuzzy model. The figure shows that the evaluation function value under the fuzzy model is higher than under the deterministic model, which means uncertain factors must be considered: ignoring them leads to a deviation between the actual effect and the theoretical estimate and reduces the efficiency of the system.

5.4. Algorithm Performance Comparison
The RIOPSO algorithm is compared with three other algorithms in order to examine the specific advantages of each enhancement proposed in this paper. The SPSO (rerandomization particle swarm optimization) algorithm employs only the rerandomization strategy [34], without altering the problem's initial conditions; random initialization is used for the population. The SWPSO (rerandomization inertia weight particle swarm optimization) algorithm combines rerandomization with real-time updating of the inertia weight [35]. The OPSO (orthogonal initialization particle swarm optimization) algorithm optimizes the PSO algorithm with the orthogonal swarm-initialization approach alone [31]. The RIOPSO method is run under the same conditions as before so that its efficiency and superior optimization capability can be seen more intuitively.
The SPSO, SWPSO, and OPSO algorithms that are obtained are compared and examined. When the job scale is 100, 200, or 300 and the number of virtual machines is 5, optimization tests are performed and recorded for each of the four methods. The number of iterations and the value of the evaluation function related to the algorithm are shown by the horizontal and vertical coordinates, respectively.
The initial evaluation function values of the OPSO and RIOPSO algorithms are superior to those of the other two methods because the quality of their initial solutions is high, which correspondingly increases the particle swarm's search efficiency and demonstrates the benefit of orthogonal initialization. However, the iterative curves show that the OPSO method may fail to reach a better result than the other three algorithms, since its particles can fall into a local optimum while searching for the best solution. Compared with the SPSO algorithm, the SWPSO and RIOPSO algorithms control the particle search range and thus the search speed, allowing particles to jump out of local optima during the search and converge better to the global optimum. The figures also show that the RIOPSO algorithm, which combines the three aforementioned techniques, reaches better solutions in fewer iterations than the other three algorithms in both large-scale and small-scale problems, reducing the algorithm's iteration time. The RIOPSO algorithm therefore offers rapid optimization speed, strong optimization capability, and a good optimization effect.
Multiple experiments were run on four distinct datasets to demonstrate the RIOPSO algorithm's ability to handle the same number of tasks with varying numbers of virtual machines. The (task, virtual machine) configurations were (100, 5), (100, 10), (100, 15), and (100, 20), and the experimental results were averaged as shown in Figure 2. With the number of tasks fixed at 100, the abscissa indicates the number of virtual machines and the ordinate represents the value of the evaluation function. Figure 2 illustrates that when the number of tasks is the same but the number of virtual machines varies, RIOPSO finds a better solution and provides a better scheduling scheme than the other three algorithms.
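In such experiments each particle encodes one complete scheduling scheme. A common encoding, assumed here for illustration rather than taken from the paper, maps dimension j of a continuous particle position to the index of the virtual machine that runs task j:

```python
def position_to_assignment(position, n_vm):
    """Decode a continuous particle position into a discrete schedule.

    Dimension j of the position determines which of the n_vm virtual
    machines executes task j (a common encoding when applying PSO to
    scheduling; the exact decoding rule is an assumption here).
    """
    return [int(abs(x)) % n_vm for x in position]
```

For example, `position_to_assignment([0.2, 4.7, -3.1], 5)` assigns tasks 0, 1, and 2 to virtual machines 0, 4, and 3, respectively.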

Based on the evaluation function values of each algorithm in Figure 2, the relative difference percentages of the evaluation function values of the SPSO, SWPSO, and OPSO algorithms with respect to the RIOPSO algorithm are computed. As shown in Figure 3, the abscissa represents the number of virtual machines, while the ordinate represents the relative difference percentage. Regardless of the number of virtual machines, the RIOPSO algorithm has the strongest optimization capability of the four, indicating its effectiveness. With the time factor t and the cost factor c both set to 0.5, the four algorithms are then used to run repeated experiments on scheduling models of varying task scales, with 5 virtual machines in each case, and the results are averaged. Figure 4 depicts the total execution time of the tasks under the four algorithms, while Figure 5 depicts the total cost of task execution. The abscissas show the task scale, and the ordinates show the total execution time and total cost of the tasks under the corresponding scheduling scheme. The time comparison for these algorithms is given in Table 7 and the cost comparison in Table 8.



As shown in Figures 4 and 5, the RIOPSO algorithm yields the smallest values of both total task execution time and total task execution cost, achieving the dual objective of the shortest execution time and the lowest execution cost. According to the results of the above experiments, the RIOPSO algorithm described in this work outperforms the other three algorithms in optimization capability and efficiency; in finding the best cloud resource scheduling scheme, it reduces task execution time and cost, which improves the overall performance of cloud resource scheduling.
6. Conclusion
The main goal of this paper is to reduce the total completion time and total execution cost of tasks while accounting for the impact of uncertain factors on task execution time. To that end, a fuzzy cloud resource scheduling model is built and an improved hybrid PSO algorithm, RIOPSO, is proposed for cloud resource scheduling. The RIOPSO algorithm employs three complementary techniques: orthogonal initialization of the particle swarm, rerandomization of particles, and real-time updating of the inertia weight. These techniques show promising results in finding the optimal solution, jumping out of local optima, and improving particle convergence. The fuzzy model brings cloud resource scheduling closer to practical applications. When the RIOPSO algorithm is used to optimize fuzzy cloud resource scheduling, both task execution time and execution cost are reduced. Compared with the single-strategy SPSO, OPSO, and SWPSO algorithms, RIOPSO produces better results that are more in line with real-world applications.
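The triangular fuzzy representation of uncertain execution times can be sketched as a triple (a, m, b) of optimistic, most likely, and pessimistic times. Component-wise addition and the expected value (a + 2m + b) / 4 are common choices for operating on and defuzzifying triangular fuzzy numbers; they are assumptions here, not necessarily the exact operators used in the paper.

```python
def fuzzy_expected(a, m, b):
    """Crisp expected value of a triangular fuzzy number (a, m, b),
    with a <= m <= b; one common defuzzification choice."""
    return (a + 2 * m + b) / 4

def fuzzy_add(t1, t2):
    # Triangular fuzzy numbers add component-wise, so the total
    # fuzzy execution time of a task sequence is easy to accumulate.
    return tuple(x + y for x, y in zip(t1, t2))
```

For instance, a task estimated to take between 2 and 5 time units, most likely 3, has fuzzy time (2, 3, 5) and expected value 3.25.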
Data Availability
The data can be obtained from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The authors extend their appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding this work through Research Group no. RG-21-07-09.