Abstract

Cloud computing is a transformative technology that provides facilities such as storage, memory, and CPU, as well as servers and web services. It allows businesses and individuals to outsource their computing needs and to entrust data storage and processing to a network provider. Nevertheless, cloud computing remains a resource-finite domain in which cloud users contend for the available resources to carry out their tasks. Resource management (RM) is the process that deals with the procurement and release of resources, and managing cloud resources well is essential for improved utilization and service delivery. In this paper, we review the resource management techniques reported in the literature, concentrating on game-theoretic approaches as a potential solution for modeling the resource allocation, scheduling, provisioning, and load-balancing problems in cloud computing. The paper presents a survey of several game-theoretic techniques applied to cloud computing resource management and, based on this survey, offers a guideline to aid the adoption and use of game-theoretic resource management strategies.

1. Introduction

Game theory is a formal framework that includes a set of mathematical tools for studying complex interactions between interdependent rational players. Strategic games have a variety of applications in economics, politics, sociology, and other fields. Over the last decade, there has been an increase in studies using game theory to model and evaluate modern communication networks, as well as upcoming technologies such as cloud computing and other internet computation platforms [1].

Cloud computing provides metered, shared resources over the Internet, helping to avoid the costs of overprovisioning. Cloud computing supplies three main service models, namely, platform, software, and infrastructure services. As research and technology advanced, cloud computing came to deliver XaaS, meaning one or all as a service, where X may stand for communication, storage, data, network, and so on.

Cloud computing has had several positive effects on industry since its inception, such as easier software installation and application enhancement. As a recent IT transformation, cloud computing is still in its developmental stage.

Consequently, everyone from the technologist to the salesperson offers their own definition of cloud computing, thereby creating confusion [2]. There are therefore many interpretations of cloud computing, but we adopt the definition of the National Institute of Standards and Technology (NIST). NIST defines cloud computing as a model for on-demand network access to a shared pool of computing resources (such as memory and CPU, among others) that can be rapidly provisioned and released with minimal effort or interaction from the service provider [24].

On the other hand, resource management (RM) is a process that deals with the procurement and release of resources. It is a major issue in cloud computing architecture and entails the impartial distribution of resources such as storage, central processing units, servers, applications, and data. Cloud computing must make the resources requested by users available, but finite resources inhibit this, making it quite demanding to meet users’ needs.

Another issue is how to ensure resource optimization given cloud users’ request requirements and the cloud provider’s infrastructure, and how new resources can be correctly modeled for the diverse services that cloud computing performs so that the exact resources needed can be provisioned. Cloud users have no power over resources, so they request resources, such as CPU, memory, and storage, from cloud providers to meet their required purpose.

Moving large volumes of data from one provider to another is a further challenge. Robust techniques are therefore required for cloud resource management and optimization, and adequate knowledge and information are needed since much depends on the cloud service provider. This work analyses the suitability of game theory as a possible solution for modeling the resource management and optimization problems in cloud computing. Game theory fits the architecture of cloud computing well, bearing in mind that an array of actors with conflicting objectives can be scrutinized. Besides, the richness of game-theoretic models allows different cloud architectures and topologies to be considered.

1.1. Research Contributions

Cloud computing offers Internet-based computing services by allocating resources to meet users’ requirements, but this is not always achieved. Because it is difficult to make all demanded resources available, contention for shared resources increases over time. This study therefore aims to ensure efficient and effective access to cloud computing resources, taking into account that inefficiency may cause total system failure.

By employing an appropriate resource management strategy based on game-theoretic models for resource management and optimization, cloud service delivery can be improved. This study gives a broad view and understanding of the concept of game theory and its various application areas, where game theory offers a systematic approach to decision-making. From the literature surveyed, we deduce that resource management problems in cloud computing architecture have been addressed in depth, yielding properties such as fairness and high utilization.

Having analyzed several game-theoretic models for resource management strategies, we can say that no cloud user should receive more resources than the others or prefer the allocation of another. As a result, the total value of resources obtained by cloud users must equal the total resources accessible in the cloud.

This paper aims to present game-theoretic models as a potential solution for resource management in cloud computing architecture, in order to obtain the most effective management strategy. The objectives are to: review the literature on resource management in cloud computing, analyze several game-theoretic models for resource management in cloud computing, discuss the findings of the analysis, and conclude.

2. Literature Review

2.1. Cloud Computing

Cloud computing emanated from the concept of utility computing. It is defined as the provision of computational and storage resources as a metered service to users [5]. A cloud is a portion of cluster resources capable of expanding and contracting to accommodate load changes [3, 6].

This idea highlights the reality that modern information technology settings necessitate the ability to dynamically increase capacity or add capabilities while limiting the need to expend money and time on new infrastructure acquisition [7]. The data centers, which consist of networked servers, cables, power supplies, and other components, form the backbone of cloud computing, hosting operating applications and storing business information [8].

2.1.1. Characteristics of Cloud Computing

The following are some of the characteristics:
(i) Multitenancy: It allows a large group of users to share resources and costs; culminates in the centralization of infrastructure and, as a result, cost reductions due to economies of scale; and allows for dynamic resource allocation, which is monitored by the service provider
(ii) On-demand services
(iii) Network access, using the Internet as a medium
(iv) Scalability, by maintaining the elasticity of resources

2.1.2. Advantages of Cloud Computing

The various advantages of cloud computing are listed below:
(i) Open access: With a suitable web connection, cloud specialists and organizations can be reached.
(ii) Enhanced economies of scale: The client enjoys lower investment and operating costs, while the supplier earns more revenue by orchestrating the infrastructure services with high sustainability and flexibility.
(iii) On-demand infrastructure and computational control: Users may obtain computational power, storage, and other infrastructure based on their requirements under a pay-per-use program.
(iv) Enhanced resource usage: Clients use resources effectively because they return them to the cloud provider when they no longer require them. As a result, adaptability and versatility can be increased.
(v) Decreased information technology (IT) infrastructure needs: Cloud computing provides the client with infrastructure as a service on demand, so there is no longer any need to purchase an IT infrastructure; the client can buy it from a cloud provider at any point in time.
(vi) Pooling of resources: The buyer generally has no knowledge of the provider’s location. As a result, the supplier serves a variety of customers by assigning resources in a way that is both effective and practical.
(vii) Organizations focus on their core competencies: Non-IT clients can contact IT service providers for their business activity needs [9].

2.2. Cloud Computing Services

These services are mainly based on three delivery models:
(i) Software as a service (SaaS): This allows cloud users to access the provider’s applications over the Internet
(ii) Platform as a service (PaaS): This allows users to deploy their applications on a platform provided by the cloud service provider
(iii) Infrastructure as a service (IaaS): This allows users to rent information technology (IT) infrastructure, such as storage, server, and networking resources, provided by the cloud service provider

Virtual machines, cloud computing, and containers are just a few examples of IT innovation over the past two decades that focused on ensuring that consumers are isolated from the underlying physical system that runs code [10].

Some of these innovations are listed in the following section.

2.3. Serverless Computing (SC), Virtual Machine (VM), and Containers
2.3.1. Serverless Computing (SC)

Serverless computing is a type of cloud computing in which the servers required for computation have been hidden away, leaving the cloud provider to decide where and how to do the computation [11]. Serverless computing, also referred to as serverless architecture, refers to function-as-a-service solutions in which a client writes code that only handles business logic and uploads it to a provider. All hardware provisioning, virtual machine and container management, and even functions such as multithreading, which are frequently incorporated into application code, are handled by that provider [10].
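
As an illustration only, the following minimal sketch shows what such business-logic-only code might look like in Python; the handler(event, context) entry point and the order-total logic are assumptions made for the example rather than the interface of any particular provider discussed here.

```python
import json


def handler(event, context):
    """Business-logic-only function: the provider supplies the servers,
    scaling, and runtime that invoke this entry point."""
    # Hypothetical payload: a list of order items whose total we compute.
    items = event.get("items", [])
    total = sum(item["price"] * item["quantity"] for item in items)
    return {
        "statusCode": 200,
        "body": json.dumps({"order_total": total}),
    }
```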

In contrast to typical cloud computing, serverless computing is distinguished by the fact that the infrastructure and platforms on which the services are delivered are transparent to clients. Customers are only concerned with the desired functionality of their application under this method, and the rest is left to the service provider [12].

2.3.2. Virtual Machines (VM)

A virtual machine is a computer file, commonly referred to as an image, that mimics the behavior of a real computer. A virtual machine (VM) works like a physical computer, such as a laptop, smartphone, or server. It features a CPU, RAM, and disks for storing files, as well as the ability to connect to the Internet if necessary. Virtual machines are software-defined computers that run on physical servers and exist only as code. In present cloud computing architecture, VMs provide unique solutions to wasted resources, application inflexibility, software manageability, and security concerns.

VMs are viewed as a safe computing resource, whereas SC is treated as a common-pool resource that is vulnerable to overexploitation because of the uncertainty created by its shared nature [13]. In serverless computing, users’ computing tasks are specified as a pipeline of event-triggered functions, and the SC model allows users to offload computing tasks to the cloud provider, who remains responsible for managing the infrastructure and the respective resources. In contrast, in the virtual machine (VM) model users rent VMs from the cloud provider, and resources may sit idle due to sporadic requests, resulting in unwanted monetary costs.

Each VM is classified as a “secure resource” because it is rented exclusively by a single user, who benefits from guaranteed computing service. The SC, on the other hand, is often more cost-effective and has the potential to provide great user satisfaction [14].

2.3.3. Containers

Containers have recently been shown to be a particularly successful lightweight method for virtualizing apps in the cloud [15]. Containers are software packages that include all of the components needed to run in any environment. Containers virtualize the operating system in this fashion, allowing them to run anywhere from a private data center to the public cloud or even on a developer’s laptop. Containers are a common approach to encapsulating the code, settings, and dependencies of an application into a single object. Containers run as resource-isolated processes and share an operating system deployed on the server, ensuring speedy, reliable, and consistent deployments independent of the environment.

2.4. Cloud Computing Resource Management (RM)

Gonzalez et al. [7] and Jennings [16] both define resource management as the process of assigning computing, storage, networking, and energy resources to a set of applications in order to meet the performance targets and requirements of infrastructure providers and cloud customers.

Support for heterogeneity of hardware and capabilities, pay-per-use model, and on-demand service model are some of the important elements that make resource management more challenging in cloud computing [9].

Manvi and Shyam [17] classified resource management into nine components:
(i) Provisioning: The allocation of resources to a workload
(ii) Allocation: Resource distribution across competing workloads
(iii) Adaptation: The ability to modify resources dynamically to meet workload demands
(iv) Mapping: The relationship between the workload’s resource requirements and the cloud infrastructure’s resources
(v) Modeling: A framework for predicting a workload’s resource requirements by describing the most significant resource management attributes, such as states, transitions, inputs, and outputs, within a given context
(vi) Estimation: An intelligent guess about the real resources needed to complete a task
(vii) Discovery: Identifying a list of resources that can be used to run a workload
(viii) Brokering: The process of bargaining the availability of resources through an agent in order to ensure that they are available at the proper time to complete the assignment
(ix) Scheduling: A timetable of events and resources that determines when a workload should begin or conclude based on the activity’s duration, predecessor activities, predecessor relationships, and assigned resources

Resource management in the cloud, according to Singh [18], consists of three functions: resource provisioning, resource scheduling, and resource monitoring.

2.4.1. Resource Provisioning

The authors characterize this stage as determining the appropriate resources for a given workload based on the QoS requirements stated by cloud users.

2.4.2. Resource Scheduling

This is the process of mapping, allocating, and executing workloads depending on the resources chosen during the resource provisioning stage.

Resource monitoring is a complementary phase to achieve better performance optimization.

Resource management is a critical component of any cloud, and inefficient resource management has a direct impact on performance and cost, as well as an indirect impact on system functioning since it becomes too expensive or ineffective as a result of poor performance [6]. The cloud resource management practices connected with the three cloud delivery formats—IaaS, PaaS, and SaaS—differ. When cloud service providers can forecast a rise in demand, they can provide resources ahead of time.

Also, cloud resources are controlled on three independent levels:
(i) Cluster level: A cluster resource manager (CRM), a software complex that controls resources and tasks in a cluster to preserve its efficiency, represents the cluster level of power management. The CRM is in charge of cloud creation and deletion.
(ii) Node level: An operating system (OS) controls the high-level state of equipment by managing power at the node level. For example, the OS can put a processor (CPU) into sleep mode or spin down drives to save energy.
(iii) Hardware level: Modern CPUs have a large number of modules, some of which are not always active in a given process. As a result, unused modules can be turned off. This is accomplished using a specific circuit that is in charge of the CPU’s internal power management, so all administration is done at the hardware level, with no involvement from the operating system [6].

For multiobjective optimization, cloud resource management necessitates complicated rules and judgments. This is why preparing ahead of time for the administration of these resources will aid in a smooth transition to cloud computing.

2.4.3. Classification of Cloud Resources

Cloud resources can be classified in a variety of ways, including as physical and logical resources or as hardware and software resources. Cloud computing resources could be classified as follows:
(i) Compute resources: To create computational resources, many physical resources must be pooled. Multiple elements such as processors, memory, network, and local I/O make up the computing capacity of the cloud environment.
(ii) Networking resources: These resources interconnect the compute and storage components of the cloud and carry client traffic to and from the services.
(iii) Storage resources: These resources manage client information and make it accessible over the network. The cloud enables elasticity in storage resources by allowing users to scale their storage space up or down on a lease basis based on their needs, which is challenging in a traditional database [9].

The taxonomy of cloud resource management found in the literature is shown in Table 1.

2.5. Resource Allocation

Resource allocation is a method that ensures virtual machines are assigned when several applications demand various resources, such as CPU and memory, among others. The cloud is built from many physical machines, and each physical machine hosts several virtual machines that are allocated to the end users as computing resources. A virtual machine is a program that behaves like a real computer [23].

Resources in the cloud domain are allocated so that users or clients are satisfied with little processing time, while the providers of these resources want to optimize resource utilization and make the expected profit. The distributed cloud domain is heterogeneous and dynamic. A heterogeneous cloud typically combines public and private resources from more than one cloud provider, which makes the distribution of resources quite challenging.

The job of allocating resources to cloud users is also made difficult by the growth in demand from users and the limited availability of resources. As a result, several methods and variants have been suggested to assign these resources in the most effective way possible [24].

Cloud computing makes it possible for data to be saved and utilized efficiently by reliable applications. It has been identified as a standard answer to big-data problems; one major issue encountered when resources are shared across the Internet is that cloud users need the cloud provider to allocate exactly the resources they require.

Another challenge with the allocation of resources is complying with the service-level agreement (SLA) negotiated with the user involved. There are two types of SLA. In class-based SLA, QoS is measured with performance metrics for each job class; in job-based SLA, QoS is measured using the metrics of individual jobs. Users consider job-based SLA the more robust of the two, a view that providers do not share [25].

2.6. Resource Optimization

Many researchers and scientists have proposed different techniques in the area of task stability in the cloud computing environment. An excess supply of resources brings about unnecessary costs and wastage, while an undersupply of resources can disrupt the effectiveness of the application.

Optimally allocating resources to users within finite time, so as to achieve excellent service, depends on rightly allocating resources to users based on their requirements. Resource optimization refers to choosing the most appropriate option from an array of alternatives based on standard criteria. The performance of cloud computing depends largely on how well the resources (virtual machines) allocated to users by the cloud providers are optimized [26].

Virtual machines (VMs) imitate a physical computer system. A VM is software that uses the physical resources of a system, such as CPU, RAM, and disk storage, but is technically isolated from other computer software. Allocating these resources based on users’ requirements is a major challenge in cloud computing, and several algorithms have been proposed to provide a solution.

The challenges presented by resource allocation depend on the data about the resources made available by the cloud service provider. Cloud services provide almost the same resources but differ in other respects, such as performance and service types [27].

In cloud computing, a productive resource allocation method assigns soliciting jobs to the readily accessible resources of service nodes. It is pertinent to ensure that the total resources demanded on any service node do not exceed its capacity, and that tasks are divided into subtasks that can be utilized to improve resource utilization [28].

2.7. Review of Related Works

Cloud computing is the use of the Internet to deliver services and resources. Many programs are self-service enabled. These dynamic networks require a proper resource management strategy to assign essential resources to users’ demands.

Thakur et al. [29] investigated the design and implementation of a game-theoretic approach for resource management that took into account the trust values between the federation’s participating cryptographic service providers.

Liaqat et al. [30] restructured the nova-scheduler to propose a multiresource-based virtual machine (VM) placement approach that improves CPU utilization and execution time. When compared experimentally with other well-known techniques, the proposed method improved execution time by 50%. The proposed solution covers only computational resources, so it needs to be extended to also consider network and storage resources.

Shuja et al. [31] investigated the enabling methodologies and technologies for sustainable cloud data centers (CDCs) from multiple perspectives. In addition, case examples from academia and business were given to validate the results of CDC sustainability initiatives. According to the comprehensive survey, sustainability solutions can cut both energy expenses and the carbon footprint of CDCs. The taxonomies offered categorize the parameters of CDC sustainability measures. Several hurdles to sustainable CDCs are also identified, such as dealing with the insecurity of renewable energy supplies.

Skourletopoulos et al. [32] provided a game-theoretic formulation of the technical debt management problem at the level of cloud-based services. A technical debt measuring game is created, with the current number of players per service parameterized, and each new end-user having the option of using any of the cloud-based services available.

Wang et al. [33] formulated two distinct optimization objectives. The first is set up to reduce the level of load imbalance, while the second is set up to optimize the use of resources and also limit energy consumption. For optimal virtual machine placement, resampled binary particle swarm optimization (RBPSO) was proposed. To control the heterogeneity of the population, unnecessary computation is limited, which enhances the feasibility and effectiveness of the algorithm. The proposed model performs better than BPSO and the genetic algorithm (GA), but dynamic management of the virtual machines is still required to limit energy consumption and enhance service quality.

Wang et al. [34] applied a mapping and management architecture to explain the need for resource management in virtualized ultradense small-cell networks and to explore the challenge of user-oriented virtual resource management. By modeling the virtual resource management problem as a hierarchical game, they derived closed-form solutions for spectrum, power, and price. Furthermore, they proposed and examined the convergence of a customer-first virtualization algorithm that realizes user-oriented service virtualization.

Koloniari and Sifaleras [1] gave a classification of ways to deal with a number of challenges faced in the design and deployment of peer-to-peer (P2P) and cloud systems, as well as a study of current developments in game-theoretic approaches in P2P networks and cloud systems.

Mustafa et al. [35] presented two consolidation-based energy-efficient techniques, maximum capacity best-fit decreasing (MCBFD) and minimum power best-fit decreasing (MPBFD), to reduce energy consumption together with the resultant SLA violations. To achieve better energy efficiency, workload consolidation and utilization thresholds are used. The lower threshold identifies underutilized servers, which leads to a decrease in energy consumption, whereas the upper threshold reduces SLA violations by keeping some of the resources free to accommodate the ever-changing demands of VMs. The proposed techniques perform better than the selected heuristic-based techniques in terms of energy, SLA, and migrations.

Xiong et al. [36] proposed a lightweight infrastructure of the proof of work-based blockchains, where the computation-intensive part of the consensus process is offloaded to the cloud/fog. They modeled the blockchain consensus process’ compute resource management as a two-stage Stackelberg game, in which the profit of the cloud/fog provider and the utility of individual miners are jointly optimized.

Yang et al. [37] utilized matching theory to construct a distributed matching method that maximizes the social welfare of resource-constrained fog nodes while ensuring various fog node mining requirements.

Zafari et al. [38] formulated the resource-sharing model as a multiobjective optimization problem and provided a solution framework based on cooperative game theory (CGT). They analyzed the approach of allocating resources first to native applications of each service provider and then sharing the remaining resources with applications from other service providers. They proposed two allocations: game-theoretic Pareto optimal allocation (GPOA) and polyandrous-polygamous matching based Pareto optimal allocation (PPMPOA). The resulting allocations are Pareto optimal, and the grand coalition of all service providers is more stable.

Feng et al. [39] offered a novel gaming strategy for cyber risk management. To move cyber risks from the fog computing environment to a third party, they use the cyber-insurance idea. They use a dynamic Stackelberg game to depict this dynamic interactive decision-making dilemma. They create an evolutionary subgame to assess the provider’s protection and cyber-insurance subscription methods, as well as the attacker’s plan.

3. Methodology

In this section, the methodology applied to solve the resource management and optimization problems is presented.

3.1. Resource Management Modeling

Cloud computing providers operate large-scale distributed data centers with diverse physical servers that provide computing resources under a pay-as-you-go model. To simplify users’ selections, cloud computing providers offer a collection of predefined virtual machine (VM) types; each VM type is defined by its number of CPU cores, its storage and memory sizes, and the sizes of other resources. In cloud computing, users deploy high-performance applications on clusters of virtual machines to achieve tasks such as web and enterprise services. Cloud computing providers adjust their resource management strategies dynamically, since users’ requirements are heterogeneous and vary over time. We are concerned with an impartial and active resource management strategy for cloud computing architecture; thus, it is essential to unify control and manage the physical resources using a resource management system. In this paper, we present a comparative analysis of resource management strategies and metrics in cloud computing architecture using various game-theoretic models.
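
For concreteness, the sketch below shows one way the VM types and user requests described above could be represented; the catalogue entries, sizes, and prices are invented for illustration and are not taken from any provider discussed in this paper.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class VMType:
    """A provider-defined VM type: fixed sizes billed pay-as-you-go."""
    name: str
    cpu_cores: int
    memory_gb: int
    storage_gb: int
    price_per_hour: float


@dataclass
class UserRequest:
    """Resources a user asks for; heterogeneous across users and over time."""
    user_id: str
    cpu_cores: int
    memory_gb: int
    storage_gb: int


# Illustrative catalogue a provider might publish (all values are invented).
CATALOGUE = [
    VMType("small", 2, 4, 50, 0.05),
    VMType("medium", 4, 16, 100, 0.20),
    VMType("large", 16, 64, 500, 0.80),
]


def cheapest_fit(request: UserRequest) -> Optional[VMType]:
    """Return the cheapest catalogue type that covers the request, if any."""
    feasible = [t for t in CATALOGUE
                if t.cpu_cores >= request.cpu_cores
                and t.memory_gb >= request.memory_gb
                and t.storage_gb >= request.storage_gb]
    return min(feasible, key=lambda t: t.price_per_hour, default=None)
```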

3.2. Game-Theoretic Resource Allocation Modeling

Cloud computing users ask for various types of virtual machines to accomplish diverse tasks. The implementation involves multidimensional resources with changing requirements per task.

3.2.1. Resource Allocation Problem

In this paper, impartial resource allocation means that all users have an equal portion of resources. In cloud computing, where users’ requirements are heterogeneous, resources are allotted to users according to their requirements. Each user holds a determined portion of the total cloud capacity across the diverse resources, which is called the user’s main portion. The objective of impartial resource allocation is to balance the main portions of the users. This resource allocation problem can be modeled using game theory.
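
The following is a minimal sketch of one way to balance users’ main portions, interpreting the main portion as a user’s largest per-resource share of the cloud capacity; the progressive-filling loop, step size, and demand format are assumptions made purely for illustration.

```python
def balance_main_portions(capacity, demands, step=0.001):
    """Progressively grant each user's demand vector in small increments,
    always advancing the user whose main portion (largest per-resource
    share) is currently smallest, until the next increment no longer fits.

    capacity: dict resource -> total available, e.g. {"cpu": 9, "mem": 18}
    demands:  dict user -> per-task demand, e.g. {"A": {"cpu": 1, "mem": 4}}
    Returns dict user -> granted task units.
    """
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0.0 for u in demands}

    def main_portion(user):
        return max(tasks[user] * demands[user][r] / capacity[r] for r in capacity)

    while True:
        u = min(tasks, key=main_portion)            # smallest main portion moves next
        need = {r: demands[u][r] * step for r in capacity}
        if any(used[r] + need[r] > capacity[r] for r in capacity):
            return tasks                            # some resource is exhausted
        for r in capacity:
            used[r] += need[r]
        tasks[u] += step


# Example: two users contending for 9 CPUs and 18 GB of memory.
shares = balance_main_portions({"cpu": 9, "mem": 18},
                               {"A": {"cpu": 1, "mem": 4},
                                "B": {"cpu": 3, "mem": 1}})
```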

3.2.2. Resource Optimization Problem

During run time, the resources of a physical server may not be completely utilized. In a cloud computing environment, improving resource utilization requires considering resource usage along each resource dimension. To attain optimal utilization of resources, cloud computing providers consolidate virtual machines onto the available machines so that the resource requirements on a single server are balanced. This virtual machine placement problem can be addressed as a resource optimization problem.
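
As an illustration of the placement problem itself, rather than of any specific algorithm from the surveyed papers, the sketch below consolidates VMs onto servers with a simple best-fit heuristic over CPU and memory; the capacity format and scoring rule are assumptions.

```python
def place_vms(vms, server_capacity, n_servers):
    """Greedy best-fit placement: sort VMs by total demand and put each on
    the server with the least remaining slack that still fits it.

    vms: list of (cpu, mem) demands; server_capacity: (cpu, mem) per server.
    Returns a list mapping each server index to its assigned VMs.
    """
    servers = [{"cpu": server_capacity[0], "mem": server_capacity[1], "vms": []}
               for _ in range(n_servers)]
    for cpu, mem in sorted(vms, key=lambda v: v[0] + v[1], reverse=True):
        candidates = [s for s in servers if s["cpu"] >= cpu and s["mem"] >= mem]
        if not candidates:
            raise RuntimeError("insufficient capacity for VM")
        best = min(candidates, key=lambda s: (s["cpu"] - cpu) + (s["mem"] - mem))
        best["cpu"] -= cpu
        best["mem"] -= mem
        best["vms"].append((cpu, mem))
    return [s["vms"] for s in servers]
```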

To achieve the set objectives, research on game-theoretic resource allocation strategy was reviewed. A comparative analysis of game-theoretic resource allocation strategies was provided in Table 2, to understand the various game theories employed, as well as their benefits and limitations.

3.3. Game Theory in Load Balancing

The goal of cloud computing is to share resources in a consistent manner while also achieving economies of scale. Load balancing ensures resource availability while reducing server performance overhead. A load balancer’s job is to distribute traffic to the various servers so that they are all equally loaded. This can support a growing number of users and improve the reliability of cloud applications. Load-balancing algorithms can be static or dynamic, and they can be centralized or decentralized.

In the centralized model, for example, one node in the system operates as the scheduler and makes all load-balancing decisions, receiving information from the other nodes. In the decentralized model, load-balancing decisions are made by all nodes in the system. Because obtaining and maintaining the dynamic state information of the entire system is exceedingly expensive for each node, most decentralized systems require each node to collect and keep only partial knowledge locally and to make suboptimal decisions on that basis.
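
To make the game-theoretic view of load balancing concrete, the sketch below runs a best-response dynamic in which each job repeatedly migrates to the least-loaded server until no job can improve, a state that is a pure Nash equilibrium of this simple load-balancing game; the round-robin initialization and the additive load model are assumptions for illustration.

```python
def best_response_load_balancing(job_sizes, n_servers, max_rounds=100):
    """Each job greedily moves to the server whose load (after the move)
    would be smallest. Stops when no job wants to move."""
    assignment = [i % n_servers for i in range(len(job_sizes))]  # round-robin start
    loads = [0.0] * n_servers
    for job, server in enumerate(assignment):
        loads[server] += job_sizes[job]

    for _ in range(max_rounds):
        changed = False
        for job, current in enumerate(assignment):
            loads[current] -= job_sizes[job]               # take the job out
            best = min(range(n_servers), key=lambda s: loads[s])
            loads[best] += job_sizes[job]                  # best response: least load
            if best != current:
                assignment[job] = best
                changed = True
        if not changed:                                    # no profitable deviation
            break
    return assignment, loads
```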

3.4. Game Theory in Resource Provisioning

Because scientific workflows involve intensive calculations and interdependence across procedures, resource management is a crucial issue for them. To manage cloud resources, a number of algorithms and strategies have been created. Resource provisioning time, i.e., the period between scaling up or down and the actual provisioning or deprovisioning of resources, is one of the key challenges of cloud computing.

Wu et al. [50] investigated how cloud resources can be managed using a combination of state-action-reward-state-action (SARSA) learning and genetic algorithms. This is accomplished by picking the most appropriate set of actions to maximize resource use. The suggested method’s agents are converged using a genetic algorithm, and global optimization is achieved. By keeping track of work deadlines, the fitness function used by this evolutionary algorithm aims to achieve more effective resource use and better load balancing.
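
The sketch below is not a reproduction of the method in [50]; it only illustrates the tabular SARSA update that such an approach builds on, applied to a toy auto-scaling decision. The state and action encodings, the environment interface, the start state, and the hyperparameters are all assumptions.

```python
import random
from collections import defaultdict


def sarsa_scaling(env_step, n_episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular SARSA for a toy auto-scaling problem.

    env_step(state, action) -> (reward, next_state, done) is supplied by the
    caller; states could encode utilization buckets, and the actions are
    scale-down / hold / scale-up (-1, 0, +1). Episodes are assumed to start
    in state 0.
    """
    actions = (-1, 0, +1)
    q = defaultdict(float)                       # (state, action) -> value

    def choose(state):
        if random.random() < epsilon:            # epsilon-greedy exploration
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, a)])

    for _ in range(n_episodes):
        state, action = 0, choose(0)
        done = False
        while not done:
            reward, next_state, done = env_step(state, action)
            next_action = choose(next_state)
            # On-policy SARSA update: bootstrap from the action actually taken next.
            target = reward if done else reward + gamma * q[(next_state, next_action)]
            q[(state, action)] += alpha * (target - q[(state, action)])
            state, action = next_state, next_action
    return q
```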

3.5. Game Theory in Task Scheduling

Task scheduling is the process of arranging incoming requests (tasks) in a specific order to maximize the use of available resources. One of the most challenging aspects of cloud computing is efficiently scheduling jobs and completing them before their deadlines, in order to maximize processor usage and throughput while minimizing task waiting time.

The scheduler’s main purpose is to receive tasks from a group of users and assign them to a group of virtual machines. Service users must submit their requests online because cloud computing is a technology that offers services through the Internet.
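
As a minimal illustration of the scheduler’s job described above, and not a method taken from the surveyed works, the sketch below assigns each incoming task to the virtual machine on which it would finish earliest; the task-length and VM-speed inputs are assumed example quantities.

```python
def schedule_tasks(task_lengths, vm_speeds):
    """Greedy min-completion-time scheduling: each task, in arrival order,
    goes to the VM on which it would finish earliest.

    task_lengths: list of task sizes (e.g., million instructions)
    vm_speeds:    list of VM speeds (e.g., MIPS)
    Returns (assignment, finish_times), where assignment[i] is the VM index
    chosen for task i.
    """
    finish_times = [0.0] * len(vm_speeds)
    assignment = []
    for length in task_lengths:
        vm = min(range(len(vm_speeds)),
                 key=lambda v: finish_times[v] + length / vm_speeds[v])
        finish_times[vm] += length / vm_speeds[vm]
        assignment.append(vm)
    return assignment, finish_times
```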

4. Discussion of Findings

In this paper, we have examined the crucial issue of resource management in cloud computing architecture, since it affects the efficacy and proficiency of the system. This issue is presently under intense investigation by the game theory and computer science communities.

Through the literature reviewed, we gained an understanding of the concept of game theory and its various application areas. Game theory aims to provide a systematic approach to decision-making. This posed the question of whether game theory can accurately model resource management problems in cloud computing architecture. To answer the question, further research was done to understand how game theory and cloud computing can coexist. On this note, research on game-theoretic models for resource management was reviewed, along with the methods employed and their brief descriptions, in Section 3.

The goal of resource management in cloud computing is to mitigate the underutilization of resources. Through this paper, we have shown the various approaches of game theory in modeling resource management problems in cloud computing architecture.

From Tables 3–6, some of the game theories employed are: a finite extensive-form game with perfect information [45], a repetitive game with incomplete information in a non-cooperative environment [46], a cooperative and non-cooperative gaming model [47], a cooperative game and bargaining game algorithm [48], evolutionary game theory [49], the Stackelberg game [50], and a non-cooperation model [51].

These were applied to satisfy diverse users’ requests whose needs exceed those of a single physical machine. Virtualization and placement of the required resources are enabled by optimization theory and algorithms such as the backward induction approach [45], Nash equilibrium [46], the evaporation-based water cycle algorithm [49], Pareto efficiency [48], a load balancer [50], and a decision model [51]. In this way, a resource can be provisioned optimally and efficiently.
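
To make the backward-induction approach mentioned above concrete, the sketch below solves a toy two-stage provider/user game by working backwards from the leaves; the game tree and the payoff numbers are invented for illustration and do not come from [45].

```python
def backward_induction(node):
    """Solve a finite extensive-form game with perfect information.

    A node is either a leaf: ("leaf", (payoff_p0, payoff_p1))
    or a decision node: ("node", player_index, {action: child_node}).
    Returns (payoffs, plan), where plan maps decision-node ids to the
    action chosen there (a subgame-perfect strategy).
    """
    if node[0] == "leaf":
        return node[1], {}
    _, player, children = node
    plans, best_action, best_payoffs = {}, None, None
    for action, child in children.items():
        payoffs, child_plan = backward_induction(child)
        plans.update(child_plan)
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_action, best_payoffs = action, payoffs
    plans[id(node)] = best_action
    return best_payoffs, plans


# Toy game: the provider (player 0) sets a high or low price; the user
# (player 1) then accepts or rejects. Payoffs (provider, user) are invented.
game = ("node", 0, {
    "high": ("node", 1, {"accept": ("leaf", (8, 2)), "reject": ("leaf", (0, 0))}),
    "low":  ("node", 1, {"accept": ("leaf", (5, 5)), "reject": ("leaf", (0, 0))}),
})
payoffs, plan = backward_induction(game)  # provider plays "high"; user accepts
```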

In addition, these resource management strategies, when provisioned, are found to improve resource allocation in terms of lower task allocation time, less resource wastage, and higher request satisfaction.

The comparative analysis in Tables 3–6 can serve as a specification guide for researchers, academics, and industry experts in choosing the game-theoretic resource management strategy suitable for their needs. In addition, the following guidelines were developed and should be considered when choosing a resource management strategy based on game theory, in order to achieve impartial resource allocation and optimized resource utilization (a brief sketch illustrating the first three guidelines follows the list):
(i) The amount of resources received by users should be equivalent to splitting the total resources equally
(ii) No user prefers the allocation strategy of another user
(iii) Increasing the volume of a user’s resources without decreasing the resources allocated to another user should be impossible
(iv) A change in a user’s strategy does not mean the user can get more utility
(v) The minimum usage across the several resources of the physical servers is maximized
(vi) The irregular usage of multidimensional resources is minimized, since most resource fragmentation is caused by unequal multiresource requirements
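
The first three guidelines are properties that can be checked on a concrete allocation. The sketch below shows, under an assumed dictionary representation of allocations and a caller-supplied utility function, one way such checks could be coded; it is illustrative only.

```python
def equal_split(total, n_users):
    """Guideline (i): the benchmark bundle obtained by splitting every
    resource equally among the users."""
    return {r: amount / n_users for r, amount in total.items()}


def is_envy_free(allocations, utility):
    """Guideline (ii): no user prefers another user's bundle to its own.
    `utility(user, bundle)` is supplied by the caller."""
    return all(utility(u, allocations[u]) >= utility(u, allocations[v])
               for u in allocations for v in allocations)


def leaves_no_slack(allocations, total):
    """Necessary condition for guideline (iii) under monotone utilities:
    if some capacity were left unallocated, one user's bundle could be
    grown without shrinking anyone else's."""
    return all(sum(bundle.get(r, 0) for bundle in allocations.values()) >= amount
               for r, amount in total.items())
```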

5. Conclusion and Recommendation

This paper offers a resolution to the resource management problem in cloud computing architecture, whereby cloud computing providers can effectively and optimally respond to users’ differing requests for resources.

Many essential issues are considered, such as fairness, availability, optimization, utilization, provisioning, scheduling, and so on. To address the growing intricacy of the resource management problem in a dynamic and constantly evolving setting, this paper focused on game-theoretical methods and presented a comparison of various strategies.

Game-theoretic resource management strategies have received substantial attention in cloud computing communities and applications for resolving resource optimization, resource allocation, task scheduling, and resource provisioning problems.

Furthermore, the security issues that arise when resources are managed in a cloud computing setting were not considered in this paper. These pose another risk that has hindered the adoption of game-theoretic resource management strategies in cloud computing architecture. We recommend future studies into the security issues inherent in game-theoretic resource management strategies.

Conflicts of Interest

The authors declare that they have no conflicts of interest.