Abstract
By offloading computation tasks, multi-access edge computing (MEC) supports diverse services and reduces the delay and energy consumption of mobile devices (MDs). However, the limited resources of edge servers may become a bottleneck for task computing in high-density scenarios. To address this challenge, we propose a parked vehicle-assisted multi-access edge computing (PV-assisted MEC) architecture that leverages the underutilized resources of parked vehicles to execute tasks, enabling MEC servers to expand their capability flexibly. To achieve efficient offloading, we propose a PV-assisted MEC offloading scheme for a multi-MD environment and design a game-based distributed algorithm that minimizes the overhead of MDs while further reducing the burden on the MEC server. Simulation results show that, compared with a conventional MEC system, our scheme reduces the burden on the MEC server by 5% and the offloading overhead by 17%.
1. Introduction
With the improvement of mobile devices' capabilities and the ever-increasing interest in mobile applications, delay-sensitive and computation-intensive mobile applications have been emerging and drawing significant attention, spanning technologies such as augmented reality, speech-to-text conversion, image processing, and interactive online games. However, due to the scarcity of on-device resources, mobile devices are usually unable to meet these massive computing demands. Computation offloading to more capable infrastructure is a widely adopted solution to this problem [1–3]. Multi-access edge computing (MEC) is regarded as a key technology and architectural concept for improving computation offloading efficiency. MEC extends cloud computing capabilities to the network edge, so that mobile devices (MDs) can offload tasks to nearby edge servers [4]. For example, video streams and images collected by sensors or cameras mounted on vehicles must be processed in real time to detect surrounding objects, recognize traffic lights, etc., to ensure the safety of autonomous driving. However, vehicles cannot process large amounts of images and videos instantly, so such tasks are offloaded to edge servers for processing, reducing the incidence of traffic accidents. Computation offloading in MEC not only overcomes the shortage of computing capability on mobile terminals but also avoids the large latency caused by transferring tasks to the cloud [5, 6]. However, existing MEC servers tend to have lightweight computing resources due to cost constraints, which means they are still not well equipped to handle ever-growing task demands.
Several studies have addressed the problem that a single MEC server cannot meet the strict latency requirements of MDs. The authors in [5] proposed a tiered offloading framework for edge computing that utilizes nearby backup computing servers to compensate for insufficient MEC server resources. Guo and Liu [2] proposed a cloud-MEC collaborative computation offloading scheme with centralized cloud and multi-access edge computing over a fiber-wireless (Fi-Wi) network architecture. In addition, idle resources in unmanned aerial vehicles (UAVs) have been used to supplement edge computing servers and provide effective resource utilization and reliable service [7].
With the rapid development of the automotive industry, vehicles are equipped with ever-increasing communication and computing resources. Several works have focused on vehicle-assisted edge networks that improve network service quality by leveraging idle resources in vehicles: vehicles with idle resources compute tasks on behalf of nearby vehicles carrying computation tasks, thereby assisting the edge network. In daily life, 70% of personal vehicles are parked for an average of more than 20 hours per day [8]. These parked vehicles have abundant idle computing, storage, and communication resources, as well as plenty of energy. Therefore, utilizing these idle resources is a promising way to improve network efficiency.
The use of parked vehicles to support network services has two notable advantages. On the one hand, parked vehicles are relatively stable in terms of communication. A moving vehicle may change its position frequently, which can make the connection between the vehicle and the server unstable and degrade task execution efficiency; in contrast, parked vehicles remain stationary for long periods of time. On the other hand, parked vehicles involved in task offloading indirectly extend the service area of vehicular edge computing (VEC). Outside the coverage of roadside units (RSUs), parked vehicles can serve as static nodes and service infrastructure, alleviating the shortage of edge server resources and supporting interconnection between vehicles and servers [9].
In this work, unlike existing computation offloading studies, we focus on reducing MDs' delay and energy consumption to improve quality of service (QoS). In addition, the parked vehicles that can serve as service nodes in this work include not only those parked centrally in parking lots but also those scattered along the roadside, where legally permitted. We focus on the design of a parked vehicle-assisted MEC architecture and the corresponding efficient computation offloading scheme. The main contributions of this study are as follows:
(i) A parked vehicle-assisted multi-access edge computing (PV-assisted MEC) architecture is presented, in which nearby parked vehicles help extend the service capabilities of the MEC system.
(ii) The offloading decision problem is formulated as a noncooperative game. A game-based PV-assisted task offloading algorithm (GPTOA) is proposed, which decides whether each MD should offload and, if so, to which channel of the MEC server or which PV.
(iii) Simulation results show that the GPTOA not only effectively reduces the burden on the MEC server but also achieves significant performance improvements in terms of offloading overhead.
The rest of this study is organized as follows. First, related works are discussed in Section 2. Second, the PV-assisted MEC architecture is described in Section 3. Next, Section 4 presents the system model. After that, Section 5 formulates the task offloading problem and proposes a game-based PV-assisted task offloading algorithm. Extensive simulation results are provided in Section 6, followed by conclusions in Section 7.
2. Related Work
A number of studies have focused on mobile applications in MEC, most of them on processing data and improving service quality [10–15]. Zhang et al. [16] considered the load balancing of computation resources on edge servers and the highly dynamic nature of vehicular networks, which led them to introduce fiber-wireless (Fi-Wi) technology to enhance the vehicular edge computing network (VECN). They then used a game theory-based nearest task offloading algorithm and an approximate load balancing task offloading algorithm to solve the delay minimization problem. Cheng et al. [17] proposed a method to predict Wi-Fi offload potential and access costs by jointly considering user satisfaction, offload performance, and mobile network operators' revenues; the results showed that this scheme can improve the average utility of users and reduce service latency. Chen et al. [18] showed that finding the centralized optimum for task offloading in MEC with the goal of minimizing the overall computation overhead is NP-hard; hence, they adopted a game-theoretic approach to achieve efficient offloading in a distributed manner.
The recent advent of vehicle-to-everything (V2X) communication technology makes vehicles an important network resource for improving network performance. Ding et al. [19] used cognitive radio (CR) router-enabled vehicles to transmit data to the desired location. Feng [20] proposed the hybrid vehicle edge cloud (HVC) framework, which makes it possible to share available resources with neighboring vehicles through vehicle-to-vehicle (V2V) communication. Zhang et al. [21] investigated the effectiveness of computation offloading strategies for vehicle-to-infrastructure (V2I) and V2V communication modes and proposed an efficient predictive combination-mode relegation scheme that adaptively offloads tasks to MEC servers via direct uploading or predictive relay transmissions. Huang et al. [22] introduced the concept of the vehicle neighbor group (VNG), which makes it convenient to share similar services through V2V communication. Considering the similarity of tasks and the computational capability of vehicles, Qiao et al. [23] divided vehicles into a task computing sub-cloudlet and a task offloading sub-cloudlet and, based on these two sub-cloudlets, proposed a collaborative task offloading scheme that effectively reduces the number of similar tasks transferred to MEC servers.
Furthermore, some existing works have explored ways to leverage the communication, storage, and computation capacity of parked vehicles, turning vehicles into service nodes for computation offloading. Liu et al. [24] proposed a vehicular edge computing network architecture in which vehicles act as edge servers to compute tasks; the problem of maximizing the long-term utility of the VEC network was modeled as a Markov decision process and solved using two reinforcement learning methods. Huang et al. [25] modeled the relationship between users, the MEC server, and the parking lot as a Stackelberg game and presented a sub-gradient-based iterative algorithm to determine the workload distribution among parked vehicles and minimize the overall cost to users. Li et al. [26] proposed a three-stage contract-Stackelberg offloading incentive mechanism to maximize the utility of vehicles, operators, and parking lot agents. Han et al. [27] proposed a dynamic pricing strategy that minimizes the average cost of the MEC system under service quality constraints by continuously adjusting the price according to the current system state.
By introducing parking lots as agents, many existing studies have focused on utilizing the communication and computation capabilities of parked vehicles, taking into account the benefits and costs of parked vehicles as well as the costs to service users. However, in addition to vehicles parked centrally in parking lots, the computing and communication resources of vehicles scattered along the roadside are not negligible. Moreover, in most cases, the quality of user experience should be prioritized. Therefore, in this study, building on the work in [18], we propose a PV-assisted MEC architecture to enhance the MEC network, in which parked vehicles can serve MDs directly. In addition, we propose a game-based task offloading algorithm to minimize the delay and energy consumption for service users.
3. Parked Vehicle-Assisted Multi-Access Edge Computing Architecture
With the advent of smart cars, more and more vehicles can be awakened to perform tasks even when parked. For example, when a parked Tesla is in Sentry Mode or Dog Mode, some of its safety-related features remain active. With the continuing development of artificial intelligence, we believe that cars will become increasingly intelligent, and in the future, parked cars may support modes that provide services to other vehicles. The research presented in this study is conducted on this premise.
Although aspects such as incentives, communication costs, security, and scheduling must be considered if the onboard computers of parked vehicles are to be used for edge computing, the focus of this study is the computation offloading strategy. Therefore, these aspects are not considered here but should be addressed in future studies to pursue a more complete solution.
A representative PV-assisted MEC service scenario is illustrated in Figure 1. A large number of MDs run computation-intensive and delay-sensitive mobile applications, but lightweight MEC servers and limited bandwidth resources are insufficient to support them all. Idle resources in nearby parked vehicles can be used to relieve the pressure on the MEC server. However, due to "selfishness," not all parked vehicles are willing to provide resources. We assume that some parked vehicles can be recruited through certain incentives, such as extended parking opportunities or reduced parking fees. In addition, we assume that the MEC system can certify recruited parked vehicles to ensure the security of the service and can update and monitor the available resources of these parked vehicles in real time to improve resource utilization. We refer to these recruited, certified parked vehicles as PVs. In summary, both MEC servers and PVs can provide services to MDs.

Figure 2 illustrates a representative PV-assisted MEC network architecture. Based on the original vehicular edge computing architecture, we move the vehicles capable of providing services from the device layer to the MEC layer, so that the resources of parked vehicles can be utilized and services can be provided directly to MDs.
(1) Cloud Layer. The first layer provides centralized cloud computing services and management functions such as critical or complex event handling, key data backup, and information authentication. The PV-assisted MEC architecture employs a software-defined network (SDN) controller to program, manipulate, and configure the network in a logically centralized way.
(2) Edge Cloud Layer (MEC Layer). The second layer consists of edge network access devices (e.g., RSUs and base stations (BSs)) and data service devices (e.g., MEC servers and PVs). Edge network access devices are used for communication among edge facilities or between layers. MEC servers with lightweight storage and computing capabilities are deployed on edge network access devices. MEC servers are responsible for collecting service status information from themselves and from PV service nodes parked within the coverage area of the RSU. Based on this information, MEC servers can process tasks or assign them to PVs. By moving PVs from the mobile device layer to the MEC layer, the service capacity can be improved and bandwidth consumption can be reduced.
(3) Mobile Device Layer. The third layer consists of mobile devices requesting services, such as vehicles, smartphones, tablets, and laptops. MDs request services by connecting to BSs via the cellular network. MEC servers and parked vehicles can provide services to terminal devices via the cellular network or V2X, where a V2X link may use the cellular network or dedicated short-range communications (DSRC). Note that, as a special kind of mobile device, vehicles are divided into two categories in this study: PVs and others. The former are located at the MEC layer as service providers, while the latter are located at the mobile device layer as service requesters.

Figure 3 illustrates the communication procedure between an MD, the MEC server, and PVs. First, when an MD generates a task, it sends a task request to the MEC server. Second, through iterative negotiation between the MEC server and the MDs, the task allocation result is computed based on the status of the MDs and the MEC server, and the MEC server returns the allocation result to the MD. When the allocation result indicates that the task should be offloaded to a PV, the MEC server also notifies the relevant PV (dotted arrow). Third, the MD sends the task input data to the MEC server (solid line) or the PV (dotted line) according to the task allocation information. Fourth, the MEC server (solid line) or PV (dotted line) processes the task and returns the result to the MD. Finally, the MD obtains the result and sends service satisfaction feedback to the MEC server so that the specific PV can be rewarded.

4. System Model
4.1. Network Model
Let $\mathcal{N} = \{1, 2, \ldots, N\}$ denote the set of MDs. We assign a unique identifier to each task and record the characteristics of tasks, such as traffic size and computation workload, in a globally shared feature table. Without loss of generality, we assume that each MD $n \in \mathcal{N}$ generates only one task $I_n = (d_n, c_n)$ in a time period and that tasks cannot be further divided. Here, $d_n$ denotes the data size of the task generated by MD $n$, and $c_n$ denotes the computation resources (CPU cycles) required by this task. We assume the existence of a wireless BS through which any MD can offload its computation task to a nearby MEC server (MS). Each wireless BS has $M$ orthogonal frequency channels, denoted as $\mathcal{M} = \{1, 2, \ldots, M\}$. Besides, in the coverage area of a BS, there is a set of PVs, denoted by $\mathcal{V} = \{M+1, M+2, \ldots, M+V\}$. We consider a quasi-static scenario where the status of MDs, PVs, channels, and the MEC server remains unchanged for a given time period, whereas in different time periods the status may change. For simplicity, we ignore the cost of establishing secure connections during transmissions. We denote $a_n$, $n \in \mathcal{N}$, as the selection decision variable, with $a_n \in \{0\} \cup \mathcal{M} \cup \mathcal{V}$. As shown in (1), $a_n = 0$ denotes that MD $n$ executes its task locally, and $a_n > 0$ denotes that MD $n$ chooses to offload this task. When $a_n > 0$, $a_n \in \mathcal{M}$ indicates that MD $n$ will offload task $I_n$ to the MEC server via channel $a_n$, while $a_n \in \mathcal{V}$ indicates that the task will be executed by PV $a_n$. Let $\mathbf{a} = (a_1, a_2, \ldots, a_N)$ denote the set of selection decisions of all MDs. For ease of reference, the key notations used in this study are listed in Table 1.
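To make the reconstructed notation concrete, the following Python sketch shows one way to encode a task $I_n = (d_n, c_n)$ and interpret a selection decision $a_n$. The integer encoding (0 for local execution, 1 to M for an MEC channel, M+1 to M+V for a PV) is an illustrative assumption consistent with the description above, not the article's original definition.

```python
from dataclasses import dataclass

@dataclass
class Task:
    d: float  # input data size d_n (bits)
    c: float  # required computation c_n (CPU cycles)

def classify_decision(a_n: int, M: int, V: int) -> str:
    """Map a selection decision a_n to its offloading target."""
    if a_n == 0:
        return "local execution"
    if 1 <= a_n <= M:
        return f"offload to MEC server via channel {a_n}"
    if M < a_n <= M + V:
        return f"offload to PV {a_n}"  # PVs are indexed M+1, ..., M+V
    raise ValueError("invalid selection decision")

# Example with M = 5 channels and V = 40 PVs (placeholder sizes)
example_task = Task(d=4e6, c=1e9)     # I_n = (d_n, c_n)
print(classify_decision(0, 5, 40))    # local execution
print(classify_decision(3, 5, 40))    # offload to MEC server via channel 3
print(classify_decision(12, 5, 40))   # offload to PV 12
```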
4.2. Communication Model
In this section, we define the transmission rate for offloading. It is assumed that each mobile device is equipped with a single antenna and can transmit data for one task at a time. When many MDs offload their tasks to the same MEC server, severe wireless channel interference may occur; therefore, wireless channel conditions must be considered during transmission. If MD $n$ chooses to offload its task to the MS via wireless channel $a_n \in \mathcal{M}$, the data transmission rate of MD $n$ can be expressed as follows:

$$r_n(\mathbf{a}) = W \log_2\!\left(1 + \frac{p_n g_n}{\sigma^2 + \sum_{m \in \mathcal{N}\setminus\{n\}:\, a_m = a_n} p_m g_m}\right).$$

Here, $W$ is the channel bandwidth, $p_n$ and $g_n$ are the transmission power and channel gain of MD $n$ to the MS via the nearby BS, respectively, and $\sigma^2$ is the background noise power; the summation term is the wireless channel interference generated by other MDs using the same channel.
An MD and a PV can communicate with each other only if the distance between them is less than a threshold $D$. We assume that any PV can serve only one MD during a computation offloading period; therefore, there are no channel conflicts between MDs when tasks are offloaded to PVs. When MD $n$ offloads its task to a PV $v \in \mathcal{V}$ that is not occupied by other MDs, the data transmission rate can be expressed as follows:

$$r_{n,v} = W_v \log_2\!\left(1 + \frac{p_n g_{n,v}}{\sigma^2}\right).$$

Here, $W_v$ is the channel bandwidth between MD $n$ and PV $v$, and $g_{n,v}$ is the corresponding channel gain.
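As an illustration of the two rate expressions, the sketch below computes the uplink rate to the MEC server with co-channel interference and the interference-free rate to a PV. All parameter values are placeholders, and the Shannon-rate form of the PV link follows the reconstruction above rather than the article's original equation.

```python
import numpy as np

def rate_to_mec(n, a, W, p, g, sigma2):
    """Uplink rate of MD n to the MEC server over channel a[n].

    MDs m != n with a[m] == a[n] share the channel and contribute interference.
    p and g hold the per-MD transmission powers and channel gains towards the BS.
    """
    interference = sum(p[m] * g[m] for m in range(len(a)) if m != n and a[m] == a[n])
    return W * np.log2(1.0 + p[n] * g[n] / (sigma2 + interference))

def rate_to_pv(p_n, g_nv, W_v, sigma2):
    """Uplink rate of an MD to an unoccupied PV (no co-channel interference)."""
    return W_v * np.log2(1.0 + p_n * g_nv / sigma2)

# Toy example: MDs 0 and 1 share channel 1, MD 2 uses channel 2 (placeholder values).
a = [1, 1, 2]                 # channel selections
p = [0.1, 0.1, 0.1]           # transmission power (W)
g = [1e-13, 2e-13, 1.5e-13]   # channel gains towards the BS
print(rate_to_mec(0, a, W=5e6, p=p, g=g, sigma2=1e-13))
print(rate_to_pv(p_n=0.1, g_nv=1e-12, W_v=2e6, sigma2=1e-13))
```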
4.3. Computation Model of Mobile Devices
We use $f_n^{l}$ to denote the computational capability (CPU frequency) of MD $n$. Thus, the delay of locally executing task $I_n$ can be expressed as follows:

$$t_n^{l} = \frac{c_n}{f_n^{l}}.$$

Similar to [28], we assume that the power consumption of an MD is proportional to the cube of its computational capability, with an energy consumption coefficient $\kappa$ that depends on the chip's hardware architecture. The device's energy consumption for local execution can therefore be expressed as follows:

$$e_n^{l} = \kappa \left(f_n^{l}\right)^{3} t_n^{l} = \kappa \left(f_n^{l}\right)^{2} c_n.$$

Considering that MDs are usually both energy and delay sensitive, we define parameters $\lambda_n^{t}$ and $\lambda_n^{e}$ ($\lambda_n^{t}, \lambda_n^{e} \in [0, 1]$, $\lambda_n^{t} + \lambda_n^{e} = 1$) as the weights of delay and energy in the overhead of MD $n$, respectively. MDs tend to save time (larger $\lambda_n^{t}$) when tasks are delay sensitive and tend to save energy (larger $\lambda_n^{e}$) when their batteries are low.

Thus, the overhead of local execution can be expressed as follows:

$$Z_n^{l} = \lambda_n^{t} t_n^{l} + \lambda_n^{e} e_n^{l}.$$
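A minimal sketch of the local-execution overhead under the notation above; the function name and the placeholder numbers are assumptions for illustration.

```python
def local_overhead(c_n, f_local, kappa, w_time, w_energy):
    """Weighted delay/energy overhead of executing a task locally.

    c_n:      required CPU cycles of the task
    f_local:  computational capability of the MD (cycles/s)
    kappa:    chip-dependent energy coefficient (power ~ kappa * f^3)
    w_time, w_energy: delay and energy weights, summing to 1
    """
    t_local = c_n / f_local                 # local execution delay
    e_local = kappa * (f_local ** 2) * c_n  # energy = kappa * f^3 * (c_n / f)
    return w_time * t_local + w_energy * e_local

# Placeholder example: a 1000-megacycle task on a 1 GHz device
print(local_overhead(c_n=1e9, f_local=1e9, kappa=1e-26, w_time=0.5, w_energy=0.5))
```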
4.4. Computation Model of MEC Server
For most mobile applications, such as fingerprint, face, or iris recognition and sensor data processing, the size of the computation result is much smaller than the size of the input data, so we ignore the transmission time of computation results. Therefore, the delay for offloading task $I_n$ to the MEC server (MS) consists of two parts, the data uploading time and the task execution time, and can be expressed as follows:

$$t_n^{s}(\mathbf{a}) = \frac{d_n}{r_n(\mathbf{a})} + \frac{c_n}{f^{s}},$$

where $f^{s}$ is the computing capability of the MS.

Usually, the MEC server has a sufficient power supply, so the energy consumption on the MEC server itself can be ignored. From the MD's perspective, the energy consumption of offloading task $I_n$ to the MS comes from transmitting the data over the wireless network and can be expressed as follows:

$$e_n^{s}(\mathbf{a}) = \frac{p_n d_n}{r_n(\mathbf{a})}.$$

Thus, the overhead for offloading task $I_n$ to the MEC server can be expressed as follows:

$$Z_n^{s}(\mathbf{a}) = \lambda_n^{t} t_n^{s}(\mathbf{a}) + \lambda_n^{e} e_n^{s}(\mathbf{a}).$$
4.5. Computation Model of Parked Vehicles
Let $f_v$ denote the computing resource allocated to task $I_n$ by PV $v$. The delay for offloading task $I_n$ to PV $v$ can be expressed as follows:

$$t_{n,v} = \frac{d_n}{r_{n,v}} + \frac{c_n}{f_v}.$$

Similarly, the energy consumption on the PV is ignored (it will be considered in future work), and the energy consumption on MD $n$ for offloading task $I_n$ to PV $v$ can be expressed as follows:

$$e_{n,v} = \frac{p_n d_n}{r_{n,v}}.$$

Thus, the overhead of MD $n$ for offloading task $I_n$ to PV $v$ can be expressed as follows:

$$Z_{n,v} = \lambda_n^{t} t_{n,v} + \lambda_n^{e} e_{n,v}.$$
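The MEC and PV offloading overheads share the same structure: upload delay plus remote execution delay on the time side, and MD-side transmission energy on the energy side. The sketch below, again with placeholder values, computes both with a single helper.

```python
def offload_overhead(d_n, c_n, rate, f_server, p_n, w_time, w_energy):
    """Overhead of offloading a task to a remote executor (MEC server or PV).

    d_n:      task input data size (bits)
    c_n:      required CPU cycles
    rate:     uplink data rate to the executor (bits/s)
    f_server: computing capability allocated by the executor (cycles/s)
    p_n:      MD transmission power (W)
    """
    t_up = d_n / rate            # data uploading time
    t_exec = c_n / f_server      # remote execution time
    e_tx = p_n * d_n / rate      # MD-side transmission energy
    return w_time * (t_up + t_exec) + w_energy * e_tx

# Placeholder comparison: offloading to the MEC server vs. to a nearby PV
z_mec = offload_overhead(d_n=4e6, c_n=1e9, rate=2e6, f_server=10e9,
                         p_n=0.1, w_time=0.5, w_energy=0.5)
z_pv = offload_overhead(d_n=4e6, c_n=1e9, rate=4e6, f_server=3e9,
                        p_n=0.1, w_time=0.5, w_energy=0.5)
print(z_mec, z_pv)
```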
5. Problem Formulation and Algorithm Design
5.1. Problem Formulation
According to Section 4, the overhead of task $I_n$ under decision $a_n$ can be expressed as follows:

$$Z_n(a_n, \mathbf{a}_{-n}) = \begin{cases} Z_n^{l}, & a_n = 0,\\ Z_n^{s}(\mathbf{a}), & a_n \in \mathcal{M},\\ Z_{n,a_n}, & a_n \in \mathcal{V}. \end{cases}$$
There are $1 + M + V$ choices available for each task, and the delay and energy consumption vary with the offloading strategy. Therefore, the overall goal is to minimize the total overhead of all MDs, and the corresponding optimization problem (P) can be expressed as follows:
Here, $\mathbb{1}\{\cdot\}$ is the indicator function, which equals 1 if the event inside the braces is true and 0 otherwise. There are four constraints in problem (P). Constraint (C1) states that every task must be executed. Constraint (C2) states that each PV serves at most one MD. Constraint (C3) states that an MD and a PV can communicate only when they are close enough to each other. Similarly, constraint (C4) states that an MD and the MEC server can communicate only when they are close enough to each other.
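Under the reconstructed notation, one possible way to write problem (P) and constraints (C1)–(C4) is sketched below in LaTeX. The distance function $\mathrm{dist}(\cdot,\cdot)$, the thresholds $D$ and $D_s$, and the exact algebraic form are assumptions based on the verbal description, not the article's original formulation.

```latex
\begin{align*}
(\mathrm{P}):\quad
  \min_{\mathbf{a}}\; & \sum_{n \in \mathcal{N}} Z_n(a_n, \mathbf{a}_{-n}) \\
  \text{s.t.}\quad
  & a_n \in \{0\} \cup \mathcal{M} \cup \mathcal{V}, \quad \forall n \in \mathcal{N}, \tag{C1}\\
  & \sum_{n \in \mathcal{N}} \mathbb{1}\{a_n = v\} \le 1, \quad \forall v \in \mathcal{V}, \tag{C2}\\
  & \mathbb{1}\{a_n = v\} \cdot \mathrm{dist}(n, v) \le D, \quad \forall n \in \mathcal{N},\; v \in \mathcal{V}, \tag{C3}\\
  & \mathbb{1}\{a_n \in \mathcal{M}\} \cdot \mathrm{dist}(n, \mathrm{MS}) \le D_{s}, \quad \forall n \in \mathcal{N}. \tag{C4}
\end{align*}
```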
The task set can be divided into three mutually exclusive subsets according to the selection decisions: $\mathcal{N} = \mathcal{N}_l \cup \mathcal{N}_s \cup \mathcal{N}_v$. Here, $\mathcal{N}_l$ contains the tasks processed locally, $\mathcal{N}_s$ contains the tasks offloaded to the MS, and $\mathcal{N}_v$ contains the tasks offloaded to some PV.
By incorporating PVs as extra service providers for computation offloading, the problem considered in this study is essentially a generalization of that in [18]. It has been shown in [18] that the centralized optimization problem of minimizing the system-wide computation overhead is NP-hard; therefore, with PVs as additional offloading providers, the problem in this study is also NP-hard and difficult to solve. Similar to [18], the centralized cost minimization problem for PV-assisted MEC computation offloading can be transformed into a distributed computation offloading decision problem among the mobile device users. In the computation offloading process, each MD wants to reduce its own overhead as much as possible and therefore needs to be aware of the choices made by the other MDs. Let $\mathbf{a}_{-n} = (a_1, \ldots, a_{n-1}, a_{n+1}, \ldots, a_N)$ be the selection decisions of all MDs except MD $n$. Based on $\mathbf{a}_{-n}$, MD $n$ can make a proper decision to reduce its overhead. The distributed computation offloading problem can be defined as follows:

$$\min_{a_n \in \{0\} \cup \mathcal{M} \cup \mathcal{V}} Z_n(a_n, \mathbf{a}_{-n}), \quad \forall n \in \mathcal{N},$$

in which the overhead function $Z_n(a_n, \mathbf{a}_{-n})$ of mobile device $n$ is defined as in the piecewise expression above.
This problem can be formulated as a noncooperative game $\Gamma = (\mathcal{N}, \{\mathcal{A}_n\}_{n \in \mathcal{N}}, \{Z_n\}_{n \in \mathcal{N}})$ with finitely many players, where $\mathcal{N}$ is the set of players, $\mathcal{A}_n = \{0\} \cup \mathcal{M} \cup \mathcal{V}$ is the set of selection decisions for player/MD $n$, and the overhead function $Z_n$ is the cost function to be minimized by each MD $n$.
In the next subsection, we will analyze the existence of Nash equilibrium in the PV-assisted MEC computation offloading game.
5.2. Nash Equilibrium Analysis
Here is the definition of the important concept of Nash equilibrium [29].
Definition 1. A selection decision set $\mathbf{a}^{*} = (a_1^{*}, \ldots, a_N^{*})$ is a Nash equilibrium of the PV-assisted MEC computation offloading game if, at the equilibrium $\mathbf{a}^{*}$, no MD can further reduce its overhead by unilaterally changing its selection decision, i.e.,

$$Z_n(a_n^{*}, \mathbf{a}_{-n}^{*}) \le Z_n(a_n, \mathbf{a}_{-n}^{*}), \quad \forall a_n \in \mathcal{A}_n,\; \forall n \in \mathcal{N}.$$

To study the existence of a Nash equilibrium, we first introduce the concept of a potential game [30].
Definition 2. A game is said to be an ordinal potential game if the incentive of all players to change their strategy can be expressed using a single global function, called the potential function $\Phi(\mathbf{a})$, such that for every $n \in \mathcal{N}$, every $\mathbf{a}_{-n}$, and any $a_n', a_n'' \in \mathcal{A}_n$, if

$$Z_n(a_n', \mathbf{a}_{-n}) < Z_n(a_n'', \mathbf{a}_{-n}),$$

then

$$\Phi(a_n', \mathbf{a}_{-n}) < \Phi(a_n'', \mathbf{a}_{-n}).$$

An important feature of a finite ordinal potential game is that it always has a Nash equilibrium and possesses the finite improvement property. In other words, if a finite number of players start with an arbitrary strategy profile and iteratively deviate to their best replies in each period, the process terminates at a Nash equilibrium (NE) after a finite number of steps. Before proving in detail that the PV-assisted MEC computation offloading game is an ordinal potential game, we present the following lemma.
Lemma 1. Given a strategy profile $\mathbf{a}_{-n}$, MD $n$ can reduce its computation overhead by offloading its task to the MEC server if condition (Cm) holds, and by offloading its task to PV $v$ if condition (Cv) holds.

(Cm): the received interference on the chosen channel satisfies the following thresholds:

(Cv): PV $v$ is not occupied by any other MD, i.e., $a_m \neq v$ for all $m \in \mathcal{N} \setminus \{n\}$, and the computation overheads satisfy the following conditions:
Proof. For condition (Cm): according to equation (14), when the overhead of offloading to the MEC server is smaller than both the local execution overhead and the overhead of offloading to a PV, the best strategy for MD $n$ is to offload its task to the MEC server.

According to equations (7) to (9), the first of these inequalities is equivalent to a lower bound on the uplink rate; by (2), this translates into the first interference threshold in condition (Cm).

According to equations (7) to (12), the second inequality is likewise equivalent to a lower bound on the uplink rate, which by (2) yields the second interference threshold in condition (Cm).
For condition (Cv): the proof is straightforward and is omitted here.
Based on Lemma 1, we will show that the PV-assisted MEC computation offloading game is a potential game with the potential function as follows:
Theorem 1. The PV-assisted MEC computation offloading game is a potential game with (equation 21) as the potential function and hence always has a Nash equilibrium and the finite improvement property.
Proof. Suppose that MD $n$ updates its selection decision from $a_n$ to $a_n'$ and that this leads to a decrease in its overhead function. According to Definition 2, we must show that this update also leads to a decrease in the potential function. Depending on whether the old and the new decision correspond to local execution, a channel of the MEC server, or a PV, there are eight possible cases, labeled (1)–(8).

For case (1), according to equations (7) to (9), the premise implies a bound on the transmission rate; since the rate in equation (2) is monotonic in the received interference, this yields a bound on the interference terms, and combining it with (28) and (29) shows that the potential function decreases. For case (2), the premise is equivalent to the threshold condition defined in equation (19), from which the decrease of the potential function follows. For case (3), by Lemma 1, the premise implies the corresponding threshold condition, which in turn implies that the potential function decreases. Case (4) is the opposite of case (3), and its proof is omitted here.

For case (5), the premise, together with Lemma 1, implies the corresponding threshold condition, from which the decrease of the potential function follows. Case (6) is the opposite of case (5), and thus its proof is omitted here.

For case (7), the premise, together with the definitions in equations (18) and (19), implies that the potential function decreases. Case (8) is the opposite of case (7), and thus its proof is omitted here.
Combining results from the above cases, we can conclude that the PV-assisted MEC computation offloading game is a potential game.
5.3. Algorithm Design
Algorithm 1 illustrates the game-based PV-assisted task offloading algorithm (GPTOA) for the distributed problem. Similar to [18], the algorithm runs iteratively on each MD. The main idea of GPTOA is that, based on the current state, each MD computes its best decision by calculating the overhead according to (16) (Line 3). Constraints (C1)–(C4) are checked in each iteration; when constraints (C3) or (C4) cannot be satisfied, the corresponding overhead is set to infinity. During each iteration, an MD whose best response differs from its current decision updates its selection and sends it to the MEC server as an update request. The MEC server randomly selects one request from all update requests and sends its approval back to that MD, which updates its decision for the next iteration (Lines 4–8). The iterations continue until the selection decisions remain unchanged. At the end, the MEC server broadcasts an end message to all MDs, and each MD executes its computation task according to its final selection decision. According to the finite improvement property of potential games (Theorem 1), the algorithm converges to a Nash equilibrium within a finite number of iterations.
Algorithm 1: Game-based PV-assisted task offloading algorithm (GPTOA).
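Since the pseudocode of Algorithm 1 is not reproduced here, the following Python sketch outlines one plausible implementation of the best-response update loop described above. The decision encoding, the overhead callback, and the random tie-breaking are assumptions based on the textual description rather than the authors' exact pseudocode.

```python
import random

def gptoa(num_mds, M, V, overhead_fn, max_iters=1000):
    """Best-response iteration sketch of GPTOA.

    overhead_fn(n, a_n, decisions) returns the overhead of MD n if it plays
    a_n against the current decision profile, and float('inf') when the choice
    is infeasible (covering constraints (C2)-(C4)).
    Decision encoding: 0 = local, 1..M = MEC channel, M+1..M+V = PV.
    """
    decisions = [0] * num_mds              # start with local execution
    choices = range(M + V + 1)

    for _ in range(max_iters):
        update_requests = {}
        for n in range(num_mds):
            # Line 3: each MD computes its best response to the others' decisions
            best = min(choices, key=lambda a: overhead_fn(n, a, decisions))
            if overhead_fn(n, best, decisions) < overhead_fn(n, decisions[n], decisions):
                update_requests[n] = best

        if not update_requests:            # no MD can improve: Nash equilibrium reached
            break

        # Lines 4-8: the MEC server grants exactly one update request per iteration
        winner = random.choice(list(update_requests))
        decisions[winner] = update_requests[winner]

    return decisions
```

Only one MD actually changes its decision in each outer iteration, which mirrors the finite improvement property used in Theorem 1 to guarantee convergence to a Nash equilibrium.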
In GPTOA, the MDs execute their operations in parallel in each time slot. The most time-consuming operation is the best-response update in Line 3, which mainly involves sorting the overheads of the available offloading strategies. Since sorting $K$ values typically has a time complexity of $O(K \log K)$, and the number of available choices for each MD is not greater than $K = 1 + M + V$, the computational complexity of each time slot does not exceed $O(K \log K)$. If the algorithm takes $T$ time slots to terminate, the total computational complexity of Algorithm 1 is $O(T K \log K)$.
For the upper bound on the number of time slots $T$, similar to [18], we have the following result, which relies on auxiliary quantities determined by the system parameters.
Theorem 2. When these auxiliary quantities are nonnegative integers for every MD, the game-based PV-assisted task offloading algorithm terminates within a finite number of time slots, bounded by an explicit expression in these quantities.
Proof. According to equation (21) and the definitions of the auxiliary quantities, the potential function is bounded. According to Theorem 1, during each time slot, one MD updates its decision, and this action leads to a decrease in its overhead function. The key idea of the proof is to show that this update also decreases the potential function by at least a fixed positive amount. Similar to the proof of Theorem 1, there are eight cases to consider.

For case (1), according to (23) and the fact that the auxiliary quantities are nonnegative integers, the potential function decreases by at least the required amount in each time slot; the bound on the number of time slots then follows from (37). For the other cases, the proofs are similar and are omitted here.
6. Simulation Results
6.1. Parameter Settings
The GPTOA was simulated and evaluated using Python with packages such as NumPy, random, and SciPy. We considered a scenario in which the wireless BS had a coverage area of m. Each BS had channels with a channel bandwidth of MHz. The transmission power was mW, and the background noise was dBm. Based on the radio interference model for an urban cellular radio environment, we set the channel gain to $g_n = \ell_n^{-\alpha}$, where $\ell_n$ was the distance between MD $n$ and the wireless BS and $\alpha$ was the path loss factor [18]. The maximum communication distances of V2X were m and m, respectively [31]. The energy consumption coefficient was .
For computational tasks, the data size of offloaded task was MB. The total number of CPU cycles required by task was randomly distributed in the interval of megacycles. The weight parameters for all MDs were . We assumed that the values of weight parameters remained constant during a single offloading process. Since most MEC servers are equipped with multiple CPUs, and multiple CPUs can be allocated to one MD at a time, for ease of computation, it was assumed that the computation power allocated to an MD by the MEC server was GHz. The computation power of MDs was randomly distributed between GHz. The computation power of PVs was randomly distributed between GHz. The communication bandwidth between PV and MD was MHz.
6.2. Performance Analysis
To evaluate the scheme proposed in this work, we compared three schemes: (1) local only (scheme 1): all MDs compute their own tasks locally; (2) MEC offloading (scheme 2): tasks are either computed locally or offloaded to the MEC server [18]; (3) PV-assisted MEC offloading (our scheme): tasks are computed locally, offloaded to the MEC server, or offloaded to PVs. The work presented in [18] was treated as a special case of our scheme with the number of PVs set to 0. To eliminate the effect of randomness on the algorithm results, we conducted 1000 tests and performed a statistical analysis of the results as follows.
First, we fixed the number of PVs (service vehicles) to 40 to observe the changes in the metrics (average delay, energy consumption, and total overhead of tasks, as well as task assignment results and load on the MEC server) as the number of MDs (service requesters) increased.
In Figure 4, the average delay, energy consumption, and total overhead of the three schemes are compared. All three metrics of scheme 1 are higher than those of the other schemes due to the limited local computation power. When the number of MDs is less than 10, the three metrics are the same for schemes 2 and 3, because no tasks are offloaded to PVs in this regime. As the number of MDs increases, the metrics of scheme 3 grow less rapidly than those of the other two schemes. When the number of MDs is 30, scheme 3 achieves a 26% reduction in delay and a 17% reduction in total overhead (on average) compared with scheme 2.

Figure 5 shows the task assignment results of the proposed scheme. When the number of MDs is less than 10, all tasks are offloaded to the MS because the MS provides a shorter task execution time. However, as the number of MDs increases, no more tasks can be offloaded to the MS: when too many tasks are offloaded, strong channel interference and a heavier computational load cause intolerable delays, so some MDs give up offloading their tasks to the MS. In addition, due to limited resources and short communication distances, only a small portion of the PVs can serve MDs. Therefore, as the number of MDs increases, the number of tasks executed locally eventually exceeds the number of tasks offloaded to PVs.

Figure 6 shows the total workload allocated to the MEC server under the three schemes. In scheme 1, no computation tasks are offloaded, so the burden on the MEC server is 0. The workload allocated to the MEC server in scheme 3 is lower than that in scheme 2, because PVs share some of the computation tasks. When the number of MDs is 30, scheme 3 reduces the workload for MS by 5%.

Then, we fixed the number of MDs to 30 to observe the change in the metrics as the number of PVs increases.
In Figure 7, the average delay, energy consumption, and overhead of the three schemes are compared for different numbers of PVs. Scheme 3 outperforms schemes 1 and 2 because, as the density of PVs increases, more of the idle computing resources of PVs can be utilized.

Figure 8 shows the task allocation results of the proposed scheme. As the number of PVs increases, the number of tasks offloaded to PVs also increases, while the number of locally executed tasks continues to decrease. From another perspective, as the density of PVs increases, more MDs are likely to be within communication range of a PV, so MDs that have given up offloading to the MS have more opportunities to offload. In addition, the computational power of the MS is much greater than that of the PVs, and tasks are assigned to PVs only when the MS cannot serve more tasks. Therefore, the number of tasks executed by the MS does not decrease as the number of PVs increases.

7. Conclusion
In this study, we proposed a parked vehicle-assisted multi-access edge computing architecture that enhances the task processing capability of MEC servers and improves the resource utilization of parked vehicles. We first discussed in detail the design principles behind the system model of the PV-assisted MEC architecture, which served as the premise for formulating the computation offloading scheme. Next, by formulating the computation offloading problem as a noncooperative game, we proposed a PV-assisted MEC computation offloading scheme that effectively reduces the burden on the MEC server. Simulation results confirmed the feasibility and efficiency of the proposed scheme. As mentioned in Section 3, incentives are not considered in this study; in future work, we will investigate how to incorporate incentives into the proposed PV-assisted MEC task offloading scheme. Deep reinforcement learning-based techniques have clear advantages when the problem size is large or when there are multiple conflicting offloading goals [6, 32]; therefore, another feasible research direction is to apply deep reinforcement learning to further improve the proposed task offloading scheme.
Data Availability
The data used to support the findings of this study are simulated by the algorithm proposed in this article, and the parameters used in the simulation are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by Qin Xin Talents Cultivation Program, Beijing Information Science & Technology University (No. QXTCP C202111).