Abstract

Roadside unit (RSU) clouds and their vehicle-to-infrastructure (V2I) connectivity can enable various security, entertainment, and shared mobility applications for vehicles in intelligent transportation systems (ITS) through wireless communications. In this article, the deep programmability of software-defined networking (SDN) is employed to dynamically reconfigure the services hosted in the network and their data forwarding rules, so as to effectively meet the needs of basic shared mobility applications in vehicular ad hoc networks (VANETs). Multipath forwarding is also enabled to balance the utilization of network links, and SDN is utilized to achieve the minimum cloud delay with the least number of hosts, which is formulated as a mixed-integer linear programming (MILP) problem. A joint optimization (JO) algorithm is proposed and contrasted with two single-objective algorithms, the delay optimization (DO) algorithm and the host optimization (HO) algorithm. Results show that, for the single-threading instance, the JO and DO algorithms are the same in essence. For the multithreading instance, the JO algorithm generally outperforms the two single-objective optimization algorithms under the given demands. Furthermore, the results demonstrate that, to minimize cloud delay when designing an RSU cloud, services should be deployed globally in a distributed manner rather than in a centralized manner.

1. Introduction

Transportation is a leading service industry in the national economy, while traffic congestion and safety remain major problems faced by almost all major cities in the world [1]. The traditional method of controlling traffic flow with traffic lights and stop signs cannot dynamically adjust the interval time in real time according to actual traffic flows. The variable speed limit is a major ITS technology for controlling freeway mainline traffic and has been increasingly used to improve the safety and operational efficiency of freeway traffic management [2]. Moreover, electric vehicles and their related applications have become a major trend in recent years, and optimizing their configuration to extend driving range has become an essential issue affecting their future prospects [3]. Although shared mobility applications such as ride splitting, which enables riders with similar routes to share a ride-sourcing trip, are promising technologies for reducing traffic congestion and air pollution, traffic management can be further enhanced through effective communications among the vehicles [4, 5]. However, pure vehicle-to-vehicle (V2V) communications are unstable: towering roadside buildings or other obstacles can block the communication link, degrading communication quality. Besides, the network topology is dynamic because of the high-speed movement of the vehicles, so established communication links among vehicles may be disconnected, which affects the success rate of data transmission. In V2V, communication quality is determined by many uncontrollable factors, such as vehicle speeds, geographical locations, and weather conditions. Considering these problems in V2V communications, RSUs are utilized to assist efficient communications and form the VANET environment.

The RSUs play a vital role in the VANET system and are usually deployed on both sides or in the middle of the road, acting as the equivalent of an access point (AP) in a wireless network. Vehicles driving on the roads can use RSUs to realize interconnection and intercommunication. At the same time, the presence of RSUs makes the network more stable and reliable when uploading and downloading data, and the available bandwidth increases accordingly. To improve network performance in VANETs, Yang et al. propose a connectivity-oriented maximum coverage RSU (CMCS) deployment scheme that aims to maximize the performance of V2I communication in urban environments [6]. Another role of the RSUs is to provide computing services and serve as temporary storage points for data. Commonly requested content, such as high-throughput video data from vehicles and road condition information, can be temporarily stored in RSUs without repeated requests to the Internet. VANETs play a vital role in driver assistance, traffic accident warning, traffic management, and Internet services. To better handle traffic congestion, intelligent transportation system (ITS) applications are of great importance with the help of VANETs [7]. Under this circumstance, smart cars can automatically adjust traffic-light intervals through interaction with the local vehicle-road collaboration system in the VANET environment, dynamically optimize the traffic flow at road intersections, improve vehicle traffic efficiency, and alleviate congestion.

The main communication modes in VANETs include communication among vehicles, named V2V (vehicle-to-vehicle), and communication between vehicles and RSUs, called V2I (vehicle-to-infrastructure), where the typical features of V2V are described above. V2I communication has been studied in depth because the RSUs have fixed geographic locations, a relatively stable network, and relatively few interference factors. At present, there are still many problems in V2I communication, such as access selection, data distribution, and frequent switching of RSUs, which restrict the further development of VANETs and prevent more real-time, high-quality communication services from being provided to users. In terms of RSU service, due to the high dynamics of VANETs and the limited coverage areas of RSUs, multiple vehicles may enter the area covered by an RSU at the same time. Delay-sensitive safety information may then fail to be delivered in time, while non-safety entertainment information may occupy most of the bandwidth. This determines whether the vehicles can obtain timely information processing and satisfactory services, and it also affects the operating status of the RSUs in the entire region, including throughput, bandwidth utilization, and load balance. Network congestion sometimes occurs at some RSUs, while other RSUs have idle and wasted bandwidth. The unbalanced load of RSUs directly leads to the degradation of network performance [8]. Therefore, how the vehicles choose the appropriate RSUs to be served is of great importance. In an RSU cloud with multiple RSUs, a more systematic algorithm is needed to obtain the best assignment of vehicles to RSUs so that all aspects of network performance remain stable. Likewise, in current VANET data distribution systems, due to the limited coverage of an RSU and the rigidity of the Internet-of-vehicles architecture and communication methods, the service instances of the vehicles need to be transferred or copied to other RSUs when transmitting large volumes of data.

Various methods have been studied to achieve efficient communications among vehicles in ITS systems under the VANET environment. In fact, RSU clouds have been proposed to help build the VANET environment, for example, to increase the number of communication links for sending data packets. When the demands of the current vehicles cannot be met, or services need to be transferred, deleted, or copied to other RSUs, managing the computing and communication resources in the RSU cloud becomes an essential problem. Therefore, to overcome this problem, an SDN-based approach with an RSU-based resource management framework for VANETs is proposed. The problem is first how to manage the RSUs' roles when setting data dissemination policies, and then how to optimize the deployment of RSU resources under the dynamically changing communication demands. A mixed-integer linear programming (MILP) model is developed to optimize the allocation of RSU resources in the designed SDN RSU cloud architecture while minimizing the communication delays and the number of working hosts. Furthermore, RSU resource redundancy is also considered while ensuring the requirements on communication delay and the number of working hosts.

The rest of the paper is organized as follows. Related work is first reviewed, and the proposed SDN-based RSU cloud framework is then introduced. Next, the SDN-based RSU cloud resource management problem is formulated as a mixed-integer linear programming (MILP) problem with constraints, which is solved by building a simulation framework in Python combined with Gurobi. The JO algorithm is compared with the single-objective optimization algorithms, HO and DO. Finally, the paper is concluded and future work is summarized.

2. Literature Review

To ensure efficient communication in VANETs, constructing a cloud of OBUs and/or RSUs, or a hybrid cloud, is the typical way to achieve generality across the entire network. In this section, existing work on the various clouds in VANETs is briefly reviewed.

On the one hand, clouds of on-board units (OBUs) play a vital role in VANETs, where some RSUs still act as gateways to traditional communication networks [9]. Behbehani et al. [10] introduce a vehicular cloud architecture supported by fixed edge computing nodes and the central cloud; a mixed-integer linear programming (MILP) model is developed to optimize the allocation of the computing demands in the distributed architecture while minimizing power consumption. Nodes consisting of "SmartCloud" vehicles in the minimum connected dominating set form a virtual backbone in the VANETs [11], where a "SmartCloud" vehicle sends periodic beacon messages to the nearest RSU. With these methods, common communication evaluation metrics such as transmission range, processing power, and delay reduction are greatly improved. However, the methods mentioned above mainly devote their efforts to utilizing the vehicles' OBUs rather than taking advantage of the RSUs.

On the other hand, Wang et al. [12] construct a scenario in which information centres (or base stations) are distributed along the road so that they can broadcast messages in time, aiming to solve the information dissemination problem in a joint vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication system. Although vehicles and roads can interact within a local area, decision-making is based on individuals or local groups; the problem is not considered from the global perspective of the road network, so the RSUs' spare communication and computing resources cannot be fully used, and network-wide optimization of safety or efficiency cannot be achieved. Meanwhile, cloud computing is a new and promising paradigm delivering IT services as computing utilities, designed to provide services to external users and to let providers share their resources and capabilities [13]. For a cloud composed of vehicles, RSUs, and passengers, the traffic flows are usually distributed unevenly across the region and vary over time, and the types of ITS applications and the bandwidth demands of the vehicles differ across the cloudlets of the network. Therefore, the spare communication and computing resources of RSUs with fewer demands should be fully used to relieve the RSUs with larger demands in serving the vehicles. Tang et al. have proposed an RSU-empowered vehicular network that consists of three hierarchical layers, the vehicular cloud, the RSU-enabled cloudlet, and the central cloud, where RSUs are enhanced with edge servers so that the entire system can intelligently respond to real-time resource requests [14]. Zheng et al. employ the knapsack problem to formulate cooperative scheduling and radio resource management in vehicular networks, but their research only addresses radio resources for non-real-time services and assumes a 2-hop vehicular network, which greatly restricts the application scenario [15]. Their proposed methods are not efficient enough given the high mobility of vehicles in VANETs with strict delay requirements. In fact, VANETs evolve with two emerging paradigms, vehicle cloud computing and information-centric networks, where vehicle cloud computing brings the mobile cloud model to vehicular networks and information-centric networks change the notion of data routing and dissemination [16].

Besides, to date, there is no settled architecture for cloud computing from an academic point of view. Olariu et al. envisioned the paradigm shift from conventional VANETs to vehicular cloud computing by merging VANETs with cloud computing [17], but they did not realize their ideas in a specific scenario. With the development of computer networks, McKeown et al. proposed OpenFlow, the de facto standard of software-defined networks, which provides an excellent opportunity to improve network performance through flexible centralized control by decoupling the control plane (network control functions) from the data plane (forwarding functions) [18]. The control plane communicates with the data plane using the OpenFlow protocol, and the controller enables ITS operators to set data dissemination policies and integrate services. Salahuddin et al. have proposed a unique SDN RSU cloud architecture that advances previous work in two respects: first, it consists of traditional RSUs and small data centres; second, it can be dynamically reconfigured to meet changes in service demands while ensuring QoS [19]. However, they do not consider RSU resource redundancy for the case in which network communication demands fluctuate too much to be met. In this research, a mixed-integer linear programming (MILP) model is developed to optimize RSU resource allocation in the designed SDN RSU cloud architecture. While minimizing communication delays and the number of hosts in the cloud, the redundancy of RSU resources is also considered.

3. Materials and Methods

Since SDN was proposed as a new-generation network architecture, its separation of data and control has brought convenience and flexibility to designing a cloud. The SDN architecture based on the OpenFlow protocol is divided into three planes: the application plane, the control plane, and the infrastructure plane. The control plane mainly includes the OpenFlow controller and the network operating system. The OpenFlow controller encapsulates the northbound interface driver that connects to the application plane and the southbound interface driver that connects to the switches, implementing the internal control logic of SDN. The control plane exchanges data with the OpenFlow switches via the southbound interface, obtaining the status information of the underlying infrastructure and scheduling the underlying network resources accordingly. Obviously, the controller is the key to the centralized control function of the entire SDN network. To achieve the best performance of the RSU cloud infrastructure, it is also necessary to implement multipath forwarding in an SDN that supports OpenFlow. In the proposed SDN RSU cloud framework, each RSU is composed of a physical server, which can store and process data, and a switch, which exchanges non-safety application information from the vehicles on the road sections and from nearby RSUs. The physical machine of the RSU enables virtualization so that multiple virtual machine replicas can host services on a single physical machine. Meanwhile, the services can be deleted, transferred, or copied to other RSUs when the vehicles travel to the next RSU. The switches on the RSUs are all OpenFlow switches, the de facto standard of SDN, performing the functions of the data forwarding layer. Although there are no stringent QoS requirements for ITS applications, QoS is highly desirable and usually crucial for their users [20]. The proposed SDN RSU cloud framework is shown in Figure 1.

Figure 1 depicts the SDN RSU cloud framework, in which n RSUs are connected to each other through a high-speed backbone. The development of vehicle-road collaborative technology via cloud computing aims to achieve a vehicle-road information sharing system based on advanced communication technologies by deploying roadside intelligent sensing and computing equipment (collectively referred to as roadside infrastructure) and complementing the vehicle side through roadside perception. The RSUs are deployed in the centre of roads or at intersections to maximize the coverage area of the vehicles to be served. Across the region, since there are more vehicles on the arterial roads and fewer on the secondary roads, the demands are unevenly distributed, and these unevenly distributed network demands fluctuate greatly as the traffic density changes over time. Due to the varying network demands, it is necessary to dynamically reconfigure the SDN RSU cloud to optimize communication efficiency.

There are three RSU roles: OpenFlow controllers, RSU cloud resource managers (CRMs), and normal RSUs. OpenFlow controllers are responsible for formulating and managing the forwarding strategy of the flow tables and can update the OpenFlow switch flow rules via the control plane. Meanwhile, the CRMs disseminate information regarding service hosting, service migration, data flow changes, and virtual machine (VM) instantiation and/or elimination via the data plane; the roles designed here are simplified compared with the three roles in the research by Salahuddin et al. [21]. It is worth mentioning that the RSU cloud resource managers communicate with the OpenFlow controllers via the data plane.

The top layer of the architecture in Figure 1 is the Internet, which contains a large number of Internet servers providing data upload and download services. The RSUs in the cloud not only transmit ITS service information but also take on their different roles as assigned through the controller information given by the network manager. The fixed upper-layer connections of the SDN RSU cloud in Figure 1 are all wired broadband connections, including the connections between normal RSUs and other normal RSUs, between normal RSUs and OpenFlow controllers, between normal RSUs and CRMs, and between OpenFlow controllers and CRMs, ensuring the reliability of data transmission. The communications between mobile vehicles and the RSUs adopt IEEE 802.11p wireless communication. As shown in Figure 1, most RSUs have the normal role and are given neither the OpenFlow controller function nor the cloud resource manager function.

4. SDN RSU Cloud Resource Management

Cloud resource management focuses on the optimal allocation of computing and network resources while meeting the fluctuating demands of ITS applications. Normally, the placement of virtual machines on physical host machines is determined by capacity planning tools such as VMware Capacity Planner, IBM WebSphere CloudBurst, Micro Focus, and OneIQ; these cloud resource management tools seek to consolidate physical memory and CPU across VMs and to save power consumption, yet without considering the consumption of network resources [22]. Consequently, some RSUs inevitably consume large amounts of network resources while others are almost idle, causing an unbalanced load across the network, which directly leads to the degradation of network performance. Defining a configuration as the set of service hosts and the corresponding data forwarding rules, the task is to find the best configuration so that the whole SDN RSU cloud achieves the minimum delay with the minimum number of hosts as network demands change.

4.1. Problem Formulation

Consider a typical traffic network equipped with a fixed set of RSUs facing fluctuating communication demands; the mathematical problem is abstracted as follows. Assume that a network graph with a fixed number of RSUs is given, whose vertices and edges represent the RSUs and the wired connections among them, respectively.

As shown in Figure 2, a set of traffic information services S is given, together with a set of average demands that change over time periods. The vehicles constantly send their demands at each RSU and request services from the RSU cloud. There is an initial network configuration during the first period. Every edge has a bandwidth capacity. During each time period, it is assumed that there is a demand at each RSU for each service with a given average demand. The mathematical model is designed to compute the optimal network configuration that minimizes the number of hosts and the RSU cloud delay during the period, under the premise of ensuring a certain amount of service resource redundancy. In this case, when over-burdened RSUs send part of their demands to other RSUs with fewer demands, all demands in the cloud are met with a load-balanced network.

Define the path delay as the sum of the delays on all edges that a single routing path traverses from the source RSU to the destination RSU; it includes the processing time, queuing time, transmission time, and propagation time. The processing time is a constant, usually 10 μs. For the queuing process of a data packet, a Poisson model combined with the traffic background is used. For packet interarrival times and packet processing times with given means and standard deviations, the corresponding coefficients of variation are defined, and the Kingman formula is used to describe the queuing delay [23]:
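The referenced formula is not reproduced in the text above; a standard statement of Kingman's approximation for the mean queuing delay, in terms of the quantities just described, is

$$ W_q \approx \frac{\rho}{1-\rho}\cdot\frac{c_a^{2}+c_s^{2}}{2}\cdot\tau, \qquad c_a=\frac{\sigma_a}{m_a},\quad c_s=\frac{\sigma_s}{m_s}, $$

where $\rho$ is the utilization of the edge, $\tau$ is the mean packet processing (service) time, and $m_a$, $\sigma_a$ and $m_s$, $\sigma_s$ are the means and standard deviations of the packet interarrival and processing times, respectively.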

According to basic physics, signals propagate in a wire at about 2/3 of the speed of light, so the propagation time is the link length divided by this propagation speed, and the transmission time is determined by the size of the packet divided by the link capacity.
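As an illustration only (not the authors' code), the sketch below computes the four delay components of a single edge under the assumption that the mean service time of the queue equals the packet transmission time on that edge; all parameter names and example values are placeholders.

```python
LIGHT_SPEED = 3.0e8                       # m/s
SIGNAL_SPEED = (2.0 / 3.0) * LIGHT_SPEED  # propagation speed in the wire (~2/3 c)
PROCESSING_TIME = 10e-6                   # s, constant processing time (10 microseconds)

def kingman_queuing_delay(utilization, ca, cs, mean_service_time):
    """Kingman approximation of the mean waiting time in a G/G/1 queue."""
    return (utilization / (1.0 - utilization)) * ((ca ** 2 + cs ** 2) / 2.0) * mean_service_time

def edge_delay(packet_bits, capacity_bps, link_length_m, utilization, ca, cs):
    """Total edge delay: processing + queuing + transmission + propagation."""
    transmission = packet_bits / capacity_bps   # s
    propagation = link_length_m / SIGNAL_SPEED  # s
    queuing = kingman_queuing_delay(utilization, ca, cs, transmission)
    return PROCESSING_TIME + queuing + transmission + propagation

# Example: a 1240-byte packet on a 100 Mbps, 500 m link at 60% utilization.
print(edge_delay(1240 * 8, 100e6, 500.0, 0.6, 1.0, 1.0))
```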

For convenience in describing the model, the notations of the input parameters are listed in Table 1, and the output parameters and constants are listed in Table 2.

As can be seen in Table 1, the essential variables are all displayed with detailed explanations, and several of them are binary variables that are of great significance in determining the output results. Several output parameters correspond to variables in Table 1 and are listed in Table 2.

Table 2 lists the output parameters and constants, parts of which are related to variables in Table 1. With all the required parameters defined, a joint optimization model is designed to find the best configuration that satisfies the delay requirements while minimizing the number of hosts. The implementation of the SDN RSU cloud problem involves two objective functions: the delay minimization problem, which optimizes the delay between the RSUs and the users at a given bandwidth, and the service host number minimization problem, which aims to deploy the minimum number of service hosts while maximizing the service capability of the entire network. Moreover, everything is done under the premise of ensuring a certain amount of service resource redundancy so that network instability caused by a sudden increase in service demand is prevented. The objective function of the problem is formulated in equation (4), in which the number of hosts is rounded up to the next largest integer.

When solving this multiobjective function, because the priority between the two goals is hard to determine, two strategies are tried: giving delay priority over the number of hosts, and giving the number of hosts priority over delay, and their results are compared. After many trials, it is found that the priority of the two goals has almost no effect on the results, so the two goals are given equal priority. The goal of minimizing the number of service hosts conflicts with the minimization of delay. The multiobjective problem is therefore transformed into a single-objective optimization problem using a linear scaling technique [24]: the delay requirements are ensured first, and then the number of hosts is minimized. Besides, the constraints of objective (4) are given in (5) to (22) and are explained below.
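As a minimal sketch (not the authors' implementation), the priority experiment described above can be expressed with Gurobi's multi-objective interface: distinct priority values enforce a strict ordering between the two goals, while equal priorities blend them with linear scaling weights, which is the setting finally adopted. The variables and placeholder constraints below are illustrative only.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("jo_priority_sketch")
total_delay = m.addVar(lb=0.0, name="total_cloud_delay")         # continuous, in ms
num_hosts = m.addVar(vtype=GRB.INTEGER, lb=0, name="num_hosts")  # integer

# Placeholder constraints standing in for (5)-(22); the real MILP constraints
# would be added here instead.
m.addConstr(num_hosts >= 1, name="at_least_one_host")
m.addConstr(total_delay + 5 * num_hosts >= 40, name="delay_host_tradeoff")

# Equal priority for the two goals (the setting the paper ends up using);
# raising one priority value would instead enforce a strict ordering.
m.setObjectiveN(total_delay, index=0, priority=1, weight=1.0, name="delay")
m.setObjectiveN(num_hosts, index=1, priority=1, weight=1.0, name="hosts")
m.ModelSense = GRB.MINIMIZE
m.optimize()

print(num_hosts.X, total_delay.X)
```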

Constraints (5) and (6) restrict the definition of the binary indicator variable, which takes the value one when the associated condition holds and zero otherwise.

The general communication process can be summarized as follows: if demands exist, there must be corresponding loads. In fact, the RSU servers must meet or exceed the demands of the vehicles, as listed in (7). Constraints (8) and (9) ensure that the routes carrying the network loads exist only between the RSU servers and the users. Further, (10) and (11) ensure that sufficient load is reserved for carrying the control plane rules, which are an inherent cost of the system itself.

Equation (12) describes the load placed on an edge by the flow between two RSUs, which contains both the traffic generated and the traffic forwarded on the routes. The delay on an edge is matched with the load on it, where an index is used to find the delay in the lookup table (LUT). Equation (13) represents that the load on an edge is mapped to a set of discrete load levels. Constraint (14) ensures that the traffic load on an edge cannot exceed its capacity, while (15) and (16) state that there is only one specific value for the traffic load on an edge and for the delay corresponding to it.

Moreover, the delay of a specific path between the source and destination RSUs equals the sum of all the delays on its edges, as shown in (17). Constraint (18) represents each service's delay threshold, while constraint (19) describes whether a routing path is being used or not.

On the other hand, constraint (20) ensures that there is at least one host working on RSU m. Furthermore, constraint (21) represents service k's threshold on the number of service hosts. Salahuddin et al. [19] assume that all RSUs have unlimited resources to meet any service demand for simplicity. However, it would be nearly impossible for the RSUs to work without resource restrictions.

To address this problem, it is necessary to take into consideration the number of working hosts on a single RSU, which is usually limited.

The maximum number of hosts for a single RSU is limited, which is reflected in constraint (22). Constraint (22) thus presents the resource constraints, in sharp contrast to the work of Salahuddin et al. [19], who assume that each RSU has infinite resources to host the services.

4.2. Hardness Analysis

In this section, the complexity of the proposed model is analysed by examining the complexities of its two terms. For the delay minimization problem in the implementation of the SDN RSU cloud, all OpenFlow switches utilized to route the traffic loads in the network are multipath enabled.

Theorem 1. The minimization of the delay problem is NP-hard.

Proof 1. The minimization of the delay problem can be converted into the minimum multicommodity flow (MCF) problem [25]. Consider an instance of the MCF problem with a set of nodes, an edge set, and a set of traffic demands, where each traffic demand has a source node and a destination node. When multipath is enabled, a route exists from the source to the destination whichever paths it chooses, and each edge in the network topology has a delay. A graph is therefore formed by all the nodes and edges, where each node in the network is considered as a point in the graph. The delay of a path corresponds to the sum of the delays on every edge it traverses.
It is easy to prove, via mathematical construction, that there is a minimum path delay if and only if there is an optimal solution to the MCF problem. The construction can be done in polynomial time, and the MCF problem is NP-hard [25]. Therefore, the minimization of the delay problem is also NP-hard.
To ensure the best reachability and reliability of the entire network, it would be best to let all the RSUs work with the threshold number of hosts simultaneously. However, this is wasteful and unacceptable for both users and providers from a cost perspective, since it incurs a high operating cost. The minimization of the service host number problem therefore aims at deploying the minimum number of service hosts, without exceeding the threshold, while serving all flows in the network.

Theorem 2. The minimization of service hosts number problem is NP-hard.

Proof 2. The minimization of the service host number problem can be converted into the minimum weighted vertex cover (MWVC) problem. Consider an instance of the MWVC problem with a set of nodes and an edge set, where each node has a corresponding weight. All nodes and edges form a graph. Here, the set of network flows is treated as the set of edges, and each vehicle or RSU in the network is taken as a node in the graph. Therefore, the flow corresponding to an edge passes from its origin to its destination, and the number of hosts of each traffic flow corresponds to the weight of the destination node.
Through mathematical construction, it is easy to prove that the SDN RSU cloud can serve all flows with the minimum number of service hosts if and only if there is an optimal solution to the MWVC problem. The construction can be done in polynomial time, and the minimum weighted vertex cover problem is NP-hard [25]. Hence, the minimization of the service host number problem is also an NP-hard problem.
Furthermore, the delay variables are continuous and larger than 0, while the host number variables are integers. Therefore, the proposed model is a mixed-integer linear programming (MILP) problem. The MILP problem is solved with a Python-based simulation framework combined with Gurobi. The procedure described above, which minimizes the delay and the number of hosts simultaneously in the proposed model, is abbreviated as the joint optimization (JO) algorithm. The single-objective algorithms, host optimization (HO) and delay optimization (DO), are also analysed: HO places services so as to use the fewest hosts that meet the network demand, regardless of the resulting delay, while DO places services so as to minimize delay, without considering the number of hosting services.
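For illustration, the three algorithms can be viewed as different objective configurations of one Gurobi model. The helper below assumes a gurobipy model whose constraints (5) to (22) and the linear expressions for total delay and host count have already been built; the mapping of algorithm names to objectives follows the descriptions above and is an assumption, not the authors' code.

```python
from gurobipy import GRB

def set_algorithm_objective(model, total_delay, num_hosts, algorithm="JO"):
    """Configure a gurobipy Model for JO, DO, or HO.

    `total_delay` and `num_hosts` are assumed to be linear expressions over
    the model's variables; constraints (5)-(22) are assumed to be in place.
    """
    if algorithm == "JO":    # joint optimization: both goals with equal priority
        model.setObjectiveN(total_delay, index=0, priority=1, name="delay")
        model.setObjectiveN(num_hosts, index=1, priority=1, name="hosts")
        model.ModelSense = GRB.MINIMIZE
    elif algorithm == "DO":  # delay optimization: ignore the host count
        model.setObjective(total_delay, GRB.MINIMIZE)
    elif algorithm == "HO":  # host optimization: ignore the resulting delay
        model.setObjective(num_hosts, GRB.MINIMIZE)
    else:
        raise ValueError(f"unknown algorithm: {algorithm}")
    return model
```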

5. Results and Discussion

In this section, the ITS application scenario in the simulation is the coverage area of fixed RSUs in the downtown area of Longhua District, Shenzhen, China, where there are ten RSUs in total in the topology, as shown in Figure 3.

The network topology shown above represents the possible paths among the ten RSUs, which can be utilized to configure the allocation of cloud resources when matching the demands of the vehicles. The left part of Figure 3 is the map of the simulated area, while the right part shows the detailed topology of the designed SDN RSU network. In Figure 3, each RSU is decomposed into an OpenFlow switch and a server; for example, RSU 0 consists of switch 0 and server 0, and likewise for the remaining RSUs. It is assumed that the paths, represented as lists of nodes from origin to destination, are acyclic elementary chains, where the nodes are labelled from 0 to 9. For this RSU cloud topology, there are 500 possible origin-destination paths in total. As shown in Figure 3, vehicles are entering and leaving the confined geographical region continuously, and the RSUs are deployed along the roads. Taking the flow from RSU 0 to RSU 9 as an example, there are 12 possible different paths: P1: [0, 1, 2, 3, 4, 5, 6, 7, 9], P2: [0, 1, 2, 3, 4, 5, 7, 9], P3: [0, 1, 2, 3, 4, 8, 7, 9], P4: [0, 1, 2, 4, 5, 6, 7, 9], P5: [0, 1, 2, 4, 5, 7, 9], P6: [0, 1, 2, 4, 8, 7, 9], P7: [0, 2, 3, 4, 5, 6, 7, 9], P8: [0, 2, 3, 4, 5, 7, 9], P9: [0, 2, 3, 4, 8, 7, 9], P10: [0, 2, 4, 5, 6, 7, 9], P11: [0, 2, 4, 5, 7, 9], and P12: [0, 2, 4, 8, 7, 9].
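For illustration, the acyclic elementary paths between two RSUs can be enumerated with networkx. The edge list below is inferred from the twelve example paths of the flow from RSU 0 to RSU 9 and is therefore only an assumption about the topology in Figure 3.

```python
import networkx as nx

# Edge list inferred from the example paths above (an assumption, not the
# paper's full topology).
edges = [
    (0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4),
    (4, 5), (4, 8), (5, 6), (5, 7), (6, 7), (7, 8), (7, 9),
]
G = nx.Graph()
G.add_edges_from(edges)

# All simple (acyclic elementary) paths from RSU 0 to RSU 9.
paths = list(nx.all_simple_paths(G, source=0, target=9))
for i, p in enumerate(paths, start=1):
    print(f"P{i}: {p}")
print(f"{len(paths)} paths in total")
```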

5.1. Parameter Settings

To analyse the experimental results in a representative way, the number of services is set to 1 and 2, which typically represent the single-threading and multithreading instances, that is, the simple and complex service scenarios, respectively. During each period, the average demand is composed of multiple demands at each node for each service. Since many natural phenomena conform to the normal distribution, it can describe a common distribution of samples affected by a large number of small random disturbances. Therefore, the normal distribution is adopted here, similar to the experiments in [19]. For each group of experiments, the normal distribution is applied with the mean value equal to the demand D and the standard deviation equal to 0.05 × D. Under this circumstance, one moment is taken with the demands at 50 and 60 Mbps, respectively, to represent all the average demands ranging from 0 Mbps to 99 Mbps.

In the preliminary experiment, the size of a single packet is 1240 bytes, and multiple runs are carried out with the same service demands, ranging from 0 Mbps to 99 Mbps for each service, until there is no feasible solution; the demand indicator at each RSU is set to 1 or 0 randomly with the random function of NumPy [26]. In this way, the demand for a service at an RSU is around D or 0 otherwise, which means the demand differences among the 10 RSUs are large, and the SDN RSU cloud resources can therefore be managed.
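A minimal sketch of the demand generation described above, assuming per-RSU, per-service demands drawn from a normal distribution around D and switched on or off by a random 0/1 indicator; the function name and array shapes are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_demands(mean_demand_mbps, n_rsus=10, n_services=2, sigma_ratio=0.05):
    """Per-RSU, per-service demands: ~Normal(D, 0.05*D) where active, else 0."""
    active = rng.integers(0, 2, size=(n_rsus, n_services))  # random 0/1 indicator
    demand = rng.normal(mean_demand_mbps,
                        sigma_ratio * mean_demand_mbps,
                        size=(n_rsus, n_services))
    return np.clip(demand, 0, None) * active                # Mbps

print(generate_demands(60.0))
```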

For one service, 86 Mbps is the largest flow demand with a feasible solution, and 97 Mbps is the largest feasible flow demand for two services, where the entire SDN RSU cloud is working in a saturated state. In fact, when the flow demands are lower than 30 Mbps, the resulting delay and number of hosts change only slightly, because the flow demands are too small. Furthermore, the model becomes higher-dimensional when there are two or more services; since using more than two services only makes the network configuration more complex and the results harder to analyse, the number of services is set to 2 for the complex service scenario. Seven different experimental flow demands D = {40 Mbps, 50 Mbps, 60 Mbps, 70 Mbps, 80 Mbps, 90 Mbps, 97 Mbps} are set for two services, and D = {30 Mbps, 40 Mbps, 50 Mbps, 60 Mbps, 70 Mbps, 80 Mbps, 86 Mbps} for one service. All the initial network configuration parameters are shown in Table 3, which lists the numerical parameters that affect the experimental results in detail.

As shown in Table 3, the bandwidth capacity of each edge is set to 100 Mbps, and each RSU's threshold on the number of service hosts is 5. Besides, the threshold on the number of hosts for one service is also set to 5. For the convenience of splitting the network load, the load-splitting granularity is set to 1, which splits the load into the smallest pieces. The initial network configuration assumes that there is no control plane in the network. For the redundancy of the network, R is set to 1.5.
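For reference, the initial configuration parameters described above can be collected in a small Python structure; the key names are illustrative and not taken from the paper.

```python
# Initial configuration parameters from Table 3 (key names are illustrative).
INITIAL_CONFIG = {
    "edge_bandwidth_mbps": 100,      # bandwidth capacity of each edge
    "rsu_host_threshold": 5,         # max service hosts per RSU
    "service_host_threshold": 5,     # max hosts per service
    "load_split_granularity": 1,     # split the load into the smallest pieces
    "redundancy_factor": 1.5,        # R, service resource redundancy
    "initial_control_plane": None,   # no control plane in the initial configuration
    "packet_size_bytes": 1240,
}
```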

5.2. Experimental Results

In this section, the experimental data are worked out, and the results of the cloud delays and host numbers for all algorithms are analysed. To begin with, for this RSU cloud topology, the sum of cloud delay refers to the sum of the delays over all 500 origin-destination paths during the period.

For one service, the host numbers of joint optimization and delay optimization are identical as the demands rise from 30 to 86 Mbps. Moreover, the sums of cloud delay of JO and DO are almost the same, as shown in the right part of Figure 4. These results show that when the number of services is set to 1, the joint optimization and delay optimization algorithms are the same in essence. On the other hand, compared with the other two algorithms, the host optimization algorithm uses fewer hosts, but its sum of cloud delay increases explosively.

As shown in the left part of Figure 4, when the average demands are below 60 Mbps, the number of hosts can be as low as 2 in HO, far fewer than the 6 in JO and DO, a 66.67% reduction in the number of hosts. Moreover, when the average demands are below 80 Mbps, the sums of cloud delay in JO and DO are much lower than in HO. When the average demand reaches 86 Mbps, which is the maximum bearable demand for each service in the SDN RSU cloud, the sums of cloud delay for all three algorithms increase substantially. Although HO can handle the 86 Mbps situation with only 3 hosts, its sum of delay reaches 8144 ms, far larger than the 2961.10 ms of JO and DO. In short, the HO algorithm indicates the minimum number of hosts needed for the SDN RSU cloud.

Two services represent the complex service scenario, and here the joint optimization algorithm distinguishes itself from the delay optimization and host optimization algorithms, unlike in the single-service case. As can be seen in Figure 5, when the average demands are below 90 Mbps, JO achieves nearly the same sum of cloud delay as DO with the same or even fewer hosts. Taking 40 Mbps as an example, the sum of cloud delay in JO is 319.40 ms, which is only 2.10 ms greater than DO's, while the number of hosts in JO is 8, which is 3 fewer than DO's 11. For the 40 Mbps case, the delay of JO is only 0.66% higher than that of DO, but the number of hosts in JO is 27.27% lower than that of DO, which is of great significance for improving the overall cloud efficiency.

From Figure 5, it can be observed that the sum of cloud delays rises at an explosive speed and exceeds 8000 ms when the network demand reaches 97 Mbps. The reason why the delay grows drastically when the communication demand increases only slightly, from 90 Mbps to 97 Mbps, lies in the change of the network status. When the network demand is 90 Mbps, the network is far from saturated; when the demand reaches 97 Mbps, the network becomes saturated. If the clients and servers are connected with a limited 100 Mbps bandwidth but multiple applications need to transmit data approaching that bandwidth at the same time, a large amount of data will be waiting in a queue, which makes the delay increase greatly. Furthermore, it can be seen that the sum of cloud delays rises with the demands for the JO and DO algorithms, which is in accordance with intuition. As the average demand for each service rises from 40 to 97 Mbps, the number of hosts in HO rises as well. However, things are a little more complex for the sums of cloud delay in HO: the sum of cloud delay at 50 Mbps in HO is 31.26% larger than that at 60 Mbps, while the numbers of hosts are the same. To explain this phenomenon, the network configurations computed by the HO algorithm must be examined in contrast to the initial network configuration. The network configurations of the HO algorithm at 50 Mbps and 60 Mbps are shown in Table 4.

In Figure 5, the numbers of hosts at 50 Mbps and 60 Mbps both equal 5, of which 2 hosts run idly as backup and do not carry any load. This is because R = 1.5 is set to ensure a certain amount of service resource redundancy. Therefore, the two idling hosts in Table 4 are removed for the convenience of analysing the network configurations of the HO algorithm. As shown in Table 4, when the average demands reach 50 Mbps and 60 Mbps in the HO algorithm, two services are hosted on the same RSU at 50 Mbps, while the 3 hosts are distributed on different RSUs at 60 Mbps. For this reason, only 2 RSUs are working at 50 Mbps, in contrast to the 3 RSUs working at 60 Mbps, leading to the fact that the sum of cloud delay at 50 Mbps is greater than that at 60 Mbps with the same number of hosts in the HO algorithm. Moreover, when designing the RSU cloud, this phenomenon reminds us that services should be deployed globally in a distributed manner rather than in a centralized manner to minimize the cloud delay. The services are recommended to be distributed rather than concentrated on a few specific RSUs.

5.2.1. The Analysis of Network Configurations

Since the sum of cloud delay and the number of hosts for the JO, DO, and HO algorithms at 60 Mbps are already displayed in Figures 4 and 5, the idling hosts are removed here for the convenience of analysing the network configurations. To further demonstrate why the three optimization algorithms yield different sums of cloud delay, the working hosts' configurations are shown in detail in Table 5.

As can be seen in the left part of Table 5, the network configurations of the JO and DO algorithms at 50 and 60 Mbps for one service are exactly the same, which further verifies that these two algorithms are the same in essence in the single-service case. Meanwhile, for two services at 50 Mbps, the JO algorithm needs 1 less host than the DO algorithm, while their other host configurations are the same except for one host indicator, which is 0 in JO and 1 in DO. Under this circumstance, the sum of cloud delay is 341.00 ms in JO and 340.48 ms in DO, showing that JO achieves almost the same sum of cloud delay as DO even with 1 less host. Compared with JO and DO, HO greatly reduces the number of hosts, but its sum of delay is much larger, making it unsuitable for practical use. Still, it provides an emergency resource management solution for the SDN RSU cloud when network congestion happens.

6. Conclusions and Future Works

To reduce environmental pollution and achieve a safe, efficient, and green transportation environment, VANETs have been regarded as a key enabling technology for improving road safety and building intelligent transportation systems. VANETs can provide safety-related and non-safety-related applications and services to improve the safety and shared mobility of vehicle users. They form a multidisciplinary technical field involving road traffic, wireless communication, self-organizing systems, and more. The cloud resource management system realizes the interoperability of resource data in the VANETs: it can integrate similar or related resources through resource integration services to reduce the transmission of redundant data. On this basis, resource data distribution planning services realize the priority transmission of key data according to resource priority under limited bandwidth.

In this paper, the deep programmability of SDN is used to dynamically reconfigure the network for hosting and routing services and for effectively meeting the basic requirements of the VANETs. Multipath forwarding is also enabled to balance the loads on the network. The problem of achieving the minimum cloud delay with the least number of hosts is formulated as a mixed-integer linear programming (MILP) problem. Based on the study of the architecture of the cloud resource management system, a cloud resource management system for vehicle collaborative perception scenarios is realized, and its feasibility is verified. When the number of services is set to 1, which typically represents the single-threading scenario, the joint optimization and delay optimization algorithms are essentially the same. Besides, the results show that, under the given demands, when the number of services is greater than or equal to 2, the proposed algorithm is superior to the single-objective optimization algorithms, namely the delay optimization (DO) algorithm and the host optimization (HO) algorithm. In addition, the results also indicate that services should be distributed on a global scale rather than provided centrally.

For future work, the proposed algorithm can be applied to shared mobility services in a connected and automated environment to improve safety. Inclement road conditions can affect the traffic safety, level of service, traffic mobility, and fuel efficiency of vehicle flows [27], and their communication modes can be further studied and combined with the proposed model in future research. Besides, the cloud resource management system could be enhanced by comparing it with a centralized resource management system under the scenario of unmanned vehicle collaborative perception. Furthermore, in terms of RSU access, more complex scenarios can be considered in follow-up research, such as overlapping coverage of multiple RSUs and variable vehicle speeds; such in-depth research would be more practical. Follow-up work can further study these special circumstances and combine them with other advanced algorithms to propose more effective data distribution algorithms. There are also many security and privacy issues and research challenges in RSU clouds.

Data Availability

The data used to support the findings of this research are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest for the publication of the paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (52172329) and the 13th Five-Year National Key Research and Development Plan (2018YFB1201403).