Abstract

Software-defined networking (SDN) decouples the control plane and data forwarding plane to overcome the limitations of traditional networking infrastructure. Among the several communication protocols employed for SDN, OpenFlow is the most widely used for the communication between the controller and switch. In this paper two packet scheduling schemes, FCFS-Pushout (FCFS-PO) and FCFS-Pushout-Priority (FCFS-PO-P), are proposed to effectively handle the overload issue of multiple-switch SDN targeting the edge computing environment. Analytical models of their operation are developed, and extensive experiments based on a testbed are carried out to evaluate the schemes. The results reveal that both schemes outperform the typical FCFS-Block (FCFS-BL) scheduling algorithm in terms of packet wait time. Furthermore, FCFS-PO-P is found to be more effective than FCFS-PO in the edge computing environment.

1. Introduction

Nowadays, the traditional IP network can hardly deal with the huge volume of traffic generated in the rapidly growing Internet of Things (IoT). As an efficient solution to this issue, a new networking paradigm called software-defined networking (SDN) has been proposed [1]. In SDN the control plane and data plane of the network are decoupled, unlike in the traditional network. The role of the switches in SDN is to forward the data packets, while the controller is installed separately for controlling the whole network. OpenFlow [2] is one of the most popular protocols developed for the communication between the controller and switch in SDN, and it is an open standard protocol [3].

The proliferation of IoT and the success of rich cloud services have ushered in a new computing paradigm called edge computing [4]. Here, how to efficiently deliver the data from the IoT nodes to the relevant switches is a key issue. While numerous nodes exist in the edge computing environment, some of them are deployed to detect critical events such as fires or earthquakes. For such edge nodes, the data must be delivered with the highest priority due to its importance. Therefore, effective priority-based scheduling and a robust queueing mechanism are imperative to maximize the performance of a network where the data processing occurs at the edge. A variety of network traffic types also need to be considered to accurately estimate the performance [5], allowing early identification of potential traffic hotspots or bottlenecks. This is a fundamental issue in the deployment of SDN for edge computing.

Recently, numerous studies have been conducted to maximize the performance of the controller and OpenFlow switch of SDN. However, to the best of our knowledge, little work exists on the performance of SDN switches with priority scheduling. This is an undoubtedly important issue for edge computing, since the edge nodes process the received data based on priority. Furthermore, priority scheduling is necessary when multiple switches are involved. In this paper, therefore, priority scheduling schemes for multiple-switch SDN are proposed, where packet-in messages may be sent to the controller from each of the switches. On the basis of the M/G/1 queueing model for the OpenFlow switch, the First Come First Served-Pushout (FCFS-PO) and First Come First Served-Pushout-Priority (FCFS-PO-P) mechanisms are proposed for efficiently handling the overload issue. With FCFS-PO, the packets are served in the order of arrival, and the oldest one is pushed out when the buffer is full. With FCFS-PO-P, the packets are additionally serviced in the order of priority on top of FCFS-PO. Extensive experiments on a testbed reveal that FCFS-PO-P scheduling yields better performance than FCFS-PO in terms of sojourn and wait time under various traffic conditions. Furthermore, both of them display a much smaller wait time than the typical FCFS-Block (FCFS-BL) scheduling. The main contributions of the paper are summarized as follows:
(i) The packet scheduling issue with the SDN controller in the edge computing environment is identified, and the FCFS-PO and FCFS-PO-P scheduling schemes are proposed to cope with it.
(ii) Analytical models of the proposed scheduling schemes are developed using queueing theory, which can be applied to the evaluation of priority-based scheduling schemes.
(iii) The effectiveness of the proposed scheduling in solving the packet scheduling problem of the SDN controller in the edge computing environment is demonstrated by comparison with existing methods.

The rest of the paper is structured as follows. Section 2 provides an overview of SDN and edge computing. In Section 3 the priority-based scheduling schemes for multiple-switch SDN are proposed along with their analytical modeling. Section 4 evaluates the performance of the proposed schemes. Finally, Section 5 concludes the paper and outlines future research directions.

2. SDN and Edge Computing

SDN decouples the control logic from the underlying routers and switches. It promotes the logical centralization of network control and introduces the capability of programming the network [6–8]. As a result, flexible, dynamic, and programmable network operations can be offered. However, these merits come at the cost of essential network performance such as packet processing speed and throughput [9], which is attributed to the involvement of a remote controller in the administration of all forwarding devices.

2.1. SDN

SDN originated from a project of UC Berkeley and Stanford University [10]. In SDN the network control plane, which makes decisions on traffic routing and load balancing, is decoupled from the data plane, which forwards the traffic to the destination. The network is directly programmable, and the infrastructure can be abstracted to flexibly support various applications and services. Experts and vendors claim that this greatly simplifies the networking task [11]. SDN has three layers, unlike the traditional network of two layers, and a typical structure of SDN based on the OpenFlow protocol is depicted in Figure 1.

The separation of the control plane and data plane is realized by means of a well-defined programming interface between the controller and switches. The controller exercises direct control over the state of the data plane elements via a well-defined application programming interface (API), as shown in Figure 1. The most notable example of such an API is OpenFlow. An OpenFlow switch contains one or more tables of packet-handling rules (flow tables). Each entry of a flow table consists of mainly six fields, as listed in Table 1. A rule is matched against a subset of the traffic, and proper actions such as dropping, forwarding, and modifying are applied to the matched traffic. The flow tables are used to determine how to deal with the incoming packets.
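To make the flow-entry structure concrete, the following minimal Python sketch models a simplified entry and a highest-priority first-match lookup. The field names, action strings, and data types are illustrative assumptions for the description above, not the exact OpenFlow data structures.

```python
# Illustrative sketch (not an OpenFlow implementation): a simplified flow-table
# entry with typical fields and a highest-priority first-match lookup.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class FlowEntry:
    priority: int                    # higher value is matched first
    match: Dict[str, str]            # header fields to match, e.g. {"ip_dst": "10.0.0.2"}
    actions: List[str]               # e.g. ["forward:port2"] or ["drop"]
    counters: int = 0                # number of packets matched so far
    timeout: Optional[int] = None    # idle/hard timeout in seconds

def lookup(table: List[FlowEntry], packet: Dict[str, str]) -> Optional[FlowEntry]:
    """Return the highest-priority entry whose match fields are all satisfied."""
    for entry in sorted(table, key=lambda e: e.priority, reverse=True):
        if all(packet.get(k) == v for k, v in entry.match.items()):
            entry.counters += 1
            return entry
    return None  # table miss: drop, try the next table, or send a packet-in message
```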

The communication protocol between the controller and switches is particularly important due to the decoupling of the two planes. The OpenFlow protocol, which is widely used for SDN, defines the API for this communication, as Figure 2 illustrates.

In SDN the controller manipulates the flow tables of the switch by adding, updating, and deleting the flow entries. The operation occurs either reactively as the controller receives a packet from the switch or proactively according to the implementation of the OpenFlow controller. A secure channel is supported for the communication between the controller and switch. The OpenFlow switch supports the flow-based forwarding by keeping one or more flow tables [12, 13].

In Xiong et al. [14] a queueing model was proposed for estimating the performance of the SDN controller with the input of a hybrid Poisson stream of packet-in messages. They also modeled the packet forwarding of OpenFlow switches in terms of packet sojourn time [15]. In Sood et al. [16] an analytical model for an SDN switch was developed, which includes the key factors such as flow-table size, packet arrival rate, number of rules, and position of rules. Ma et al. [17] focused on delay estimation with real traffic and end-to-end delay control. A model was proposed in Mahmood et al. [18] which approximates multiple nodes of the data plane as an open Jackson network with the controller. A hybrid routing forwarding scheme as well as a congestion control algorithm was proposed in Shen et al. [19] to solve the traffic scheduling and load balancing problem in software-defined mobile wireless networks. In Miao et al. [20] the performance of SDN was investigated using the Markov Modulated Poisson Process (MMPP) in the presence of bursty and correlated arrivals.

2.2. Edge Computing

With the new computing paradigm called edge computing, data are transferred to the network edge for services including analytics. As a result, computing occurs close to the data sources [21]. The number of sensor nodes deployed in the environment is rapidly increasing due to the prevalence of IoT in recent years. Edge computing is an emerging ecosystem which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the network. It offers storage and computational resources at the edge, reducing the latency and increasing resource utilization. A simplified structure of edge computing is depicted in Figure 3.

As the futuristic vision of IoT, the latest developments of SDN could be integrated with edge computing to provide more efficient service. Here, how to make the edge nodes more efficient in processing the data is a matter of great concern. Besides, it is essential to accurately estimate the performance of the OpenFlow switch for better usage. While thousands of edge nodes may exist in a real environment, some of them are deployed in critical areas whose data must be processed with high priority. Therefore, priority-based scheduling is inevitable to dynamically control the operation of the network considering the properties of the traffic.

3. The Proposed Scheme

In this section the proposed scheduling schemes for SDN with multiple switches and the analytical models of their performance are presented. First, the priority scheduling scheme with a single switch and controller is presented. Then, the FCFS-PO and FCFS-PO-P schemes for multiple switches are introduced, which can effectively handle the switch overload issue.

3.1. Priority Scheduling with Single Switch
3.1.1. Basic Structure

SDN allows easy implementation of a new scheme on the existing network infrastructure by decoupling the control plane from the data plane and connecting them via an open interface. The SDN of a single switch and controller is depicted in Figure 4.

Both the controller and switch support the OpenFlow protocol for the communication between them. When a packet arrives at the OpenFlow switch, the switch performs a lookup in its flow tables. If a table entry matches, the switch forwards the packet in the conventional way. Otherwise, the switch requests an instruction from the controller by sending a packet-in message, which encapsulates the packet information. The controller then determines the rule for the flow, which is installed in the relevant switches. After that, all the packets belonging to the flow are forwarded to the destination without consulting the controller again [15]. The packet matching process is illustrated in Figure 5. Notice from the figure that both the incoming packets and packet-in messages are queued. Here the incoming packets and packet-in messages are handled with the M/G/1 model and M/M/1 model, respectively, as in existing research [14, 16].
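The reactive flow-setup loop described above can be sketched as follows. This is a simplified illustration under assumed names (handle_packet, controller_decide, and the placeholder forwarding policy are hypothetical), not the switch or controller implementation used in this paper.

```python
# Illustrative sketch of the reactive flow-setup loop: on a table miss the switch
# enqueues a packet-in message, the controller decides a rule, and the rule is
# installed so that later packets of the same flow bypass the controller.
import queue

packet_in_queue: "queue.Queue[dict]" = queue.Queue()  # queue of packet-in messages
flow_table: dict = {}                                  # flow key -> action

def controller_decide(pkt: dict) -> str:
    # Hypothetical placeholder policy: forward everything to port 1.
    return "output:1"

def handle_packet(pkt: dict) -> str:
    key = (pkt["src"], pkt["dst"])
    if key in flow_table:                              # match: forward conventionally
        return flow_table[key]
    packet_in_queue.put({"reason": "no_match", "data": pkt})  # packet-in to controller
    action = controller_decide(pkt)                    # controller computes the rule
    flow_table[key] = action                           # rule installed in the switch
    return action

print(handle_packet({"src": "h1", "dst": "h2"}))       # first packet triggers a packet-in
print(handle_packet({"src": "h1", "dst": "h2"}))       # later packets hit the installed rule
```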

3.1.2. Priority-Based Scheduling

Refer to Figure 5. The incoming packets to the OpenFlow switch are queued, and they are scheduled by either the FCFS or FCFS with priority (FCFS-P) scheduling policy. FCFS is applied for regular traffic, while FCFS-P is applied for urgent packets.

In the FCFS M/G/1 queueing system, the average sojourn time of a packet consists of two components. The first component is the time required to process the packets already waiting in the queue at the moment of the packet's arrival, and the second one is the time due to the packet currently in service. The second component is nonzero if the system is busy, while it is zero if the system is empty. Considering the two components, the average sojourn time of a packet with FCFS scheduling is obtained as follows. For the M/G/1 queue, λ is the arrival rate of the Poisson process and $\bar{x}$ is the average service time. The models in this subsection follow [22].

Here $N_q$ is the number of packets in the queue, ρ is the probability that the queue is busy, and $\overline{x^2}/(2\bar{x})$ is the average residual lifetime of the service. By applying Little's law and the P-K (Pollaczek–Khinchine) formula, (1) can be expressed as follows:
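For reference, the classical M/G/1 relations behind these expressions, Little's law combined with the Pollaczek–Khinchine formula, take the following textbook form. The notation ($\bar{x}$ for the mean and $\overline{x^2}$ for the second moment of the service time) is assumed here and may differ from the paper's original symbols.

```latex
% Standard M/G/1 (FCFS) results; notation is assumed, not copied from the paper.
\begin{align*}
  \rho &= \lambda \bar{x},                          && \text{(server utilization)} \\
  W    &= \frac{\lambda \overline{x^2}}{2(1-\rho)}, && \text{(P-K mean waiting time)} \\
  T    &= \bar{x} + W .                             && \text{(mean sojourn time)}
\end{align*}
```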

The model for FCFS-P differs from the FCFS M/G/1 queue due to the out-of-order operation. There exist several classes of packets having different Poisson arrival rates and service times. Assume that there are m classes, and let $\bar{x}_i$ and $\lambda_i$ be the average service time and arrival rate of class-i, respectively. The fraction of time the system works on class-i is then $\rho_i = \lambda_i \bar{x}_i$. The packets are processed according to their priority in preemptive mode, with a smaller class number denoting a higher priority.

In the M/G/1 queueing system with FCFS-P, the average sojourn time of an incoming packet consists of three components. The first component is the time taken for the packet currently in service to be finished, S0. The second one is the time taken to serve the packets waiting in the queue whose priority is higher than or equal to its own. The third component is the time taken to serve the packets arriving after it but having higher priority. The number of waiting packets of class-m follows from Little's theorem [23]. Hence, the sojourn time of the FCFS-P M/G/1 queueing system is as follows:

Equation (3) can be solved by recursively applying the equation as follows:

With preemptive scheduling, S0 is due to the packets of higher priority. Therefore, it can be expressed as follows:
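As a point of reference, the classical preemptive-resume priority M/G/1 result combines the three components above into the following closed form. This is the standard textbook expression under assumed notation; it is given here for orientation and does not reproduce the paper's original equations (3)–(5).

```latex
% Textbook preemptive-resume priority M/G/1 result (assumed notation):
% class 1 has the highest priority, \rho_k = \lambda_k \bar{x}_k, and
% \sigma_i = \rho_1 + \cdots + \rho_i with \sigma_0 = 0.
\begin{align*}
  R_i &= \frac{1}{2} \sum_{k=1}^{i} \lambda_k \overline{x_k^2}, \\
  T_i &= \frac{\bar{x}_i\,(1-\sigma_{i-1}) + R_i}{(1-\sigma_{i-1})(1-\sigma_i)} .
\end{align*}
```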

3.1.3. Packet-In Messages

As previously stated, the OpenFlow switch sends a packet-in message to the controller as a flow setup request when a packet fails to match the flow table. The processing of the packet-in message is illustrated in Figure 6. Notice that the packet-in messages correspond one-to-one with the flow arrivals regardless of the number of switches connected to the controller. The flow arrivals are assumed to follow a Poisson process. Therefore, the packet-in messages coming from several switches constitute a Poisson stream by the superposition theorem [15].
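The superposition argument can be stated compactly: if switch i generates packet-in messages as an independent Poisson process with rate λi, the aggregate stream arriving at the controller is again Poisson. The notation below (K switches) is assumed for illustration.

```latex
% Superposition of independent Poisson streams (standard property, assumed notation).
\[
  \lambda_{\mathrm{controller}} \;=\; \sum_{i=1}^{K} \lambda_i .
\]
```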

3.2. Priority Scheduling with Multiple Switches
3.2.1. Basic Structure

Since SDN decouples the control plane and data forwarding plane, the data transmission consists of two types. One is from the edge nodes to the switches, and the other one is from a switch to the remote controller for the packet-in message. They are illustrated in Figure 7.

There is at least one flow table in an OpenFlow switch, and an incoming packet is first matched against the first flow table. Upon a match with a flow entry, the instruction set of that entry is executed. On a table miss, there are three options: dropping the packet, forwarding it to a subsequent flow table, or sending it to the controller as a packet-in message. The option is selected by the programmer when the flow tables are installed in the OpenFlow switches.

3.2.2. FCFS-PO Scheduling

With FCFS-PO, if the buffer is full when a packet enters, it is put at the tail of the queue while the packet at the head is pushed out. All the packets move forward one position when a packet is served. Here the position of a packet in the buffer is decided by the number of waiting packets.
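A minimal Python sketch of this pushout buffer, written only to illustrate the policy (the class name and capacity handling are assumptions, not the paper's implementation):

```python
# Minimal sketch of the FCFS-PO policy: packets are served in arrival order, and
# when the buffer is full the oldest waiting packet (the head of the waiting
# line) is pushed out to admit the new arrival.
from collections import deque

class FcfsPushoutQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity          # maximum number of waiting packets
        self.buffer: deque = deque()

    def arrive(self, pkt):
        """Enqueue pkt; return the pushed-out packet if the buffer was full, else None."""
        dropped = None
        if len(self.buffer) == self.capacity:
            dropped = self.buffer.popleft()   # push out the head-of-line (oldest) packet
        self.buffer.append(pkt)               # newcomer joins at the tail
        return dropped

    def serve(self):
        """Serve packets in FCFS order."""
        return self.buffer.popleft() if self.buffer else None

q = FcfsPushoutQueue(capacity=3)
for p in ["p1", "p2", "p3", "p4"]:        # the fourth arrival overflows the buffer
    lost = q.arrive(p)
    if lost:
        print(f"{lost} pushed out by {p}")
print("served:", q.serve())               # p2 is served first, since p1 was pushed out
```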

The FCFS-PO mechanism is modeled assuming that the packets arriving at the switch follow a Poisson process with rate λ. This means that the interarrival times are independent and exponentially distributed. The packet service times are also independent and follow a general distribution with mean 1/μ. Also, assume that the system can contain at most N packets, so the buffer size is N − 1. Finally, the arrival process and the service process are assumed to be independent.

Let $Q_i$ ($i = 0, 1, \ldots, N$) denote the steady state probability of the M/G/1/N queue, and consider also the steady state probability of the corresponding M/G/1 queue. $Q_i$ is obtained as follows using the approach employed in [24]:

This is the steady state probability that a packets are already in the system when a new packet arrives.

Then, according to the Poisson Arrivals See Time Averages (PASTA) property [24], the following equation can be obtained:

Two performance indicators of the queue in the steady state are then considered: (i) the probability that a packet gets served, and (ii) the packet loss rate of the system.

With the FCFS-PO mechanism, when the buffer is full, the incoming packet is put at the end of the queue and the buffer management policy pushes out the head-of-line (HOL) packet to make room for it. Consider the steady state probability that the packet at position_a finally gets the service when a packet leaves the system. When a = 1, the packet is in service; otherwise, the packet needs to wait for the service. Assume that there are m incoming packets while a packet is in service. If the buffer does not become full, no packet is pushed out, and the position of the tagged packet simply advances when the service ends. While the buffer is full, each incoming packet pushes out the packet at position_2, and the position of the tagged packet decreases accordingly. Finally, the state-transition equation of this probability is obtained as follows [25]:

Here the boundary probability can be obtained from the result of the imbedded Markov points in [26]. Then, the average waiting time is analyzed. The following Boolean function is used to indicate whether the packet was served or not:

Using (9) and (10), the probabilities of these functions can be easily obtained.

Let us denote the number of incoming packets that arrive while a packet is being served, and the remaining time of the packet currently in service. The Laplace-Stieltjes Transform (LST) of the remaining service time is obtained as follows:

Denote the average waiting time of the tagged packet until it finally gets the service, conditioned on the waiting time being no longer than a given bound. The corresponding probability is shown as follows:

From this, the average waiting time can be obtained:

The conditional average waiting time for is as follows:

Here and are given by (8) and (13), respectively. is given by (15).

3.2.3. FCFS-PO-P Scheduling

In the modeling of FCFS-PO-P, the arrival process and service process of the incoming packets are the same as in the modeling of FCFS-PO. In this scheme the newly incoming packet is assumed to have the highest priority among the packets already in the queue. The newly incoming packet is therefore put in the front position so that it gets served first, and the buffer management policy pushes out the packet in position_N, which has waited longest in the queue. Consider the steady state probability that the packet in position_a is finally served. With the FCFS-PO-P policy, the packet waits in the sequence. Assume that m incoming packets arrive while a packet is in service. The packet can then be served only if it has not been pushed out, and it moves to a new position when the service ends. The state-transition equation of this probability is as follows [25]:
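For comparison with the FCFS-PO sketch above, the following minimal Python sketch illustrates the FCFS-PO-P admission rule as described in this subsection: the newest packet is placed at the front of the waiting line, and the packet at position_N is pushed out when the buffer is full. The class name and details are assumptions for illustration only.

```python
# Minimal sketch of the FCFS-PO-P policy: the newest arrival is treated as the
# highest-priority packet and placed at the front of the waiting line; when the
# buffer is full, the tail packet (position_N, the longest-waiting one) is pushed out.
from collections import deque

class FcfsPushoutPriorityQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer: deque = deque()

    def arrive(self, pkt):
        """Insert pkt at the front; return the pushed-out tail packet if full, else None."""
        dropped = None
        if len(self.buffer) == self.capacity:
            dropped = self.buffer.pop()       # push out the longest-waiting packet at the tail
        self.buffer.appendleft(pkt)           # newcomer is served before the older packets
        return dropped

    def serve(self):
        return self.buffer.popleft() if self.buffer else None

q = FcfsPushoutPriorityQueue(capacity=3)
for p in ["p1", "p2", "p3", "p4"]:            # the fourth arrival pushes out p1 at the tail
    q.arrive(p)
print("served:", q.serve())                   # p4, the newest packet, is served first
```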

Similarly, the boundary probability can be obtained from [26]. Now, the average waiting time is analyzed. Consider the probability that the packet gets the service with a waiting time not larger than a given value. Assume that the following equation is its LST:

Similar to (17), the corresponding recursive equation is obtained:

Let us denote the average waiting time of the packet until it finally gets the service. Similar to (14) and (15), it can be obtained as follows:

Finally, the average waiting time is as follows:

4. Performance Evaluation

In this section the performance of the proposed scheduling schemes for the OpenFlow switch is evaluated.

4.1. Experiment Environment

A testbed with three Raspberry Pi nodes is implemented for the experiment. The data with the priority scheduling are collected from the testbed and compared with the analytical models. Open vSwitch is installed on each Raspberry Pi node to implement a real switch with the OpenFlow protocol. Open vSwitch is a multilayer, open source virtual switch targeting all major hypervisor platforms. It was designed for networking in virtual environments, and thus it is quite different from the traditional software switch architecture [27]. The parameters of the Raspberry Pi nodes are listed in Table 2.

For the experiment, one remote controller and three OpenFlow switches are deployed. The three Raspberry Pi nodes operate as the OpenFlow switches, and Floodlight is installed on another computer working as the remote controller. Floodlight is a Java-based open source controller, and it is one of the most popular SDN controllers supporting physical and virtual OpenFlow-compatible switches. It is based on the Beacon controller from Stanford University [28]. Note that each Raspberry Pi node provides four network interfaces; three of them are used for collecting data, and one is used to connect to the controller network. The network traffic is generated by the software tool iPerf [29]. One host is selected as the server and another as the client with iPerf. The client then sends packets to the server using the UDP protocol at a bandwidth of 200 Mbps.
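The traffic-generation step can be reproduced along the following lines; the script below is a hypothetical sketch (the server address and test duration are made-up values) wrapping the standard iPerf UDP server and client invocations.

```python
# Hypothetical reproduction of the traffic-generation step (addresses are made up):
# one host runs an iPerf UDP server and another pushes UDP traffic at 200 Mbps.
import subprocess

SERVER_IP = "192.168.1.10"   # assumed address of the host acting as the iPerf server

def start_server():
    # Equivalent to running "iperf -s -u" on the server host.
    return subprocess.Popen(["iperf", "-s", "-u"])

def start_client(duration_s: int = 60):
    # Sends UDP packets to the server at a target bandwidth of 200 Mbps.
    return subprocess.run(["iperf", "-c", SERVER_IP, "-u", "-b", "200M", "-t", str(duration_s)])
```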

4.2. Simulation Results

First, the simulation results of the proposed scheduling algorithms with a single switch are presented. Figure 8 compares the FCFS and FCFS-P scheduling algorithms. Notice that the FCFS-P algorithm mostly results in a longer sojourn time than the FCFS algorithm, although it occasionally shows performance similar to FCFS when the incoming packets have high priority.

Figure 9 compares the data from the experiment and the analytical model. As aforementioned, three variables, λ, $\bar{x}$, and ρ, are used in the FCFS scheduling algorithm. Similar to the previous research [13], λ and $\bar{x}$ are empirically set to 2 and 0.016, respectively. ρ is set to 0.1, 0.5, and 0.9 to check the performance under various conditions. The figure reveals that the data stabilize as the simulation time passes, and ρ of 0.5 allows the closest estimation.

Figure 10 is for the case of FCFS-P. Here the values of ω0, , λi, and are 0.02, 0.018, 0.016, and 2, respectively. In this case, ρ of 0.9 shows the best result.

Under the assumed parameter settings, Figure 11 compares FCFS-PO and FCFS-PO-P. Observe from the figure that the average waiting time with FCFS-PO-P is smaller than that with the FCFS-PO mechanism.

The probability density function of wait time with FCFS-PO in the steady state is as follows [30]:

Here the terms involve the n-fold convolution of the corresponding distributions. The incoming packet is processed directly if there is no packet waiting in the system, in which case its waiting time is 0. The remaining quantities are the probability that the system is idle and the probability that there are n packets waiting in the system; in the latter case the incoming packet waits behind them. The total wait time consists of the remaining service time of the packet in service and the service times of the packets in front of it. The average wait time can then be obtained as follows:

The FCFS-PO-P mechanism is compared with FCFS-BL and FCFS-PO in Figure 12. With FCFS-BL, the incoming packet is lost if the buffer is full. Notice from the figure that the pushout mechanism is more effective than the blocking mechanism, and the advantage grows as λ increases. Overall, the FCFS-PO-P mechanism is the best among the three.

5. Conclusion

In this paper the FCFS-PO and FCFS-PO-P scheduling algorithms for multiple-switch SDN have been proposed targeting the edge computing environment. Since some nodes in this environment require urgent processing, a priority-based scheduling approach was adopted. Some switches might be overloaded if the packet arrival rate is high, and the proposed FCFS-PO and FCFS-PO-P mechanisms are able to effectively deal with this situation. The SDN environment was implemented with three Raspberry Pi node switches and one independent computer serving as the remote controller, and the sojourn time and wait time were analyzed. The measured data from the testbed were compared with those from the analytical models to validate them. The experiment results show that FCFS-PO-P scheduling is better than FCFS-PO scheduling in terms of wait time. The comparison with the existing FCFS-BL scheduling algorithm reveals that FCFS-PO-P scheduling is the best among the three, while FCFS-BL is the worst.

Several parameter values have been set empirically for the network operation. In the future, analytical models will be developed by which the values of these parameters can be properly decided. Also, for a more comprehensive understanding of the scheduling issues in SDN, a model of the packet-in message process based on the M/M/1 queue will be developed.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was partly supported by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2016-0-00133, research on edge computing via collective intelligence of hyperconnection IoT nodes), Korea, under the National Program for Excellence in SW supervised by the IITP (Institute for Information & Communications Technology Promotion) (2015-0-00914), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2016R1A6A3A11931385, research of key technologies based on software-defined wireless sensor network for real-time public safety service; 2017R1A2B2009095, research on SDN-based WSN supporting real-time stream data processing and multiconnectivity), the second Brain Korea 21 PLUS project, and Samsung Electronics.