Abstract

In WSNs, communication traffic is often time and space correlated, with multiple nodes in close proximity starting to transmit simultaneously. Such a situation is known as spatially correlated contention. Random access methods for resolving such contention suffer from a high collision rate, whereas traditional distributed TDMA scheduling techniques primarily try to improve the network capacity by reducing the schedule length. Usually, the situation of spatially correlated contention persists only for a short duration, and therefore generating an optimal or suboptimal schedule is not very useful. Additionally, if an algorithm takes a very long time to schedule, it not only introduces additional delay in the data transfer but also consumes more energy. In this paper, we present a distributed TDMA slot scheduling (DTSS) algorithm, which considerably reduces the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The DTSS algorithm supports unicast, multicast, and broadcast scheduling simultaneously, without any modification to the protocol. We have analyzed the protocol for its average case performance and have also simulated it using the Castalia simulator to evaluate its runtime performance. Both analytical and simulation results show that our protocol is able to considerably reduce the time required for scheduling.

1. Introduction

A wireless sensor network (WSN) is a collection of sensor nodes distributed over a geographical region to monitor events of interest in that region. To effectively exchange data among multiple sensor nodes, WSNs employ a medium access control (MAC) protocol to coordinate transmissions over the shared wireless radio channel. In WSNs, communication traffic is often space and time correlated; that is, all the nodes in the same proximity transmit at the same time. Such a situation is known as spatially correlated contention. There exist many applications and protocols in WSNs where spatially correlated contention can occur. Some of them are as follows.
(i) Event Detection: whenever an event occurs, all the nodes that sense the event start transmitting the details of the event to the base station. Typical examples of such situations are the detection of earthquakes and wildfires in WSNs for disaster recovery, fall-and-posture detection in healthcare WSNs [1], and intrusion detection [2] in WSNs for military applications, in which sensor nodes only have data to send when a specific event occurs. As the multiple nodes that detect the event are quite possibly in close proximity of each other, they share the same transmission medium. Consequently, if all the nodes report the event at the same time, the situation leads to spatially correlated contention.
(ii) Multicast Communication: in WSNs, the applications running on the sensor nodes need to be configured and updated multiple times during the lifetime of the network. Performing an update by transmitting the content to each individual sensor node separately would be very inefficient and would consume a lot of resources such as bandwidth and energy. In this situation, multicasting provides efficient configuration and update of the applications running on the sensor nodes by reducing the number of transmitted packets. Another example of multicast communication in WSNs is on-demand data collection, where the base station (sink node) sends a data query to a prespecified group of nodes asking them to send their sensory data. A WSN is usually multihop in nature, and therefore direct transmission of multicast messages from the sink is not possible. To achieve this, the sensor nodes also work as routers and forward a received multicast packet to their one-hop neighbors. This simultaneous forwarding of the same packet in a proximity by multiple routers leads to spatially correlated contention. A detailed discussion on multicast in WSNs can be found in [3].
(iii) Routing Protocols: on-demand routing protocols, for example, Ad Hoc On-Demand Distance Vector Routing (AODV) [4], try to find an appropriate path from source to destination only when a data transfer is required. This process is called route discovery; it is typically achieved by broadcasting a route request message in the network, and it consequently leads to collisions of the request message due to its simultaneous forwarding by neighboring nodes.
(iv) Clock Synchronization: clock synchronization protocols, for example [5], typically use a message passing mechanism to share local time information with neighboring nodes. Since, initially, there is no coordination between the nodes, they may transmit the protocol messages simultaneously with high probability, and the transmitted messages might therefore collide. This can considerably delay the process of synchronization.
(v) Tree Based Convergecasting: convergecast, that is, the gathering of information towards a central node, is an important communication paradigm across all application domains in WSNs. It is mainly accomplished by constructing a tree that is efficient in terms of delay, energy, and bandwidth. The algorithm for the construction of such a tree typically involves simultaneous transmission of protocol messages by the sensor nodes and hence causes spatially correlated contention.

The above discussion thus suggests that MAC protocols for WSNs should effectively handle correlated contention. MAC protocols for WSNs can be broadly classified into two major categories, namely, random access based and schedule access based. Random access methods do not use any topology or clock information and resolve contention among neighboring nodes for every data transmission. Thus, they are highly robust to any change in the network. However, their performance under high contention suffers because of the high overhead of resolving contention and collisions [6]. Contention causes message collisions, which are very likely to occur when traffic is spatially correlated. This, in turn, degrades data transmission reliability and wastes the energy of sensor nodes.

A MAC protocol is contention-free if it does not allow any collisions. Assuming that the clocks of the sensor nodes are synchronized, data transmissions by the nodes are scheduled in such a way that no two interfering nodes transmit at the same time. Early works [7–9] on scheduling are centralized in nature and normally need complete topology information, and therefore they are not scalable. To overcome the difficulty of obtaining global topology information in large networks, many distributed slot assignment schemes [10–14] have been proposed. The primary objective of traditional distributed TDMA scheduling techniques is to improve the network capacity by reducing the schedule length. This is effective for the kind of applications where a fixed schedule can be used for a sufficiently long time. All the scenarios for correlated contention discussed previously occur in the form of sessions, and the nodes require a time slot to transmit a sequence of data only during these sessions. Moreover, the same schedule cannot be reused for multiple future sessions, because by then the network topology might have changed, due to dynamic channel conditions and the occasional sleeping of sensor nodes to conserve their energy. For example, in [15], a different set of nodes is selected as routers (to equally distribute the consumption of energy among sensor nodes) every time the algorithm for the construction of the data collection tree is executed, and this changes the network topology. This suggests that scheduling has to be performed for every instance of correlated contention, and therefore the effective benefit of reducing the schedule length vanishes. If an algorithm takes too long to perform scheduling, as compared to the duration of the correlated contention, it will not only degrade the QoS (e.g., delay the detection of an event at the base station) but will also lead to poor bandwidth utilization and higher energy consumption. The preceding discussion emphasizes that, in order to effectively handle correlated contention, TDMA scheduling algorithms should take very little time to perform scheduling.

In this paper, we propose a distributed TDMA slot scheduling (DTSS) algorithm for WSNs. The primary objective of the DTSS algorithm is to reduce the time required to perform scheduling while restricting the schedule length to the maximum degree of the interference graph. The proposed algorithm is unified in the sense that the same algorithm can be used to schedule slots for different modes of communication, namely, unicast, multicast, and broadcast. In addition, the DTSS algorithm also supports a heterogeneous mode of communication, where a few nodes can simultaneously take a slot for unicast, while other nodes take it for multicast or broadcast purposes. In the DTSS algorithm, a node is required to know only the IDs of its intended receivers, instead of all its two-hop neighbors. Also, in the DTSS algorithm, the nodes in a neighborhood can take different slots simultaneously, as long as the resultant schedule is feasible. This is unlike the class of greedy algorithms, where the ordering between the nodes forces the distributed algorithm to run sequentially and restricts its parallel implementation. The DTSS algorithm does not make use of any ordering among the nodes.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 gives the assumptions we make in the design of our algorithm, introduces some definitions, and explains the basic idea of our algorithm. In Section 4, we present a detailed description of the DTSS algorithm. Section 5 gives the proof of correctness of the DTSS algorithm. Section 6 presents the average case complexity analysis of the DTSS algorithm. Section 7 discusses the simulation results and performance comparison of DTSS algorithm with existing work. Section 8 concludes the paper with suggestions for future work.

2. Related Work

Finding an optimal solution to the broadcast scheduling problem is NP-complete [16]. A different, but related, problem to TDMA node slot assignment is TDMA edge slot assignment, in which radio links (or edges) are assigned time slots instead of nodes. Finding the minimum number of time slots for a conflict-free edge slot assignment is also an NP-complete problem [17]. In [18], another specific scheduling problem, for convergecast transmission in wireless sensor networks, is considered, in which the goal is to find a minimum length frame during which all nodes can send their packets to the access point (AP); this problem is also shown to be NP-complete. Previous works [7–9, 19] on scheduling algorithms primarily focus on decreasing the length of schedules. They are centralized in nature, normally need complete topology information, and are, therefore, not scalable.

Cluster based TDMA protocols [20, 21] have good scalability. The common feature of these protocols is to partition the network into clusters, in which cluster heads are responsible for scheduling their members. However, cluster based TDMA protocols introduce intercluster transmission interference, because the clusters created by distributed clustering algorithms often overlap and several cluster heads may cover the same nodes.

Moscibroda and Wattenhofer [11] have proposed a distributed graph coloring scheme whose time complexity depends on the maximum node degree and on the number of nodes in the network. The scheme performs distance-1 coloring such that adjacent nodes have different colors. Note that this does not prevent nodes within two hops of each other from being assigned the same color, potentially causing hidden terminal collisions between such nodes. The NAMA protocol [10] proposes a distributed scheduling scheme based on a hash function to determine the priority among contending neighbors. A major limitation of this hashing based technique is that even though a node gets a higher priority in one neighborhood, it may still have a lower priority in other neighborhoods, so a node may have to wait for a large number of slots before it wins one. Secondly, since each node calculates the priority of all of its two-hop neighbors for every slot, the computational cost is high, and hence the scheme is not scalable for large networks with resource constrained nodes. SEEDEX [22] uses a hashing scheme similar to NAMA, based on a random seed exchanged in the two-hop neighborhood. In SEEDEX, at the beginning of each slot, a node that has a packet ready for transmission draws a "lottery" with a fixed probability. If it wins, it becomes eligible to transmit. A node knows the seeds of the random number generators of its two-hop neighbors, and hence it also knows the number of nodes (including itself) within two hops which are also eligible to transmit; it then transmits with a probability that decreases with this number. This technique is also called topology independent scheduling. In this case, collisions may still occur if two nodes select the same slot and decide to transmit.
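
The hash-based priority idea behind NAMA can be made concrete with a small sketch. The hash function (SHA-256), the function names, and the example IDs below are illustrative choices, not the actual NAMA specification; the point is only that every node recomputes every two-hop neighbor's priority in every slot.

    import hashlib

    def priority(node_id, slot):
        # Illustrative per-slot priority: a hash of (node ID, slot number).
        digest = hashlib.sha256(f"{node_id}:{slot}".encode()).digest()
        return int.from_bytes(digest[:4], "big")

    def wins_slot(node_id, two_hop_ids, slot):
        # A node transmits in a slot only if its priority beats every contender
        # in its two-hop neighborhood; all priorities are recomputed for every
        # slot, which is the per-slot computational cost noted above.
        mine = priority(node_id, slot)
        return all(mine > priority(other, slot)
                   for other in two_hop_ids if other != node_id)

    # Example: node 7 checks whether it may transmit in slot 12.
    print(wins_slot(7, [3, 5, 9, 11], 12))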

Another distributed TDMA scheduling scheme, called DRAND [12], proposes a distributed randomized time slot scheduling algorithm based on the centralized scheduling scheme RAND [9]. DRAND is also used within a MAC protocol, called Zebra-MAC [23], to improve performance in sensor networks by combining the strength of scheduled access during high loads and random access during low loads. The expected runtime complexity of DRAND is O(δ), where δ is the maximum size of a two-hop neighborhood in the wireless network; the simulation results presented by the authors show that the actual runtime grows beyond this bound because of unbounded message delays. FPRP [14], the Five-Phase Reservation Protocol, is a distributed heuristic TDMA slot assignment algorithm. FPRP is designed for dynamic slot assignment, in which real time is divided into a series of pairs of reservation and data transmission phases. For each time slot of the data transmission phase, FPRP runs a five-phase protocol for a number of cycles to pick a winner of the slot. In another distributed slot scheduling algorithm, DD-TDMA [13], a node i decides slot s as its own slot, where s is the minimum available slot, if all the nodes with ID less than the ID of node i have already decided their slots. The scheduled node broadcasts its slot assignment to its one-hop neighbors. Then the one-hop neighbors of node i broadcast this information to update the two-hop neighbors. This process is repeated in every frame until all nodes are scheduled.

The protocol in [24] proposes a contention-free MAC for correlated contention which does not assume a global time reference. The protocol is based on a local frame approach, where each node divides time into equal sized frames. Each frame is further divided into equal sized time slots; a time slot corresponds to the time duration of sending one message. The basic idea is that each node selects a slot in its own frame such that the selected slots of any two nodes within two hops of each other do not overlap. The protocol assumes that a node can detect a collision if two or more nodes (including itself) within its transmission radius attempt to transmit, which has practical limitations with real wireless transceivers. The scheduling algorithms [12–14] commonly have the following issues.
(i) All the algorithms use a greedy approach to graph coloring, which is inherently sequential in nature and forces the distributed algorithm to run sequentially. This restricts the parallel implementation of the algorithm. Because of the large runtime of these protocols, they are more suitable for wireless networks where the interference relationship or network topology does not change for a long period of time.
(ii) They perform two-hop neighbor discovery, which adds considerable cost to the runtime of scheduling. Additionally, the two-hop neighbors are computed based on the transmission range instead of the interference range, which is normally larger than the transmission range.
(iii) They perform either broadcast (node) scheduling or unicast (link) scheduling, but not both, and they do not consider multicast scheduling separately.

Finally, a classification of different scheduling algorithms based on problem setting, problem goal, type of inputs, and solution techniques can be found in [25].

3. Our Approach to TDMA Scheduling in WSNs

In many applications, such as weather monitoring and intrusion detection, sensor nodes are usually static. In this work also, we assume them to be static. Also, it is assumed that, for any task in an application, every node knows its receivers. Before a task begins its execution, the DTSS algorithm is executed to generate a TDMA schedule. After the task is finished, the TDMA schedule is discarded. We assume that each node in the WSN has a unique identifier. All the nodes in a WSN have some processing capability along with a radio to enable communication among them. Each node uses the same radio frequency. The communication capability is bidirectional and symmetric. The mode of communication between any two neighboring nodes is half-duplex; that is, only one node at a time can transmit. The transmission is omnidirectional; that is, each message sent by a node is inherently received by all the nodes within its transmission range.

The timeline is divided into fixed size frames, and each frame is further divided into a fixed number of time slots; this number of slots is called the schedule length. The nodes are assumed to be synchronized with respect to slot 0 and are aware of the slot size and the schedule length. The time of slot 0 is defined by the node which starts the scheduling process. To better understand the proposed algorithm, we introduce the following definitions.
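
As a small illustration of this timing structure, the sketch below maps the time elapsed since slot 0 to a frame index and a slot index. The function name, the 1 ms slot size, and the 16-slot schedule length in the example are illustrative values only (the protocol itself assumes just a fixed slot size and schedule length); slot indices in the sketch are 0-based for simplicity.

    def frame_and_slot(elapsed_ms, slot_ms, frame_len):
        # Map the time elapsed since slot 0 to (frame index, slot index within the frame).
        slot_count = int(elapsed_ms // slot_ms)   # absolute slot number since slot 0
        return slot_count // frame_len, slot_count % frame_len

    # With 1 ms slots and a schedule length of 16 slots, a node 37.4 ms after
    # slot 0 is in frame 2, slot 5.
    print(frame_and_slot(37.4, 1.0, 16))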

Definition 1. The interference set I_i of a node i is defined as the set of nodes which are within the interference range of node i. That is, we say that a node j ∈ I_i if it cannot successfully receive any message transmitted by any other node at the same time when node i is also transmitting. Moreover, if only node i has transmitted in a slot, then a node in I_i may or may not receive the message successfully.

Note that the interference set I_i is different from the set of one-hop neighbors of node i, which depends upon the transmission range of node i. Usually, the interference range is larger than the transmission range.

Definition 2. The receiver set R_i of a node i is defined as the set of intended receivers of node i.

The size of the set R_i depends upon the type of communication, namely, unicast, multicast, or broadcast transmission. Note that R_i is a subset of the one-hop neighbors of node i. The DTSS algorithm assumes that only the set R_i is known to node i, instead of all its two-hop neighbors.

Definition 3. The sender set S_i of a node i is defined as the set of nodes j such that i ∈ R_j.

A node i need not know the set S_i before the start of the algorithm. It can be populated when node i receives protocol messages whose destination ID is its own ID.

Definition 4. The interference graph G_I = (V, E) of a WSN is defined as follows. V is the set of nodes in the WSN, and E is the set of edges, where an edge (i, j) exists if and only if node i and node j cannot take the same time slot. The number of edges with which a node is connected to the other nodes is called the degree of the node.

Note that an edge (i, j) may exist even when j ∉ I_i, and, conversely, j ∈ I_i does not necessarily imply that the edge (i, j) exists. We say that node i and node j conflict, and are adjacent to each other, if there exists an edge between them. An edge (i, j) exists if and only if node i and node j cannot take the same slot. Two nodes cannot take the same slot if the transmission of one node interferes at one of the receivers of the other node. The conflict between nodes depends not only upon their respective positions and transmission power but also on the type of communication, namely, unicast, multicast, or broadcast. Two nodes within the interference range of each other (j ∈ I_i) can even take the same slot, if their transmissions do not interfere at each other's receivers (Figure 1). Therefore, our definition of the interference graph is free from the well known exposed-node problem. This fact is usually ignored by most of the existing algorithms. On the other hand, two nodes which are not in the interference range of each other (j ∉ I_i) cannot transmit simultaneously if their transmissions interfere at each other's receivers. In this manner, our definition of the interference graph is also free from the hidden-node problem.

A sensor node requires a slot to transmit data packets such that data can be received successfully at all of its receivers without any interference.

The following two types of conflict relations can exist between a pair of nodes.

Strong-Conflict Relation. Two nodes i and j have a strong-conflict relation if I_i ∩ R_j ≠ ∅ and I_j ∩ R_i ≠ ∅; that is, each node's transmission interferes at some receiver of the other.

Weak-Conflict Relation. Two nodes i and j have a weak-conflict relation if either I_i ∩ R_j ≠ ∅ or I_j ∩ R_i ≠ ∅, but not both.

Figures 2(a) and 2(b) depict the situations when node i and node j have strong-conflict and weak-conflict relations, respectively. In case of a weak-conflict relation, if I_j ∩ R_i ≠ ∅ but I_i ∩ R_j = ∅, we say that node j is stronger than node i and denote it by j ≻ i.
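
Using the notation above (I_i for the interference set and R_i for the receiver set of node i), the conflict classification can be sketched as follows. This is an illustrative fragment with hypothetical dictionary inputs, not part of the protocol itself.

    def harms(i, j, I, R):
        # True if a transmission by node i interferes at some intended receiver of node j.
        return bool(I[i] & R[j])

    def conflict_type(i, j, I, R):
        # 'strong': each node interferes at a receiver of the other (neither may reuse the slot).
        # 'weak'  : only one direction interferes; the unharmed node is the stronger one.
        # 'none'  : the two transmissions can share a slot (no exposed-terminal penalty).
        a, b = harms(i, j, I, R), harms(j, i, I, R)
        if a and b:
            return "strong"
        if a or b:
            return "weak (stronger node: %d)" % (i if a else j)
        return "none"

    # Example: node 1 interferes at node 2's receiver, but not vice versa,
    # so the pair is in weak conflict and node 1 is the stronger node.
    I = {1: {4}, 2: {5}}   # nodes within the interference range of nodes 1 and 2
    R = {1: {3}, 2: {4}}   # intended receivers of nodes 1 and 2
    print(conflict_type(1, 2, I, R))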

Definition 5. The interference degree Δ of a WSN is defined as the maximum of all the degrees of the nodes in the interference graph of the WSN.

The runtime of the DTSS algorithm is analyzed in Section 6. In case of broadcast transmission, Δ and δ are roughly the same, where δ is the size of the two-hop neighborhood set; for unicast or multicast transmission, however, Δ can be much smaller than δ.

The TDMA slot scheduling problem can be formally defined as the problem of assignment of a time slot to each node, such that if any two nodes are in conflict (strong or weak), they do not take the same time slot. Such an assignment is called a feasible TDMA schedule. A feasible TDMA schedule which takes minimum number of slots is called an optimal TDMA schedule. Our goal in this paper is to develop an algorithm which can find a feasible but not necessarily optimal TDMA schedule and to minimize the time required to perform scheduling.
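
A feasibility check directly mirrors this definition. The sketch below is purely illustrative (the schedule and the conflict-pair inputs are hypothetical) and is not part of the DTSS protocol.

    def is_feasible(schedule, conflicts):
        # schedule: dict mapping node ID -> assigned slot
        # conflicts: iterable of (u, v) pairs that are in strong or weak conflict
        for u, v in conflicts:
            if schedule[u] == schedule[v]:
                return False
        return True

    # Nodes 1 and 2 conflict; assigning them different slots keeps the schedule feasible.
    print(is_feasible({1: 0, 2: 3, 3: 0}, [(1, 2)]))   # True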

The basic idea of the DTSS algorithm is as follows. For each slot s in a frame, each node i checks whether it can take the slot s by sending a request message to the first receiver in R_i with slot probability p_i, which depends upon the remaining number of free slots in the frame (slots which are not taken by others) known at node i at the current time. If node i receives a response message from the first receiver, then it blocks the slot and tries to get responses from the remaining receivers in R_i using the same slot in subsequent frames. After receiving responses from all the receivers, it assigns the time slot to itself; otherwise, it unblocks the slot as soon as a response from one of the receivers is not received, and it repeats the above process all over again. In case of unicast communication, node i can directly assign the slot to itself as soon as it receives the response message from its receiver, instead of blocking the slot. This is because receiving a response message from receiver node j tells node i that no other node adjacent to i in G_I has also blocked the same slot; otherwise, the REQ message transmissions of node i and that node would have collided at node j. An adjacent node of a node i in a graph is a node that is connected to i by an edge.

Once a slot s is assigned to a node i, it continuously transmits in the same slot in subsequent frames. This ensures that a conflicting node j of node i in G_I cannot assign the same slot to itself, because of the collision between the transmissions of node i and node j at one of the receivers in R_j. Furthermore, the nodes in R_i also propagate this information to the next hop through their own transmissions. Note that the receivers of the messages transmitted by the nodes in R_i are adjacent to node i in G_I.

When a node j hears, from one of the receivers of node i, that slot s is blocked by node i, it leaves the slot temporarily and avoids further collisions, so as to increase the chance of node i getting the slot. Similarly, when node j hears, from one of the receivers of node i, that slot s is assigned to node i, it leaves the slot permanently and increases its slot probability for the other free slots.
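
The sketch below compresses this per-node behavior into a single loop. The callback try_receiver(r, s), which stands in for one REQ/RES exchange in slot s, and the other names are hypothetical; the sketch ignores the taken/blocked bookkeeping and the collision handling that the full algorithm in Section 4 provides, so it is only a simplified illustration of the contention idea.

    import random

    def contend_for_slot(receivers, frame_size, try_receiver, known_taken=()):
        # try_receiver(r, s) returns True when the RES from receiver r comes back.
        while True:                                    # one iteration per frame
            free = [s for s in range(frame_size) if s not in known_taken]
            if not free:
                return None                            # nothing left to contend for
            p = 1.0 / len(free)                        # slot probability
            for s in free:
                if random.random() >= p:
                    continue                           # do not attempt this slot
                if not try_receiver(receivers[0], s):
                    continue                           # first REQ failed: keep contending
                # slot blocked; verify with the remaining receivers
                # (in the real protocol this happens in the same slot of later frames)
                if all(try_receiver(r, s) for r in receivers[1:]):
                    return s                           # every receiver answered: slot assigned
                # otherwise the slot is unblocked and contention starts over

    # Example with an always-successful exchange (no contention at all):
    print(contend_for_slot([4, 9], frame_size=8, try_receiver=lambda r, s: True))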

4. The DTSS Algorithm

In this section, we describe the proposed DTSS algorithm for the TDMA slot scheduling problem defined in Section 3. The number of slots, F, in a frame is taken to be at least Δ + 1. The slots are numbered from 1 to F. Table 1 summarizes the data structures and variables maintained by a node i to implement the algorithm. The DTSS algorithm uses two protocol messages, namely, request (REQ) and response (RES), for signaling purposes. A RES message is sent by a node whenever its ID is the same as the destination ID in a received REQ message. The REQ/RES messages contain four fields, namely, source ID, destination ID, taken, and state. The value of the taken field in both REQ and RES messages is a copy of the corresponding local variable (the list of slots known to be taken). The value of the state field in a REQ message is the same as the value of the local variable nr (the number of receivers from which a response is still awaited), while its value in a RES message is the value of the state field received in the corresponding REQ message. The taken field in a REQ/RES message is used to inform a node about the slots which are already taken by other nodes conflicting with it, whereas the state field helps the nodes to know the status of the node from which the REQ message was transmitted. A higher level description of the DTSS algorithm is given in the pseudocode of Algorithm 1, which shows the DTSS algorithm as executed on each node i in the current slot s.

if i.slot = null and i.b_slot = null and s ∉ (i.taken ∪ i.blocked) then
   with probability p_i do
      send REQ(i, rx_ID, i.taken, nr)      // rx_ID points to the first node in R_i
      rx_ID = first node in R_i
   end do
end if
if i.slot = s or i.b_slot = s then
   send REQ(i, rx_ID, i.taken, nr)
   rx_ID = next node in R_i                // advanced only after the RES is received
end if
// perform channel listening
if i receives a REQ(j, dest_ID, taken_j, state) then
   if REQ.dest_ID = i then
      add j to S_i
      send RES(i, j, i.taken, REQ.state)
   end if
   if j ∈ I_i and REQ.state = 0 then
      // slot s has been taken by node j
      add s to i.taken
      if j ∈ S_i then add s to i.recv
      end if
   end if
   if (j ∈ I_i or j ∈ S_i) and REQ.state ≠ 0 then
      // slot s is blocked
      add s to i.blocked
   end if
end if
if i receives a RES(k, dest_ID, taken_k, state) then
   if RES.dest_ID = i then
      if nr = |R_i| then i.b_slot = s
      end if
      nr = nr − 1
      if nr = 0 then i.slot = s
      end if
   else
      if RES.state = 0 then
         // slot s is taken by node RES.dest_ID
         add s to i.taken
         if RES.dest_ID ∈ S_i then add s to i.recv
         end if
      else
         // slot s is blocked
         add s to i.blocked
      end if
   end if
end if
if i.slot = null and the RES for the transmitted REQ message is not received then
   i.b_slot = null, nr = |R_i|
end if
if s ∈ i.blocked then
   remove s from i.blocked if its blocked duration has expired
end if

Each node i contending for a time slot passes through several states. Figure 3 shows the finite state transition diagram for a node i. Initially, node i enters the contention state (CS), where it sends a REQ message in the current time slot s with probability p_i. We call this probability the slot probability, and it is equal to 1/(number of free slots known at node i). On receipt of a REQ message at a node j from node i, node j immediately sends a RES message in the current slot and also adds node i to S_j, if its ID is the same as the destination ID in the received REQ message. The duration of a slot is kept sufficiently large to carry out the transmission of a pair of REQ and RES messages. The destination ID of the REQ message transmitted by node i in CS state can be any node from the set R_i. If a node i receives a RES message at time slot s in response to the REQ message sent by it and |R_i| > 1, then it blocks the time slot s and enters the verification state (VS). However, if |R_i| = 1, it assigns the slot to itself and enters the scheduled state (SS) directly. In VS state, it sends REQ messages one by one to the remaining nodes in R_i at the same time slot in subsequent frames, by setting the pointer rx_ID to the next node in the list R_i. Furthermore, it does not transmit in slots other than s while it is in VS state. If the node successfully receives the RES messages from all of its receivers in R_i, it assigns the slot to itself and enters the scheduled state (SS); otherwise, it goes back to the CS state and starts the process all over again. In SS state, the node always transmits a REQ message in slot s, so that no other node can take the same slot, and it does not transmit in slots other than s. The destination of the REQ message in SS state is selected in a round robin fashion among the nodes in R_i. Moreover, it does not progress to the next receiver until it receives a RES message from the current receiver. If a node does not receive the RES message k consecutive times from the same receiver node, then it goes back to the CS state.
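
The state transitions just described can be summarized compactly as a small state machine. The sketch below is only an illustration: the event labels and the retry threshold k stand in for the corresponding protocol conditions and are not actual message fields.

    from enum import Enum, auto

    class State(Enum):
        CS = auto()   # contention: probabilistically send a REQ to the first receiver
        VS = auto()   # verification: poll the remaining receivers in the same slot
        SS = auto()   # scheduled: keep transmitting REQs in the owned slot

    def next_state(state, event, unicast=False):
        # Transition function matching the description above.
        if state is State.CS and event == "first_res_received":
            return State.SS if unicast else State.VS   # unicast skips verification
        if state is State.VS and event == "all_res_received":
            return State.SS
        if state is State.VS and event == "res_missing":
            return State.CS                            # restart contention
        if state is State.SS and event == "res_missing_k_times":
            return State.CS                            # slot lost to a stronger node
        return state

    print(next_state(State.CS, "first_res_received"))  # State.VS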

If a node i in CS state receives a REQ message from node j at slot s with state ≠ 0, and j ∈ I_i or j ∈ S_i, then it adds the slot s to its list of blocked slots. If a node i in CS state receives a REQ message from node j at slot s with state = 0 and j ∈ I_i, it adds the slot s to its list of taken slots and also updates the slot probability of the slots not in this list to 1/(number of remaining free slots). Additionally, if j ∈ S_i, then it also adds the slot s to the list of slots in which it has to receive from its senders.

If a node i in CS state receives a RES message in response to a REQ message sent by itself, and nr = |R_i|, then it blocks the slot. It then decreases nr by 1, and if nr becomes zero, it assigns the slot to itself. If a node i in CS state receives a RES message from node k in response to a REQ message from node j at slot s with state ≠ 0, then it adds the slot s to its list of blocked slots. A node does not transmit its own REQ messages in the slots belonging to this list for the number of subsequent frames specified in the state field of the received RES message. This allows node j in VS state to successfully transmit its remaining REQ messages and subsequently move to SS state. If a node i in CS state receives a RES message from node k in response to a REQ message from node j at slot s with state = 0, it adds the slot s to its list of taken slots and also updates the slot probability of the slots not in this list to 1/(number of remaining free slots). Node i permanently leaves the slots in this list and does not transmit any further REQ messages in these slots. Additionally, if j ∈ S_i, then it adds the slot s to the list of slots in which it has to receive from its senders.

It is possible that a node k does not receive the transmission of a REQ message from node i, or of the RES message sent in response to it, in slot s, because REQ/RES messages can get lost. In this situation, node k would not come to know that the slot s is either blocked or taken by node i until the transmission of the REQ/RES messages in the same slot of the next frame. To avoid this delay, the nodes in R_i convey the same information through the taken field of their own REQ messages transmitted in slots other than s. In this way, while a node is trying to take a slot, it is also helping others to know the slots which are already taken by other conflicting nodes.

Finally, a node j, with j ≻ i, can enter state SS while node i is already in state SS. This is because the transmission of REQ messages by node i in slot s cannot interfere at any of the nodes in R_j, and therefore node j will receive the RES messages from all of its receivers and move to state SS. On the other hand, the REQ messages sent by node i in SS state will collide at one or more receivers in R_i due to the transmission from node j, and therefore node i will not receive the corresponding RES messages. This situation is shown in Figure 2(b). However, this can only happen if node j is not aware that the slot is already taken by node i. To handle it, if the RES message is not received by node i for k consecutive times, node i leaves the slot by adding it to its list of taken slots and goes back to the CS state. If node j is not in SS state, its transmissions cannot collide with those of node i in the same slot k consecutive times. This ensures that node i only leaves the slot s if j ≻ i and node j is in SS state.

5. Correctness of the DTSS Algorithm

In this section, we prove that the schedule created by the DTSS algorithm is a feasible TDMA schedule. In a feasible schedule, two conflicting nodes do not transmit in the same time slot; that is, two conflicting nodes are not assigned the same time slot by the DTSS algorithm. This holds because, after the execution of the DTSS algorithm is completed, only one node (among any set of conflicting nodes) remains in the SS state for a particular time slot. In the following, we prove this fact as Theorems 6 and 7 for the strong-conflict and weak-conflict relationships, respectively.

Theorem 6. If two nodes i and j have a strong-conflict relationship, then they cannot both be in SS state for the same time slot at any time during the execution of the DTSS algorithm.

Proof. Let v_i and s_i be the frame indexes at which node i enters VS and SS state, respectively, for a time slot t. It is possible that v_i = s_i if |R_i| = 1. Furthermore, let v_j and s_j be the corresponding frame indexes for node j for the same time slot t. We can assume that once a node enters VS state from CS state, it remains in VS state until it enters SS state. If this is not so, then it goes back to CS state, and the argument can be repeated. It is to be noted that both nodes cannot enter SS state at the same frame index without at least one of them going into VS state first. Now the following three cases arise.
Case 1 (v_i < v_j). In this case, only node i can enter SS state, provided it gets responses from all the nodes in R_i. There is no way node j can enter SS state, since node i will be continuously transmitting REQ messages in slot t from frame index v_i onwards, and node j cannot get responses from all the nodes in R_j (Figure 4). Hence, in this case only node i will be able to enter SS state.
Case 2 (v_i = v_j). In this case, both node i and node j will be transmitting REQ messages from frame index v_i onwards. Therefore, node i will not be able to get responses from all the nodes in R_i. Similarly, node j will not get responses from all the nodes in R_j. So, the node which fails to receive a response first will go back to CS state. As a result, neither node i nor node j will be able to enter SS state while Case 2 holds.
Case 3 (v_i > v_j). This case is similar to Case 1, except that now only node j can enter SS state, provided it satisfies the corresponding condition.
From the above argument, it is clear that at most one of the nodes i and j will be in SS state for the same time slot during the execution of the DTSS algorithm. Hence, the theorem is proved.

In the case of a weak-conflict relationship, it is possible that, while a node i is already in SS state, another node j (stronger than node i) enters SS state for the same time slot during the execution of the DTSS algorithm. Therefore, Theorem 6 is not sufficient to prove the correctness of the DTSS algorithm when weak-conflict relationships also exist between the nodes.

Theorem 7. If two nodes i and j have a weak-conflict relationship, then eventually only one of the nodes i and j will remain in SS state for the same time slot after the execution of the DTSS algorithm is completed.

Proof. Let v_i, s_i, v_j, and s_j be the frame indexes of nodes i and j, as in Theorem 6. Also, without loss of generality, assume that I_j ∩ R_i ≠ ∅ and I_i ∩ R_j = ∅; that is, node j is stronger than node i. As in Theorem 6, we also assume that, after entering VS state, both nodes remain there until they enter SS state. It is to be noted that both cannot enter SS state at the same frame index directly from CS state, without at least one of them going into VS state first. In this case also, the following three cases arise.
Case 1 (v_j < v_i). This case is similar to Case 1 of Theorem 6: only node j will be able to enter SS state, and node i will not be able to enter SS state.
Case 2 (v_j = v_i). In this case also, only node j will be able to enter SS state, and node i will not be able to enter SS state, since the transmissions of node j collide at the receivers of node i but not vice versa.
Case 3 (v_j > v_i). In this case, node j can enter SS state anyway. However, node i can also enter SS state, provided it gets responses from all the nodes in R_i before frame index v_j. Let us assume that node i satisfies this condition. Now nodes i and j can enter SS state in any order. Assume that both nodes i and j are in SS state; then node i will not be able to get a response k consecutive times from a node in R_i. As a result, node i will move back to CS state, and it will not be able to enter SS state again for this slot. Hence, eventually only node j remains in SS state.

6. Complexity Analysis of DTSS Algorithm

In this section, we evaluate the expected runtime of the DTSS algorithm, that is, the time by which all nodes in the network reach SS state. Table 2 summarizes the notations used in this section. First, we consider the situation when all the nodes in the network interfere with each other's transmissions; that is, the interference graph is complete. This situation mainly occurs in single-hop WSNs. In this case, only a single transmission of a REQ message in a slot can be successful, and therefore nodes can enter SS state one at a time in each slot, as shown in Figure 5. Furthermore, we assume that |R_i| = 1 for each node i, so that nodes directly enter SS state without entering the VS state. The analysis can be further extended to the case when |R_i| > 1. Initially, every node transmits a REQ message with probability 1/F in every slot.

Let T_k be the time slot at which the kth node enters SS state. Note that T_k is a random variable. Clearly, T_N is the time slot at which the last node enters SS state, which is exactly the desired runtime of the DTSS algorithm. Let X_k = T_k − T_{k−1}, with T_0 = 0. In this case, E[T_N] = Σ_{k=1..N} E[X_k].

Theorem 8. E[T_N] is O(N) for single-hop WSNs.

Proof. At time slot T_k, exactly k nodes are in SS state, and each of the remaining nodes which are not in SS state sets its slot probability to 1/(F − k) for the unoccupied slots. Let q_k be the probability that only one node transmits a REQ message in a slot after time slot T_k and that the message is received successfully at the intended receiver. Note that a REQ message can be lost not only due to collisions but also because of channel impairments; therefore, a successful packet transmission also depends upon the packet error rate (PER). X_{k+1} is then a geometric random variable with success probability q_k and expectation 1/q_k, and the upper bound on E[T_N] (the runtime) is obtained by summing these expectations over k.
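
As an illustration of this argument, the Monte Carlo sketch below simulates a simplified version of the single-hop model: every slot is treated as a contention opportunity in which each unscheduled node transmits with probability 1/(F − k), a slot is won when exactly one node transmits and the packet survives the channel, and with the minimum frame size F = N the average runtime grows roughly linearly in N. This is an illustration under those simplifying assumptions, not the exact derivation.

    import random

    def singlehop_runtime(n_nodes, frame_size, per=0.0, trials=2000):
        # Average number of slots until every node in a single collision domain
        # has taken a slot, under the simplified contention model above.
        total = 0
        for _ in range(trials):
            scheduled, slots = 0, 0
            while scheduled < n_nodes:
                slots += 1
                p = 1.0 / (frame_size - scheduled)     # slot probability of each free node
                senders = sum(random.random() < p for _ in range(n_nodes - scheduled))
                if senders == 1 and random.random() >= per:
                    scheduled += 1                     # exactly one REQ went out and survived
            total += slots
        return total / trials

    # With the minimum frame size F = N, the average runtime grows roughly linearly in N.
    for n in (5, 10, 20):
        print(n, round(singlehop_runtime(n, frame_size=n), 1))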

Now, we consider the more general situation when not all the nodes in the network interfere with each other's transmissions; that is, the interference graph is not necessarily complete. Again we assume that |R_i| = 1. This situation mainly occurs in multihop WSNs. Further, we assume that the interference graph is regular with degree Δ. Note that F is always taken to be greater than Δ, the maximum degree of the interference graph. Therefore, assuming the graph to be regular with degree Δ gives a worst case analysis; that is, the expected runtime of the DTSS algorithm for a nonregular interference graph with maximum degree Δ will always be less than or equal to the expected runtime for a regular interference graph of degree Δ.

We first find, for an arbitrary node, the minimum value that its probability of entering SS state in a round can take, irrespective of the slot probabilities of the other nodes in the network. This will help us to set an upper bound on the expected time required by any node to reach SS state.

Let us rearrange the IDs of the nodes in the following manner.
(i) The ID of the node under consideration is changed to 1.
(ii) The IDs of the nodes adjacent to node 1 in the interference graph are changed to 2 through Δ + 1. The ordering among these nodes can be arbitrary.
(iii) The IDs of all the other nodes become Δ + 2 to N. The ordering among these nodes can be arbitrary.
Note that the above rearrangement does not change the probability of the node entering SS state in a round. Our task is then to find the minimum value of this probability for node 1 and, for the sake of clarity, we omit the subscript 1 in what follows.

Let P(s) be the probability that node 1 (after rearrangement) enters SS state at slot s of a round. The value of P(s) depends upon the transmission probability of node 1 in slot s and the transmission probabilities of its adjacent nodes in the same slot: node 1 succeeds in slot s only if it transmits in that slot and none of its Δ adjacent nodes transmits. The probability P that node 1 enters SS state in a round is equal to the probability that it enters SS state in at least one slot of the round. Therefore, P can be written in terms of the P(s) as P = 1 − Π_{s=1..F} (1 − P(s)).

In order to find the minimum value of , we define another term as a function of ’s as follows:

We know that, for a constant sum, the product can be maximized when the sum is partitioned equally [26]. Therefore, for a constant value of , can achieve its minimum value , if , ,  . Obviously, is a function of .

Theorem 9. Let . Then is a monotonically increasing function.

Proof. Let and be the two values of , such that > . We know that is achieved when , , . Let and be the corresponding values of , for all with respect to and . In this case, the value of and would be and , respectively (4). Therefore,

It is clear from Theorem 9 that, to find , we need to first minimize the . Let us define a binary square matrix, , of size , in the following manner:

The matrices and show an example of probability matrix and its corresponding binary transformation for . Consider

Let (number of 1’s in row of matrix ) and (number of 1’s in column of matrix , excluding first row). The can be rewritten in terms of and , , as follows:

Let be the matrix for which the value of is minimum. To find out the properties of , we start with the hypothesis that would be minimum, if none of the nodes adjacent to node 1 is in SS state. This implies that node 1 is still transmitting in all the slots with probability ; that is, , for all , . Now, we will present two lemmas based on the above hypothesis; this hypothesis will be used to find out the properties of in Theorem 12, where we also explain the need for it.

Lemma 10. For a given instance of matrix , let , for all , , and for a slot , , for all . Then, for any row , reduces or remains the same, if is changed from 1 to 0.

Proof. Let and be the respective sums before and after the conversion of to 0. We need to show that . Similarly, and can be defined. Since , for all , , the can be written as and since becomes 0, after the conversion, would be Therefore, from (10) and (11), we get Similarly, for all other slots and , To show that , we calculate as follows:
Since the number of 1’s in row is , the number of terms in the first summation of above equation would be exactly . Therefore,

Lemma 11. For a given instance of matrix , let , for all , , and , for all , . Then the following holds. For any two columns and and, for any row , such that , and , either reduces or remains the same if the values of and are interchanged.

Proof. Consider , , , and as defined in Lemma 10. We need to show that . Here, , , , and . Therefore,

Now we will try to prove that should satisfy a few constraints, in terms of and , , with the help of Lemmas 10 and 11.

Theorem 12. has the following properties:(1), for all , ;(2)for exactly two columns and , and for all other columns , .

Proof. We prove both the properties for two different cases: and .
Case 1 (). The property can be proved by contradiction. First, we show that , . If with , for some row , then node is in SS state. Therefore, node 1 should have stopped transmitting in slot , that is, , which contradicts our assumption that . Now, we show that , . Let and the set of column indexes for which ; then , such that , for all , . Therefore, by the virtue of Lemma 10, reduces or remains the same, if is changed from 1 to 0. The same process can be repeated till .
The property can also be proved by contradiction. We know that and , . Therefore, . First, we show that , . For a column , ; otherwise would become less than . In this case, for any row , such that and , the value of and can be interchanged by virtue of Lemma 11. This proves that could be either 0, 1 or 2, 1 . Since, any column can have at most two 1’s, this implies that at most one column of type can exist and that also can be increased to 1 by virtue of Lemma 11. Furthermore, the number of columns of type cannot be one, since is even. Finally, we can say that number of columns of type is exactly 2; otherwise, the total sum will be less than .
Case 2 (). Let and be the corresponding summation for Cases 1 and 2, respectively. The value of would be . We will prove that by showing that any perturbation in the matrix corresponding to Case 1 will increase the value of . We have already proved, in Case 1, that any modification in any of the rows from 2 to row and leaving row 1 unchanged will increase . Now, let us change a single entry 1 to 0; that is, node 1 has decided not to transmit in slot . This only happens when at least one adjacent node in has gone to SS state for slot , which implies that and , for all . Let us interchange the row with row and column with column . In this case, , , for all , and and , for all . Consider the submatrix of size times . The minimum value of which can be achieved by this submatrix would be . Moreover, , because . Therefore, .

From Theorem 9, we know that the can be achieved when is minimum and should satisfy the properties as given in Theorem 12. Therefore,

The following matrix shows one of such matrix for :

To calculate the expected runtime of the DTSS algorithm, we model the behavior of the system using a discrete time Markov chain (DTMC) whose state is the number of nodes in SS state at the beginning of a round. The transition probabilities P_{k,j}, from k nodes in SS state to j nodes in SS state, are defined as follows:

In this DTMC (see Figure 6), all the states are transient except state N, which is an absorbing state. The probability of leaving a transient state k is always greater than 0; that is, P_{k,k} < 1. A transient state cannot be visited again once it is left. This shows that the DTSS algorithm converges in finite time. Let W_k be the number of rounds required to reach state N starting from state k. Our goal is to find E[W_0], which can be calculated using the following recurrence relation: E[W_N] = 0 and E[W_k] = 1 + Σ_{j=k..N} P_{k,j} E[W_j] for k < N.
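
The recurrence can be solved numerically by backward substitution, since nodes never leave SS state and the chain therefore only moves from state k to states j ≥ k. The sketch below assumes an upper triangular transition matrix and uses made-up probabilities purely for illustration; it shows the computation of E[W_0], not the actual transition probabilities of DTSS.

    def expected_rounds(P):
        # Expected number of rounds to reach the absorbing state N from each state k,
        # for an upper triangular transition matrix P (P[k][j] = probability of going
        # from k nodes in SS state to j nodes in SS state in one round).
        n = len(P) - 1                      # index of the absorbing state
        E = [0.0] * (n + 1)                 # E[n] = 0 by definition
        for k in range(n - 1, -1, -1):
            tail = sum(P[k][j] * E[j] for j in range(k + 1, n + 1))
            E[k] = (1.0 + tail) / (1.0 - P[k][k])
        return E

    # Toy example with 3 nodes: from every state, one more node reaches SS with probability 0.4.
    P = [[0.6, 0.4, 0.0, 0.0],
         [0.0, 0.6, 0.4, 0.0],
         [0.0, 0.0, 0.6, 0.4],
         [0.0, 0.0, 0.0, 1.0]]
    print(expected_rounds(P)[0])            # 7.5 rounds on average (three geometric waits of mean 2.5)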

Note that the above DTMC is an approximation of the actual stochastic process, in which the transition probabilities depend not only upon the number of nodes in SS state but also on exactly which nodes are in SS state.

We show that the value of E[W_0] is greater than the actual expected time required to reach state N starting from state 0, by proving that the transition probability of moving from k nodes in SS state to j nodes in SS state in the actual stochastic process is always greater than P_{k,j}. We know that the probability of each node moving from CS state to SS state in a round is always greater than the minimum per-round probability given by (17), and therefore the probability that, out of the nodes still in CS state, exactly the required number of nodes enter SS state is greater than the corresponding P_{k,j}.

Figure 7 shows the graph of E[W_0] together with a bounding function. The graph shows that E[W_0] is upper bounded by this function, and therefore, for a fixed frame size, E[W_0] is O(log N). We know from (17) that the minimum per-round probability depends only upon Δ, which is a measure of the two-hop network density δ.

Another method to analyze the expected runtime of the DTSS algorithm is to calculate the expectation of the maximum of all the Y_i's, where Y_i is the number of rounds taken by node i to reach SS state; that is, the runtime in rounds is max_{1≤i≤N} Y_i. The Y_i's can be assumed to be i.i.d. (independent and identically distributed) geometric random variables with parameter equal to the minimum per-round probability q of entering SS state, in which case the resulting expectation is higher than the actual expected number of rounds. The value of E[max_i Y_i] can be calculated as E[max_i Y_i] = Σ_{r≥0} (1 − (1 − (1 − q)^r)^N). By considering the above infinite sum as right and left hand Riemann sum approximations [29] of the corresponding integral, we obtain
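
As a numerical sanity check of this line of argument, the expectation of the maximum of N i.i.d. geometric random variables can also be estimated by simulation; for a fixed success probability q it grows like log N, consistent with the conclusion drawn below. The function below uses inverse-transform sampling and is purely illustrative.

    import math, random

    def mean_max_geometric(n, q, trials=5000):
        # Monte Carlo estimate of E[max of n i.i.d. Geometric(q) variables].
        total = 0
        for _ in range(trials):
            total += max(int(math.log(1.0 - random.random()) / math.log(1.0 - q)) + 1
                         for _ in range(n))
        return total / trials

    # The growth is logarithmic in n for a fixed success probability q.
    for n in (10, 100, 1000):
        print(n, round(mean_max_geometric(n, q=0.3), 1),
              round(math.log(n) / -math.log(1.0 - 0.3), 1))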

With the change of variable , we have

From (24), we can conclude that E[max_i Y_i] is O(log N) for a fixed neighborhood density. We know from (17) that the per-round success probability q depends only upon Δ, which is a measure of the two-hop neighborhood density δ. A more rigorous analysis of the expectation of the maximum of i.i.d. geometric random variables can be found in [30].

7. Simulation Results

We have used the Castalia simulator [27] to study the performance of the DTSS algorithm. A multihop network, based on the TelosB node hardware platform that uses the CC2420 transceiver [28] for communication, is used in the simulation. The transceivers run at a 250 kbps data rate and at a transmission power which gives approximately 40 m of transmission range in the absence of interference. All nodes are distributed randomly within a 250 m × 250 m area. Note that, at 250 kbps, it takes about 0.5 ms to transmit a packet of size 128 bits (80 bits for the MAC header and 48 bits for the taken and state payload). Hence, we set the TDMA time slot to a period of 1 ms, which is sufficiently long for the transmission of the REQ/RES messages. The performance of the protocol has been averaged over 100 simulation runs. The neighborhood size of the network is changed by varying the number of nodes from 50 to 300. This setup produces topologies with two-hop neighborhood density δ values varying between 5 and 50.
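
The slot-size arithmetic above can be reproduced with a one-line helper; the function name is illustrative only.

    def tx_time_ms(bits, rate_kbps):
        # Time on air for one packet; kbps is equivalent to bits per millisecond,
        # so the result is directly in milliseconds (radio overheads are ignored).
        return bits / rate_kbps

    # A 128-bit packet at 250 kbps takes about 0.5 ms; the 1 ms slot used in the
    # simulations is sized to accommodate a REQ/RES exchange.
    print(tx_time_ms(128, 250))   # 0.512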

Figure 8 shows the average number of slots taken by all the nodes to decide their slots in case of broadcast scheduling, for two different frame sizes. The error bars denote 95% confidence intervals. Figure 8 shows that the runtime increases linearly with the neighborhood density δ. Given a slot size of 1 ms, the total runtime for a very high density network is approximately 7 s. Furthermore, if we take more slots per frame, then the runtime decreases and the confidence interval also improves.

Figure 9 shows the average number of slots taken by all the nodes to decide their slots in case of broadcast scheduling, for increasing frame sizes. Figure 9 shows that the runtime reduces rapidly with a small increase in the frame size, while a further increase in the frame size does not have much impact on the runtime. This fact can be utilized as a tradeoff between runtime and frame length.

Figure 10 shows the average number of slots taken by all the nodes to decide their slots as the number of receivers is varied from unicast to broadcast. Figure 10 suggests that unicast or link scheduling can be performed in less than one second for a network with fairly high network density.

We now compare DTSS with DRAND [12] and DD-TDMA [13]. Figure 11 shows the performance of DTSS along with DRAND and DD-TDMA with respect to the runtime of each algorithm. The comparison is based on broadcast transmission, because both DRAND and DD-TDMA implement only this mode of transmission. The primary reason for the lower runtime of DTSS is that it generates a feasible schedule with the number of available slots already fixed, whereas the other algorithms try to generate a suboptimal schedule using a greedy approach, which is inherently sequential. In case of unicast and multicast scheduling, DTSS takes even less time to compute the schedule than for broadcast transmission. The number of slots taken by DTSS is always F, as shown in Figure 12, whereas the number of time slots taken by DRAND and DD-TDMA can be less than F.

8. Conclusions and Future Work

For many applications in WSNs, efficiently handling spatially correlated contention is an important requirement. DTSS takes very little time to perform scheduling as compared to other existing distributed scheduling algorithms. We have shown that the runtime of the DTSS algorithm grows only linearly with the number of nodes for single-hop WSNs and, for a fixed neighborhood density, only logarithmically with the number of nodes for multihop WSNs, and therefore it is scalable for WSNs with a large number of nodes. The interference model used by DTSS is more realistic than the conventional protocol interference model. Additionally, DTSS has a unique feature of unified scheduling, in which a few nodes can simultaneously take a slot for unicast while other nodes take it for multicast or broadcast. Although the number of slots taken by DTSS is bounded by the frame size F, further efforts can be made to reduce the number of slots. In future, we plan to work on a variation of the DTSS algorithm for the situation when nodes are not assumed to be synchronized before performing the slot scheduling.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.