Abstract
Named data networking (NDN) is a recently proposed paradigm for the future Internet, in which communication among nodes is based on data names, decoupled from data locations. In dynamic and self-organized cognitive radio ad hoc networks (CRAHNs), it is difficult to maintain end-to-end connectivity between ad hoc nodes, especially in the presence of licensed users and intermittent wireless channels. Moreover, IP-based CRAHNs suffer from several issues such as poor scalability, inefficient mapping, poor resource utilization, and location dependence. By leveraging the advantages of NDN, in this paper we propose a new cross-layer fine-grained architecture called named data networking for cognitive radio ad hoc networks (NDN-CRAHNs). The proposed architecture provides distinct features such as in-network caching, security, scalability, and multipath routing. The performance of the proposed scheme is evaluated against an IP-based scheme in terms of average end-to-end delay and packet delivery ratio. Simulation results show that the proposed scheme outperforms conventional cognitive radio ad hoc networks in terms of average content download time and packet delivery ratio.
1. Introduction
Dynamic Spectrum Access (DSA) became an active research area after a report published by the Federal Communications Commission (FCC) [1]. Various smart, programmable devices have evolved in the field of DSA. Using these devices, we can sense and share the unused parts of the spectrum band, also called spectrum holes or spectrum opportunities (SOPs) [2]. Among these devices, cognitive radio (CR) technology has attracted the most attention, as CR devices can sense, share, and change their operational parameters according to the environment [2]. CR devices are widely used in practical applications such as military, healthcare, smart grid, mission-critical networks [3], and public safety [4]. In cellular networks, a large part of the spectrum band is unused, and this technology makes it possible to exploit the free spectrum. CR technology is also useful in delay-tolerant networks [5] for delay-sensitive applications.
Cognitive radio networks (CRNs) are composed of CR devices and contain two types of users: licensed or primary users (PUs) and unlicensed or secondary users (SUs). PUs utilize the spectrum under traditional static spectrum allocation policies, while SUs exploit the SOPs in the absence of PUs. CRNs can be further categorized into two types: infrastructure-based cognitive radio networks and infrastructureless cognitive radio networks, or cognitive radio ad hoc networks (CRAHNs). CRAHNs are self-reconfigurable and self-organized networks, as shown in Figure 1.

Due to rapid technological progress, cheap and powerful devices such as tablets and smartphones have become part of daily life. People can easily generate and publish their own data on the Internet and want to retrieve data with minimum delay. A recent Cisco VNI report [6] shows that Internet traffic increased eightfold over the past five years and that video traffic will account for around 80% of all IP traffic in 2016. Internet traffic today is mostly content-based, and users are more interested in obtaining their desired data than in its location. For example, a YouTube or Facebook user only needs to fetch the required content and does not care much about where the content comes from. The traditional IP-based Internet architecture is built on an end-to-end communication paradigm and is no longer well suited to future application requirements, in which the demand for time-shift TV, high-definition VoD, and user-generated content (UGC) is increasing rapidly.
In [7], the authors describe the basic idea of content-based networking without any implementation work. They introduce the fundamental concepts of content-based networking and formulate two new models: the named datagram model and the predicate model. The paper also highlights the relationship between content-based and traditional networking.
Information-Centric Networking (ICN) [8] is an emerging future Internet paradigm that brings a major shift in the field of communication. ICN is based on a receiver-driven communication paradigm, in which communication is based on content names instead of locations. Various active research projects such as 4WARD [9], CCN [10], COAST [11], COMET [12], PURSUIT [13], and CONVERGENCE [14] are currently working in this area. Among these proposals, Content-Centric Networking (CCN), also known as named data networking (NDN) [15], has attracted the research community due to its simple operation and the decoupling of data from location. It provides a content-based communication mechanism instead of a host-based one. In this paradigm, the narrow waist of IP is replaced by named content chunks, as shown in Figure 2. Each content item is retrieved based on its unique name identifier; that is, communication is based on what (content) instead of where (location). NDN was initially investigated mainly in wired networks, but it has also shown promising results in wireless ad hoc networks [16–20].

In CRAHNs, nodes do not fully utilize the broadcast nature of the wireless channels, and communication relies on single end-to-end connections. In such a dynamic network, it is difficult for each node to maintain accurate routing state and end-to-end connectivity in the presence of PU activity and channel fluctuations. Therefore, in this position paper, we apply the NDN communication paradigm, owing to its simple design, to the CRAHN environment and propose a new cross-layer fine-grained architecture called named data networking for cognitive radio ad hoc networks (NDN-CRAHNs).
The rest of the paper is organized as follows. Section 2 provides an overview of the NDN paradigm. Section 3 describes the advantages of the NDN approach in CRAHNs. Section 4 presents the proposed architecture in detail. Section 5 describes the simulation results and analysis. Section 6 discusses further research challenges in NDN-CRAHNs, and finally conclusions are presented in Section 7.
2. NDN in a Nutshell
NDN, as a future Internet architecture, provides an efficient way to retrieve data by name. Each data name resembles a uniform resource identifier (URI) and is unique and hierarchical. NDN operates using two types of packets, Interest and Data. An Interest is issued by the source (consumer) node, and the destination (provider) node that holds the content sends the desired Data back to the source. NDN also provides an in-network caching mechanism in which every node can store data for later use. Figure 3 shows the internal structure of an NDN node.

Every node maintains three data structures: (i) the content store (CS), which stores data temporarily; (ii) the pending interest table (PIT), which keeps a record of unsatisfied Interest packets; and (iii) the forwarding information base (FIB), which resembles a routing table and is used to forward Interest packets towards data providers. The communication mechanism in NDN is as follows. When a source node wants some data, it sends an Interest packet containing the name of that data towards the destination using information from the FIB. When an intermediate node receives the Interest packet, it first checks its own CS for the data. If a match is found, it sends the Data back to the source. Otherwise, it checks its PIT to determine whether another node has already requested the same data. If an entry is found in the PIT, the node discards the Interest without further processing and adds the new incoming interface to the existing PIT entry. Otherwise, it creates a new PIT entry and forwards the Interest packet to other nodes based on the interface information stored in the FIB. The FIB is populated by a name-prefix-based routing protocol and contains multiple output interfaces for each name prefix.
When a Data packet comes back, the receiving node first checks its PIT. If an entry matches, it forwards the Data to the interface(s) from which the corresponding Interest(s) came, stores the Data in its CS, and deletes the PIT entry. If there is no matching PIT entry, the Data packet is considered unsolicited and is dropped. In this way, the Data packet follows the reverse path of PIT entries until it reaches the source(s). Figure 4 shows the basic communication process in NDN.
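To make the CS/PIT/FIB interplay concrete, the following is a minimal sketch of the node behavior described above, assuming simple in-memory tables and a hypothetical forward(packet, face) callback standing in for the link layer; it is an illustration, not the reference NDN implementation.

```python
# Minimal sketch of an NDN node with the three data structures described above.
class NdnNode:
    def __init__(self, forward):
        self.cs = {}            # content store: name -> data
        self.pit = {}           # pending interest table: name -> set of incoming faces
        self.fib = {}           # forwarding information base: name prefix -> list of faces
        self.forward = forward  # hypothetical callback: forward(packet, face)

    def on_interest(self, name, in_face):
        if name in self.cs:                              # CS hit: answer locally
            self.forward(("data", name, self.cs[name]), in_face)
        elif name in self.pit:                           # already requested: aggregate
            self.pit[name].add(in_face)
        else:                                            # new request: record and forward
            self.pit[name] = {in_face}
            for face in self.lookup_fib(name):
                self.forward(("interest", name), face)

    def on_data(self, name, data):
        faces = self.pit.pop(name, None)
        if faces is None:                                # unsolicited Data: drop
            return
        self.cs[name] = data                             # cache for future requests
        for face in faces:                               # satisfy all pending consumers
            self.forward(("data", name, data), face)

    def lookup_fib(self, name):
        # longest-prefix match on '/'-separated name components
        parts = name.strip("/").split("/")
        for i in range(len(parts), 0, -1):
            prefix = "/" + "/".join(parts[:i])
            if prefix in self.fib:
                return self.fib[prefix]
        return []
```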

3. Advantages of NDN in CRAHNs
In the literature, content-centric roots can be found in wireless content-based models [21], opportunistic networking architectures [22], and delay-tolerant networks. However, these solutions are implemented as overlays on the traditional TCP/IP architecture. In contrast, NDN is a simple, clean-slate paradigm that can be implemented on top of any layer-2 technology. The main advantages of NDN in CRAHNs are as follows:
(1) Maintaining a routing path between source and destination CR nodes is not required, which eliminates the overhead of setting up routing paths and exchanging neighbor information.
(2) Assigning an IP address to each CR node is not required, which removes concerns about the shortage of IP addresses and IP-management issues such as assigning, renewing, and retrieving IP addresses to/from CR nodes in ad hoc networks.
(3) The NDN forwarding engine has a simple layered design and prevents routing loops, so unnecessary flooding is avoided.
(4) A consumer can receive contents from the closest CR node holding the same contents (i.e., any CR node can become a provider) instead of requesting the content all the way from the remote provider and maintaining connectivity with it; this reduces content download time, and CR node movements have little effect on the communication process.
(5) CR nodes can also store data through in-network caching, which can later satisfy subsequent requests for the same data and thus reduces data retrieval latency.
(6) NDN-based CRAHNs transmit packets by broadcasting or multicasting using SOP information and loop-free flooding, achieving comparatively reliable transmission (higher packet delivery ratio), since packets travel over multiple alternative paths, and higher channel utilization, since a single transmission is received by multiple users simultaneously.
4. Proposed NDN-CRAHNs Architecture
We propose a cross-layer architecture named NDN-CRAHNs for cognitive radio ad hoc networks. It is basically a higher-layer architecture, as shown in Figure 5. At the bottom are the conventional cognitive medium access control (MAC) and physical (PHY) layers, which are responsible for spectrum sensing, sharing, and mobility.

However, in this work, our focus is only on the upper layers. Replacing the traditional IP layer, we introduce a new NDN layer in the stack, in which communication is based on data names. Two types of packets, named Cognitive-Interest (C-Interest) and Cognitive-Data (C-Data), are proposed for NDN-CRAHNs. C-Interest and C-Data messages are based on the Interest and Data messages of basic NDN [15] and include an additional field, named channel information. This field carries information about channels free of PU activity, which other CR nodes can use for communication. When an intermediate node receives a C-Interest or C-Data message, it learns the PU-activity-free channel information from this field and uses it to forward the message. If no channel is free at that time, it waits for a channel to become available. Once a free channel is available, it updates the channel information and forwards the message to other nodes.
Figures 6 and 7 depict the basic structure of the C-Interest and C-Data messages, respectively. Every node in NDN-CRAHNs has two data structures, the content store (CoS) and the Unsatisfied Interest Table (UIT), for keeping records of C-Interest and C-Data packets. The proposed NDN-based skeleton has four main constituents: naming, security, caching, and the NDN-cognitive strategy. In the following sections, each component is described in detail.
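As an illustration only, the two message types could be modeled as follows; the field names are assumptions drawn from the text, and the authoritative layouts are those in Figures 6 and 7.

```python
# Illustrative sketch of the C-Interest and C-Data message fields described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CInterest:
    name: str                    # hierarchical content name, e.g. "/hongik/networking/video/v1"
    nonce: int                   # used to detect and drop duplicate C-Interests
    channel_info: Optional[int]  # PU-activity-free channel advertised to the next hop

@dataclass
class CData:
    name: str                    # name of the content chunk being returned
    payload: bytes               # the actual content data
    signature: bytes             # signature over name + payload (see Section 4.2)
    key_locator: str             # information about the key used to produce the signature
    channel_info: Optional[int]  # PU-activity-free channel advertised to the next hop
```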


4.1. Naming
Naming is an indispensable component of NDN-CRAHNs because it deals with data names. In the proposed architecture, data is retrieved based on its name rather than its location. For simplicity, the proposed scheme uses a hierarchical naming approach, which makes names human-readable, highly customizable, and expressive. Each name in NDN-CRAHNs consists of alphanumeric strings separated by "/", like a URI (e.g., hongik/networking/video/v1). The naming scheme can also be customized based on application requirements, but it must be persistent.
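The hierarchical structure can be illustrated with a small sketch; the helper names below are hypothetical, not part of the proposed architecture.

```python
# Small sketch of hierarchical, URI-like names as described above.

def name_components(name: str) -> list:
    """Split 'hongik/networking/video/v1' into its hierarchical components."""
    return [c for c in name.strip("/").split("/") if c]

def is_prefix(prefix: str, name: str) -> bool:
    """True if `prefix` covers `name` component-wise (e.g. 'hongik/networking')."""
    p, n = name_components(prefix), name_components(name)
    return n[:len(p)] == p

assert is_prefix("hongik/networking", "hongik/networking/video/v1")
assert not is_prefix("hongik/net", "hongik/networking/video/v1")
```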
4.2. Security
Security is an essential part of the proposed NDN-CRAHNs architecture. Each C-Data packet in NDN-CRAHNs carries a signature over the name and the actual data in the packet, plus information about the key used to produce the signature. With this mechanism, every node in NDN-CRAHNs can verify the binding between the content and its name. Unlike conventional CRAHNs, in NDN-CRAHNs we only need to secure the data rather than the channels themselves. The proposed architecture provides a robust security mechanism for each CR node that is independent of its location. Consider, for example, a distributed denial-of-service (DDoS) attack, in which malicious handler and agent nodes try to forward false traffic to a victim node using its location information. In NDN-CRAHNs, where communication is name-based instead of host-based, nobody knows the exact location of the destination node. As a result, a DDoS attack is difficult to mount in an NDN-CRAHN environment.
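The sketch below illustrates the name-to-data binding only; a shared-key HMAC stands in for the public-key signature an actual NDN deployment would use, and the key handling is purely illustrative.

```python
# Simplified sketch: the signature binds the content name to the payload.
import hmac, hashlib

def sign_c_data(name: str, payload: bytes, key: bytes) -> bytes:
    # Signature covers both the name and the data so the binding can be verified.
    return hmac.new(key, name.encode() + payload, hashlib.sha256).digest()

def verify_c_data(name: str, payload: bytes, signature: bytes, key: bytes) -> bool:
    expected = sign_c_data(name, payload, key)
    return hmac.compare_digest(expected, signature)

key = b"demo-key"
sig = sign_c_data("hongik/networking/video/v1", b"chunk-0", key)
assert verify_c_data("hongik/networking/video/v1", b"chunk-0", sig, key)
assert not verify_c_data("hongik/networking/video/v2", b"chunk-0", sig, key)
```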
4.3. Caching
NDN-CRAHNs natively support in-network caching, which makes them unique compared to conventional IP-based CRAHNs. This mechanism increases the availability of data in the network. In IP-based CRAHNs, to fetch the desired data, the CR node must send a request to the destination node using its location information every time. In NDN-based CRAHNs, by contrast, an intermediate CR node that has the desired data can reply to the source node, reducing the data retrieval latency. In NDN-CRAHNs, every node can store Data packets and satisfy requests as well. Owing to remarkable advances in technologies such as semiconductor integration, the storage capacity of recent communication devices has increased significantly, so storage no longer seems to be a major issue. In the case of memory overflow, the proposed architecture uses a least recently used (LRU) policy to replace data in the CoS. Other cache decision and replacement policies, as discussed in [23], can also be used in this context.
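A minimal sketch of an LRU-managed content store is given below; the capacity value and class name are illustrative assumptions.

```python
# LRU-managed content store (CoS) sketch, as described above.
from collections import OrderedDict

class ContentStore:
    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.store = OrderedDict()   # name -> data, least to most recently used

    def get(self, name):
        if name not in self.store:
            return None
        self.store.move_to_end(name)         # a hit counts as "recently used"
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:  # evict the least recently used entry
            self.store.popitem(last=False)
```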
4.4. NDN-Cognitive Strategy
This section describes the core processing of the proposed NDN-CRAHNs architecture. The NDN-cognitive strategy handles all communication functionality of the proposed architecture in the cognitive environment. This strategy component disseminates C-Interest and C-Data packets in the network by utilizing information from the underlying layers (e.g., channel information and primary user activity). It provides efficient C-Interest and C-Data forwarding and transport mechanisms in CRAHNs. The next sections discuss the two main components of the strategy module in detail.
Figure 8 shows the canonical communication process in the proposed NDN-CRAHNs, in which requester CR node D sends a C-Interest towards source node A. Intermediate CR node B receives the C-Interest packet and checks its CoS and UIT, respectively, for corresponding entries. If neither entry exists, it forwards the packet towards source node A after making an entry in its UIT. Node A sends C-Data back to node B, which stores the data in its CoS and then forwards it to node D. When forwarding both C-Interest and C-Data, every CR node also takes channel and PU activity information into account, as detailed in Section 4.4.1. If some other CR node (e.g., node C) later requests the same data, node B can satisfy the request, reducing data retrieval latency.

4.4.1. Forwarding
The NDN-cognitive strategy forwarding engine processes C-Interest and C-Data packets by taking into account the available channels and PU activity information. Unlike IP-based CRAHNs, the proposed scheme does not rely on end-to-end decisions. Instead, forwarding decisions are made hop by hop according to the availability of PU-activity-free channels. The proposed forwarding engine is based on a reactive flooding approach. When a CR node needs some data, it encapsulates the name of the data plus a signature in a C-Interest packet and sends it to all neighbor nodes. The NDN-CRAHN forwarding process selects the channel jointly with the C-Interest forwarding.
C-Interest Processing in NDN-CRAHNs. Handling of the C-Interest message can be explained in the following steps.
Step 1. When a CR node receives a message, it checks whether it is a C-Interest.
Step 2. If it is a C-Interest, the node first checks its nonce value. If the message is a duplicate, it is discarded.
Step 3. Next, the node looks up its CoS for the desired data. If the data is found, the node sends it back to the source.
Step 4. If the data is not in the CoS, the node checks its UIT for an existing entry.
Step 4.1. If an entry is found in the UIT, some other CR node has already requested the same data. Since the C-Interest does not need to be forwarded again, it is discarded; when the desired data comes back, it is forwarded to all requesting nodes.
Step 4.2. Otherwise, the node adds a new UIT entry and checks channel availability using the underlying layers' information.
Step 4.2.1. If there is no free channel, the node waits for one to become available.
Step 4.2.2. When a free channel is available, the node updates the channel information in the C-Interest message and forwards it to the next node.
A flow chart of handling C-Interest is shown in Figure 9.
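To make the flow concrete, the sketch below follows these steps in Python. The node object, its seen_nonces set, the free_channel()/wait_for_channel() spectrum interface, and the broadcast() call are hypothetical placeholders for the CoS, UIT, and cognitive MAC/PHY layers; this is an illustration of the described procedure, not a reference implementation.

```python
# Sketch of the C-Interest handling steps above.
def on_c_interest(node, msg, in_face):
    if msg.nonce in node.seen_nonces:          # Step 2: drop duplicate C-Interests
        return
    node.seen_nonces.add(msg.nonce)

    data = node.cos.get(msg.name)              # Step 3: content store lookup
    if data is not None:
        node.send_c_data(msg.name, data, in_face)
        return

    if msg.name in node.uit:                   # Step 4.1: already pending, aggregate faces
        node.uit[msg.name].add(in_face)
        return

    node.uit[msg.name] = {in_face}             # Step 4.2: record the new request

    channel = node.free_channel()              # Steps 4.2.1-4.2.2: wait for a free,
    while channel is None:                     # PU-activity-free channel
        node.wait_for_channel()
        channel = node.free_channel()

    msg.channel_info = channel                 # advertise the free channel downstream
    node.broadcast(msg, channel)
```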

C-Data Processing in NDN-CRAHNs. When a CR node receives a C-Data message, it follows the process below.
Step 1. When a CR node receives a message, it checks whether it is a C-Data.
Step 2. If it is a C-Data, the node looks up its UIT for any entry corresponding to that C-Data.
Step 3. If there is no matching entry, the data is considered unsolicited and is discarded.
Step 4. Otherwise, if the UIT has a corresponding entry, the node saves the data in its CoS.
Step 5. Next, the node consults the UIT entry to find the pending source(s) that previously requested the same data.
Step 6. Once these pending C-Interests are satisfied, the node removes the corresponding entry from the UIT.
Step 7. Next, the node checks channel availability using the lower-layer information.
Step 7.1. If there is no free channel, the node waits for one to become available.
Step 7.2. When a free channel is available, the node updates the channel information in the C-Data message and forwards it to the next node.
A flow chart of handling C-Data is shown in Figure 10.
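The sketch below mirrors the C-Interest sketch for these C-Data steps; free_channel(), wait_for_channel(), and send() are again hypothetical stand-ins for the cognitive MAC/PHY layers and the wireless channel.

```python
# Sketch of the C-Data handling steps above.
def on_c_data(node, msg):
    faces = node.uit.get(msg.name)             # Step 2: look for a matching UIT entry
    if not faces:                              # Step 3: unsolicited data is dropped
        return

    node.cos.put(msg.name, msg.payload)        # Step 4: cache the data in the CoS
    del node.uit[msg.name]                     # Steps 5-6: pending request satisfied

    channel = node.free_channel()              # Steps 7.1-7.2: wait for a free channel
    while channel is None:
        node.wait_for_channel()
        channel = node.free_channel()

    msg.channel_info = channel                 # advertise the free channel downstream
    for face in faces:                         # return the data towards the requester(s)
        node.send(msg, face, channel)
```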

4.4.2. Transport
NDN-CRAHNs also provide some transport-layer functionality by adopting mechanisms inspired by the transmission control protocol (TCP) of the traditional IP-based architecture. Due to the intermittent and dynamic nature of wireless channels, C-Interest or C-Data messages may be lost during transmission, which degrades network performance. To cope with this, if the C-Data is not received by a CR node within a specific time, it is considered lost, and the node resends the same C-Interest message if it still needs that C-Data. The waiting time is estimated based on a retransmission timeout (RTO) value. In NDN-CRAHNs, as soon as a CR node receives the requested data, it immediately sends a C-Interest for the next part of the content without waiting for a predefined interval. This increases the efficiency of the proposed NDN-CRAHNs by reducing the overall content retrieval time.
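The consumer-side retransmission behavior can be sketched as follows: send a C-Interest, wait up to the RTO for the matching C-Data, and retransmit on timeout, pipelining the next chunk as soon as one arrives. The RTO value, chunk naming, and the node's send/wait interface are illustrative assumptions.

```python
# Sketch of RTO-driven C-Interest retransmission at the consumer.
def fetch_content(node, base_name: str, num_chunks: int, rto: float = 1.0):
    content = []
    for i in range(num_chunks):
        chunk_name = f"{base_name}/chunk{i}"
        while True:
            node.send_c_interest(chunk_name)                       # request the next chunk
            data = node.wait_for_c_data(chunk_name, timeout=rto)   # block up to RTO
            if data is not None:                                   # received before RTO expired
                content.append(data)
                break                                              # pipeline the next chunk immediately
            # timeout: the C-Interest or C-Data was lost, retransmit the C-Interest
    return b"".join(content)
```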
5. Performance Evaluation
Simulations are performed using Network Simulator 3 (NS-3) [24], and the simulation parameters are summarized in Table 1. We compare the performance of the proposed NDN-CRAHNs with IP-based CRAHNs under the assumption that all nodes in the network already know the channel information and primary user activity through a centralized spectrum database [25]. We evaluate both approaches on a grid topology as well as a random topology of one hundred nodes, in which CR nodes move at low speed (2 m/s) according to a random walk mobility model [26]. We do not consider high mobility in our evaluation, because under high mobility it is difficult for CR nodes to find free channels and to establish paths (in the case of a routing protocol), especially in the presence of PUs. Reference papers such as [27, 28] show that a low data rate (e.g., 6 Mbps) provides better airtime utilization in a cognitive environment; they further argue that a low data rate is preferable when traffic is busy and PU idle durations are short. For this reason, we use a 6 Mbps data rate in our evaluation. AODV is used as the routing protocol in IP-based CRAHNs. Although many other options such as proactive or geolocation-based routing protocols are available, these protocols require periodic information exchange among CR nodes, which causes extra overhead in resource-constrained cognitive radio ad hoc networks. In contrast, a reactive routing protocol such as AODV works on demand: a node requests a route only when it needs data. This reduces the extra overhead in the network and is considered more appropriate in a cognitive ad hoc environment, especially in the presence of PUs. Each node can store a limited number of packets in its content store, and each simulation runs for a fixed duration (see Table 1); the results are averaged over ten runs. Five different content sizes are used in these simulations, consisting of 100 to 500 packets, respectively, with each packet being 1024 bytes.
5.1. Simulation Results
In Figure 11, using the grid topology, we compare the average end-to-end delay of NDN-CRAHNs and IP-based CRAHNs as a function of the content size. As shown in Figure 11, the average end-to-end delay of both schemes increases as the number of packets increases. However, the rate of increase in NDN-CRAHNs is comparatively low. In NDN-CRAHNs, there is no path establishment mechanism among CR nodes, and a node does not need to maintain any session or end-to-end connectivity; as a result, it performs better in the grid wireless environment. In IP-based CRAHNs, on the other hand, nodes must maintain end-to-end connectivity to forward data, which is not suitable in a cognitive environment with intermittent and dynamic channel conditions. The end-to-end delay of IP-based CRAHNs is high because every node must maintain a path to the destination in order to communicate; if a path breaks, the node must perform rerouting, which increases the data retrieval time. In Figure 11, the delay of conventional CRAHNs is linearly proportional to the content size. The reason is as follows: the topology is a grid and no mobility is assumed in this simulation case, so once the routing path is set at the initial state, few factors vary the delay to deliver a Data packet to a consumer. The only factor that increases the delay is the content size; hence, the delay grows linearly with content size.

Figure 12 shows the performance in terms of packet delivery ratio as a function of the content size. The packet delivery ratio of NDN-CRAHNs is 100% because there is a one-to-one flow mechanism between sender and receiver: as soon as the requested data is received, the next Interest is sent, and an Interest retransmission mechanism also manages the Interest flow in the network. In the CRAHN case, on the other hand, there is no such one-to-one transmission mechanism. Once the path is established, data is transmitted immediately at a predefined interval, increasing the chance of packet collisions. As a result, IP-based CRAHNs suffer more packet loss during transmission.

We also evaluate the performance of the NDN-CRAHN and IP-based schemes under a random topology, as shown in Figure 13. The delay of both schemes increases, but the rate of increase in the NDN-CRAHN scenario is comparatively low. In the random case, where CR nodes can move in any direction, the NDN-CRAHN scheme is more suitable because no path connectivity has to be maintained for communication. In addition, mobility does not affect NDN-CRAHN performance much, because communication is based on content names instead of locations. In IP-based CRAHNs, on the other hand, if a path breaks due to channel error or node mobility, the nodes must re-establish a path to communicate.

In Figure 14, we analyze the packet delivery ratio of NDN-CRAHNs and IP-based CRAHNs in the random topology case. The packet delivery ratio of NDN-CRAHNs is 100% because its communication mechanism does not rely on end-to-end connections. In IP-based CRAHNs, by contrast, path establishment and maintenance in a random environment are difficult due to node mobility and uncertain channel behavior. Thus, IP-based CRAHNs do not perform as well as NDN-CRAHNs.

With NDN-CRAHNs, we achieve better results than the IP-based approach in terms of average end-to-end delay and packet delivery ratio. The reason is that, in NDN-CRAHNs, CR nodes retrieve data from the destination using a multicasting mechanism. Unlike IP-based routing, content data travels over different paths from destination to source CR nodes and does not follow a single path. In parallel, every CR node has in-network caching functionality: if a Data packet is lost, it can be recovered from a neighbor node instead of requesting the content again from the destination. Furthermore, CR node mobility does not affect the communication process of NDN-CRAHNs. Consequently, this approach enhances network performance in terms of average end-to-end delay and packet delivery ratio by increasing the availability of data in the network.
Figure 15 shows the number of C-Interest retransmissions in NDN-CRAHNs for both the grid and random scenarios. In the grid case, C-Interest retransmissions are higher than in the random case. The reason is that, in the grid topology, more CR nodes participate in communication and more packets are in the network, which increases the probability of packet collisions. In the random case, on the other hand, not all nodes necessarily take part in communication; traffic in the network is therefore lower than in the grid, and the chance of packet collisions is smaller. In addition, in the grid scenario nodes are placed at fixed distances, whereas in the random case nodes are distributed randomly. If the distance between source and destination is shorter (which is more likely in the random scenario), nodes can access data quickly because, in NDN-CRAHNs, nodes do not need to establish paths. This shows that NDN-CRAHNs are a suitable approach in a cognitive radio ad hoc environment where nodes are randomly distributed.

6. Open Challenges in NDN-CRAHNs
Naming Issues. Currently, there is no clear consensus on the naming scheme for NDN-CRAHNs, although the proposed architecture uses a hierarchy-based approach, which is more human-readable and easier to aggregate. A flat naming scheme is also a good candidate, because it is administratively easier to handle and does not require longest-prefix matching. In addition, more efficient naming definitions are needed that incorporate both naming and cognitive application constraints.
Caching Issues. In NDN-CRAHNs, several types of traffic compete with each other for the same available caching space. Procedures are required to address problems such as where and for how long to cache data in NDN-CRAHNs. Various factors, such as priority and the type of data, affect how long data should be cached in a cognitive environment. Therefore, efficient cache replacement and cache decision policies are needed to achieve better performance.
Security Issues. Efficient key management mechanisms are required that work effectively in infrastructureless environments such as CRAHNs. Devices in NDN-based CRAHNs are resource-constrained; thus, signature and authentication mechanisms that are efficient in terms of time and energy consumption are needed. In addition, new schemes that efficiently compute digital signatures in a cognitive ad hoc environment need to be investigated.
Forwarding Issues. In NDN-based CRAHNs, not all channels are available, due to PU activity; channels may be available for unequal periods of time or have different bandwidths and dynamic propagation characteristics. Efficient forwarding schemes are needed that take all of these factors into account when sending C-Interest or C-Data messages in the network. Furthermore, new strategies are required to control the flooding of C-Interest and C-Data messages in NDN-based CRAHNs.
Transport Issues. Due to the dynamic topology and high node mobility, NDN-based CRAHNs face challenges in limiting the rate of C-Interest packets and accurately estimating the retransmission interval. When nodes are in sensing mode, they are unable to forward messages, which makes accurate RTT estimation difficult. To ensure reliability and congestion control in the network, good algorithms are needed for C-Interest rate limiting and accurate RTT estimation.
7. Conclusions
A new cross-layer fine-grained architecture named NDN-CRAHNs has been proposed for cognitive radio ad hoc networks. NDN-CRAHNs have two powerful characteristics. First, CR nodes do not need to maintain end-to-end path connectivity, especially in the presence of primary user activity and intermittent wireless channels. Second, CR nodes have a caching mechanism to store C-Data packets, which reduces data retrieval latency for other CR nodes that request the same data. Detailed simulation results show that the NDN-CRAHN approach offers clear advantages in a cognitive ad hoc environment compared to traditional IP-based CRAHNs.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was supported in part by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2013R1A1A2005692) and by the Ministry of Education (MOE) and National Research Foundation of Korea (NRF) through the Human Resource Training Project for Regional Innovation (no. 2014H1C1A1066943).