- Sanjoy Paul, Roy Yates, Dipankar Raychaudhuri and Jim Kurose. The Cache-and-Forward Network Architecture for Efficient Mobile Content Delivery Services in the Future Internet. In ITU NGN Conference, 2008.
- Lijun Dong, Yanyong Zhang, Sanjoy Paul and Dipankar Raychaudhuri. Efficient Content Dissemination in a Cache-and-Forward Network. In submission.
- Lijun Dong, Hongbo Liu, Yanyong Zhang, Sanjoy Paul and Dipankar Raychaudhuri. On the Cache-and-Forward Network Architecture. In submission.
CNF Protocol Architecture
Existing Internet protocols (e.g., TCP/IP) are not well suited to mobile content services because they were designed under very different assumptions, both in terms of service requirements and technology constraints. In particular, the TCP model assumes a continuous source-to-destination path and is based on the well-known “end-to-end principle,” which argues for keeping in-network functions to a minimum and pushing service-specific complexity to the end-points at the edge of the network. While TCP has served remarkably well for the first 25 years of the Internet’s operation, the end-to-end principle has significant limitations when dealing with mobile users who experience intermittent and/or unreliable access over wireless channels. Moreover, the connection-oriented TCP/IP model was originally designed to support point-to-point data services rather than multipoint content dissemination. We also note that many of the technology assumptions behind the end-to-end principle may no longer apply. In particular, the cost of semiconductor memory (currently ~$10/GB) has dropped by about 5-6 orders of magnitude since the Internet was first designed, while link and CPU speeds have increased by 3-4 orders of magnitude to ~100 Mbps-1 Gbps and 1-10 GIPS, respectively. These considerations argue for a back-to-basics reconsideration of the end-to-end networking model, taking into account emerging requirements for large-scale mobile content services together with the increased capabilities of today’s core technologies.
The “cache-and-forward” network architecture proposed here exploits these dramatic reductions in storage and processing costs to design a network that directly addresses the mobile content delivery problem. We observe here that TCP/IP does not work well for mobile devices because wireless links tend to have variable error rates, and because these devices may occasionally become disconnected due to lack of coverage. These problems are further compounded by the emergence of multi-hop wireless access networks such as ad hoc peer-to-peer, metropolitan area mesh and sensor networks, in which the probability of at least one bad radio link or temporary disconnection tends to be higher than in single hop networks. On the other hand, these emerging peer-to-peer and multi-hop wireless networks are valuable for opportunistically delivering high-speed services and improving the overall service economics, and should thus be supported by any new protocol for mobile content. Earlier work on the wireless “Infostations” concept demonstrated the benefits of opportunistic transport in mobile service scenarios – at that time, this was envisioned as an overlay service typically over a single wireless hop without requiring any major changes to the networking protocol itself. Disconnected operation is also associated with delay-tolerant networks (DTN) originally intended for robust communication in tactical or vehicular environments, but we feel that this can be an important ingredient for mainstream mobile content delivery services as well. The key idea is to facilitate opportunistic transport on a hop-by-hop basis rather than end-to-end streaming of data as in TCP/IP. Such a hop-by-hop transport model implies large in-network storage of content files as they make their way through the network, made possible by remarkable recent reductions in the cost of semiconductor storage. 
In-network storage also enables the use of content caching and content-aware routing [12,13] as a basic network capability rather than as an external overlay service as currently implemented in the Internet.
CNF is based on the concept of store-and-forward routers with large storage, providing for opportunistic delivery to occasionally disconnected mobile users and for in-network caching of content. The proposed CNF protocol uses reliable hop-by-hop transfer of large data files between CNF routers in place of an end-to-end transport protocol like TCP. This approach makes it possible to serve mobile users with intermittent connectivity, while also mitigating self-interference problems which arise in multi-hop wireless scenarios. Hop-by-hop transport is similarly useful in wired networks where router storage can help to smooth out link congestion bottlenecks which arise in TCP/IP networks. A second key feature of the CNF protocol is the integration of address-based and content-based routing to support various content delivery modes that take advantage of in-network storage. During this reporting period, we have made significant progress in finalizing the details of the CNF protocol architecture, and have validated many parts of the protocol design with ns2 and other simulation models. The next step will be to move towards a real-time implementation on testbeds such as PlanetLab/VINI and ORBIT.
An architectural overview of CNF was presented at the ITU’s Next Generation Network (NGN) Conference held in Geneva, May 2008 in a paper entitled “The Cache-and-Forward Network Architecture for Efficient Mobile Content Delivery Services in the Future Internet”.
Figure 1.1: Cache and Forward (CNF) Network Architecture
A conceptual view of the proposed CNF network and its main elements is given in Fig. 1.1 above.
The cache-and-forward architecture represents a set of new protocols that can be implemented either as a “clean-slate” implementation or on top of IP. The main concepts of the architecture are listed below:
Post Office (PO): The CNF architecture is based on the model of a postal network designed to transport large objects and provide a range of delivery services. Keeping in mind that the sender and/or receiver of an object may be mobile and may not be connected to the network, we introduce the concept of “Post Office” (PO) which serves as an indirection (rendezvous) point for senders and receivers. A sender deposits the object to be delivered in its PO and the network routes it to the receiver’s PO, which holds the object until it is delivered to the final destination. Each sender and receiver may have multiple POs, where each PO is associated with a point of attachment in the wired network for a mobile endpoint (sender/receiver).
Cache and Forward (CNF) Router: The CNF Router is a network element with persistent storage and is responsible for routing packages within the CNF network. Packages are forwarded hop-by-hop (where a hop refers to a CNF hop and not an IP hop) from the sender’s PO towards the receiver’s PO using forwarding tables updated by a routing protocol running either in the background (proactive) or on demand (reactive).
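The store-and-forward behavior of a CNF router can be sketched as follows. This is a minimal illustration, not the CNF implementation; the class and table names are hypothetical.

```python
# Illustrative sketch of hop-by-hop package forwarding at a CNF router.
# Names (CNFRouter, forwarding_table) are assumptions, not from the CNF spec.

class CNFRouter:
    def __init__(self, name):
        self.name = name
        self.storage = {}           # persistent package store, keyed by package id
        self.forwarding_table = {}  # destination PO -> next-hop router name

    def receive(self, pkg_id, package):
        # Store the package before forwarding (store-and-forward model).
        self.storage[pkg_id] = package

    def next_hop(self, dest_po):
        # Consult the forwarding table maintained by the routing protocol
        # (proactive or reactive updates happen outside this sketch).
        return self.forwarding_table.get(dest_po)

# Two CNF hops between a sender's PO and a receiver's PO.
r1 = CNFRouter("r1")
r1.forwarding_table["po-receiver"] = "r2"
r1.receive("pkg-1", b"content chunk")
```

Note that a hop here is a CNF hop, so `next_hop` may hide an entire IP path between two adjacent CNF routers.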
Cache and Carry (CNC) Router: The CNC Router is a network element that has persistent storage exactly as in a CNF Router, but is additionally mobile. Thus a CNC router can pick up a package from a CNF router, another CNC router, or a PO and carry it along. The CNC router may deliver the package to the intended receiver or to another CNC router that has a better chance of delivering the package to the desired receiver.
Content Identifier (CID): To make content a first-class entity in the network, we introduce the notion of persistent and globally unique content identifiers. Thus, if a content object is stored in multiple locations within the CNF network, it is referred to by the same content identifier. The notion of a CID is in contrast to identifiers in the Internet, where content is identified by a URL whose prefix consists of a string identifying the location of the content. CNF endpoints request content from the network using content identifiers.
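One way to obtain persistent, location-independent identifiers is to derive them from the content itself. The sketch below assumes a flat SHA-256-based namespace with a `cnf:` prefix; the actual CID format is not specified by the text.

```python
# Sketch of a location-independent content identifier (CID), assuming a
# hash-derived flat namespace (an illustration, not the CNF CID scheme).
import hashlib

def make_cid(content: bytes) -> str:
    # The same bytes yield the same CID wherever the content is cached,
    # unlike a URL, whose prefix names a storage location.
    return "cnf:" + hashlib.sha256(content).hexdigest()

# Identical copies cached at different CNF routers share one identifier.
cid_a = make_cid(b"report.pdf contents")
cid_b = make_cid(b"report.pdf contents")
```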
Content Discovery: Since copies of the same content can be cached in multiple CNF routers in the network, discovering the CNF router with the desired content that is “closest” to the requesting endpoint must be designed into the architecture. We discuss this in more detail in the next section.
Type of Service: In order to differentiate between packages with different service delivery requirements (high priority, medium priority, low priority), a Type of Service (ToS) byte will be used in the package header. The ToS byte can be used in the cache replacement policy and the delivery schedule of packages at the CNF routers.
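A one-byte ToS field in the package header might be laid out as below. The field layout and priority encoding are illustrative assumptions, not the CNF wire format.

```python
# Hedged sketch of a package header carrying a one-byte Type of Service
# field; the layout is an assumed illustration, not the CNF wire format.
import struct

TOS_HIGH, TOS_MEDIUM, TOS_LOW = 0, 1, 2

def pack_header(tos: int, payload_len: int) -> bytes:
    # 1-byte ToS followed by a 4-byte payload length, network byte order.
    return struct.pack("!BI", tos, payload_len)

def unpack_header(hdr: bytes):
    # Returns (tos, payload_len); a router can use tos to order its
    # delivery schedule and bias its cache replacement policy.
    return struct.unpack("!BI", hdr)

hdr = pack_header(TOS_HIGH, 100_000_000)
```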
Multiple delivery mechanisms: A package destined for a receiver would be first delivered to, and stored in, the receiver’s PO. There are several ways in which the package can be delivered from the PO to the receiver:
– A PO can inform the receiver that there is a package waiting for it at the PO and it (the receiver) should arrange to pick it up. The receiver can pick up the package when in range of that PO. Otherwise, it may ask its new PO and/or a CNC router to pick up the package on its behalf.
– A receiver can poll the PO to find out whether a package is waiting for pickup. If one is and the receiver is within range of the PO, it can pick up the package itself. Otherwise, it may ask its new PO and/or a CNC router to pick up the package on its behalf.
– A PO can proactively push the package to the receiver either directly or via CNC routers.
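The three delivery modes above can be sketched with a minimal PO model; the class and method names are hypothetical.

```python
# Minimal sketch of the three PO-to-receiver delivery modes; names are
# illustrative assumptions, not part of the CNF specification.

class PostOffice:
    def __init__(self):
        self.held = {}  # receiver name -> list of waiting packages

    def deposit(self, receiver, package):
        self.held.setdefault(receiver, []).append(package)

    def notify(self, receiver):
        # Mode 1: tell the receiver a package is waiting for pickup.
        return len(self.held.get(receiver, [])) > 0

    def poll(self, receiver):
        # Mode 2: the receiver asks whether anything is waiting.
        return list(self.held.get(receiver, []))

    def push(self, receiver, deliver):
        # Mode 3: proactively push packages via a delivery callback
        # (standing in for a direct link or a CNC router).
        for pkg in self.held.pop(receiver, []):
            deliver(pkg)

po = PostOffice()
po.deposit("mn-1", b"package")
delivered = []
po.push("mn-1", delivered.append)
```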
CNF Protocol Details
Figure 1.2 shows the overall CNF protocol stack (with IP being used as the base layer in this realization). Applications send large files of arbitrary size down to the transport layer, which segments them into moderately sized chunks of ~10-100 MB. The network layer attaches a header to each chunk, and the combination is called a package. A package is the basic unit of transport through the CNF network layer.
Figure 1.2. Cache-and-Forward Protocol Stack
Link Layer: A link in the CNF architecture is a logical link between two adjacent CNF nodes, where a CNF node could be a CNF router, a CNC router, or a CNF endpoint. For example, if two CNF routers sit on either side of an optical core network, the link between them spans the entire core network. On the other hand, if a CNF endpoint is connected to a CNF router (an Access Point with persistent storage) using WiFi, the link spans just the wireless hop. The Link Protocol has two components: the Link Session Protocol (LSP) and the Link Transport Protocol (LTP). The LSP is used to negotiate the type of LTP and the corresponding parameters; the choice of LTP depends on the characteristics of the link.
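The negotiation step can be sketched as a simple selection rule. The specific rule and parameter names below are assumptions for illustration; the text does not define how LSP chooses an LTP.

```python
# Sketch of LSP negotiation choosing a Link Transport Protocol suited to
# the link; the selection rule is an assumed illustration, not the spec.

def negotiate_ltp(link):
    # A lossy wireless hop gets a per-hop retransmission (ARQ) style LTP
    # with a small window; a clean high-speed link gets bulk transfer.
    if link["wireless"] and link["loss_rate"] > 0.01:
        return {"ltp": "arq", "window": 8}
    return {"ltp": "bulk", "window": 64}

mesh_hop = negotiate_ltp({"wireless": True, "loss_rate": 0.05})
core_link = negotiate_ltp({"wireless": False, "loss_rate": 0.0})
```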
Network Layer: The network layer is responsible for content discovery and for routing content towards the destination after it has been located in the network. The first part is addressed by content-aware routing based on a content identifier (CID), while the second part is addressed by conventional (IP) address-based routing. In the former mode, CNF routers exchange information about how to reach a given content file (i.e., a CID) rather than how to reach an “address” as in traditional routing protocols. Based on these exchanges, CNF routers set up query forwarding tables with CIDs as destinations. A CNF router, on receiving a content query for a given CID, checks whether it has the requested content; if it does, it returns the content using conventional (IP) address-based routing. If it does not, it consults its Query Forwarding Table to determine the next hop and forwards the request towards the CNF router that has the content.
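The cache-check-then-forward logic for a query can be sketched as follows; the data-structure names are illustrative, not from the specification.

```python
# Sketch of CID-based query handling: answer from the local cache if the
# router holds the content, otherwise forward the query using the Query
# Forwarding Table. Names are illustrative assumptions.

def handle_query(router, cid):
    """Return ('content', data) from the local cache, or
    ('forward', next_hop) looked up in the Query Forwarding Table."""
    if cid in router["cache"]:
        # Content found locally: it would be returned to the requester
        # via conventional address-based routing.
        return ("content", router["cache"][cid])
    return ("forward", router["query_table"].get(cid))

router = {
    "cache": {"cnf:abc": b"cached file"},
    "query_table": {"cnf:def": "router-7"},
}
hit = handle_query(router, "cnf:abc")
miss = handle_query(router, "cnf:def")
```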
Figure 1.3: Routing of Queries and Content in the CNF network
As the query is routed through the CNF network, the content will be found either at an intermediate CNF router that has a cached copy or, in the worst case, at the original source of the content. When the content is found, the next hop for forwarding the content is determined in two steps. First, on a slow timescale, a routing protocol updates the Content Forwarding Table at each CNF router; then, at the time of forwarding a package, the CNF router queries the next-hop CNF router to see if it is prepared to accept the package. If the next-hop CNF router declines (due to bandwidth or storage limits), the forwarding CNF router chooses a different next hop on the fly. Query and content routing are shown schematically in Figure 1.3 above.
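The two-step decision can be sketched as follows: routing supplies an ordered candidate list on a slow timescale, and a per-package query lets a loaded next hop decline. The accept predicate here is a stand-in for the real per-package handshake.

```python
# Sketch of the two-step content forwarding decision. The Content
# Forwarding Table (slow timescale) yields candidate next hops in
# preference order; will_accept() stands in for the per-package query
# that lets a next hop decline due to bandwidth or storage limits.

def choose_next_hop(candidates, will_accept):
    for hop in candidates:
        if will_accept(hop):
            return hop
    # No hop will accept right now: hold the package in local storage
    # and retry later (possible because CNF routers have large storage).
    return None

busy = {"r2"}  # r2 declines due to storage limits
hop = choose_next_hop(["r2", "r3"], lambda h: h not in busy)
# r2 declines, so the router falls back to r3 on the fly.
```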
Transport Layer: The Transport Protocol (TP) runs at the endpoints, but is simpler than TCP because most of the complexity, including congestion control and error control, is embedded in the Link and Network layer protocols of the CNF architecture. Moreover, in view of possible disconnection, the end-to-end message exchange in TP can take place over a long period of time (e.g., hours), much longer than the sub-second end-to-end round-trip time in TCP. One function of CNF transport is to fragment very large files (tens of GB) into smaller chunks (~100 MB-1 GB) at the original source before transporting them through the CNF network. Fragments are represented by a tuple [CID, Offset], where the CID identifies the content the fragment belongs to and the Offset gives the location of the fragment relative to the beginning of the file. The TP at the final destination reassembles the fragments into the original large file. If it detects gaps, it can request retransmission of the missing fragment(s) from the network (as opposed to from the end host, as in TCP), and any CNF router with the desired fragment(s) may provide the retransmission. Depending on the type of service requested by the application, there may also be an end-to-end file delivery acknowledgement.
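The [CID, Offset] fragmentation and gap-detecting reassembly can be sketched as below. Sizes are scaled down for the example, and the function names are hypothetical.

```python
# Sketch of CNF transport fragmentation and reassembly. Fragments are
# keyed by (CID, offset) as in the text; byte sizes are scaled down for
# illustration, and the function names are assumptions.

def fragment(cid, data, chunk_size):
    # Split a file into fragments identified by [CID, Offset].
    return {(cid, off): data[off:off + chunk_size]
            for off in range(0, len(data), chunk_size)}

def reassemble(cid, fragments, total_len, chunk_size):
    """Rebuild the file at the destination; return (data, missing_offsets).
    Missing offsets would be re-requested from the network, and any CNF
    router holding them may serve the retransmission."""
    missing = [off for off in range(0, total_len, chunk_size)
               if (cid, off) not in fragments]
    if missing:
        return None, missing
    data = b"".join(fragments[(cid, off)]
                    for off in range(0, total_len, chunk_size))
    return data, []

frags = fragment("cnf:abc", b"0123456789", 4)
del frags[("cnf:abc", 4)]  # simulate a fragment lost in transit
data, missing = reassemble("cnf:abc", frags, 10, 4)
# data is None and missing == [4]: the gap triggers a retransmit request.
```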
Name Resolution Service (NRS): The main purpose of the Name Resolution Service (NRS) is to map the name of an endpoint to its corresponding POs. The CNF architecture is independent of the style of naming an endpoint, in that an endpoint might be identified by a handle or URL (email@example.com), a role (fireman, police officer, etc.), or a name of local relevance (Jim’s laptop, Sue’s cellphone). Late binding is used to resolve the name of an endpoint to the address of its PO. Keeping in mind the address-format-agnostic principle of CNF, the address of a PO could be as simple as an IP address, a DTN address with a global and a local component, or some other type of address. POs could periodically send out advertisements, or an endpoint could send out solicitations for a PO whenever it moves to a new area or becomes active after a long period of inactivity.
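Late binding of an endpoint name to its current POs might look like the following sketch; the class, names, and address format are illustrative assumptions.

```python
# Sketch of NRS late binding: an endpoint name maps to its current set of
# Post Office addresses, re-resolved at delivery time as the endpoint
# moves. Names and the "po://" address format are illustrative.

class NameResolutionService:
    def __init__(self):
        self.bindings = {}  # endpoint name -> list of PO addresses

    def register(self, name, po_addr):
        # Driven by PO advertisements or endpoint solicitations.
        self.bindings.setdefault(name, []).append(po_addr)

    def resolve(self, name):
        # Late binding: return the POs known *now*, not at send time.
        return list(self.bindings.get(name, []))

nrs = NameResolutionService()
nrs.register("jims-laptop", "po://campus-gw")
nrs.register("jims-laptop", "po://home-ap")  # endpoint moved: a second PO
```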
File Name Resolution Service (FNRS): The main purpose of the File Name Resolution Service (FNRS) is to map a CID to corresponding attributes of the content. A possible implementation of the FNRS would be the handle system. The attributes corresponding to a CID would consist of a variety of information pertinent to the content, such as Content Hash, Content Creator, Content Access Rights, etc. It is conceivable that for popular content, an attribute may also consist of a list of CNF routers with a cached copy of the content. The Content Hash would be digitally signed by the Content Creator to establish the authenticity of the content, and the Content Access Rights would implement DRM policies.
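An FNRS attribute record and the hash-based authenticity check might look like the sketch below. Verification of the creator's digital signature over the hash is elided, and all field names are illustrative.

```python
# Sketch of an FNRS attribute record and an authenticity check against
# the Content Hash. Signature verification with the Content Creator's
# key is elided; field names are illustrative assumptions.
import hashlib

def fnrs_record(cid, content, creator, rights):
    return {
        "cid": cid,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "creator": creator,       # would digitally sign content_hash
        "access_rights": rights,  # would drive DRM policy enforcement
        "cached_at": [],          # optional: CNF routers holding copies
    }

def verify(record, content):
    # A receiver recomputes the hash over the delivered bytes and
    # compares it with the (creator-signed) Content Hash attribute.
    return hashlib.sha256(content).hexdigest() == record["content_hash"]

rec = fnrs_record("cnf:abc", b"movie bytes", "studio", "view-only")
```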
Figure 1.4: End-to-End Timing Diagram for an Example CNF Network Delivery
Protocol Flow Diagram: In order to provide an intuitive feel for how the CNF architecture works, we present an end-to-end protocol timing diagram in Figure 1.4. In the diagram, MN = Mobile Node, PO = Post Office, NRS = Name Resolution Server, and CNF = Cache and Forward Router. First, the source/sender (which may be a mobile node) drops a package at the sender’s PO. The sender’s PO uses the Name Resolution Service (NRS) to retrieve the Post Office Descriptors (PODs) for the final destination (which may also be a mobile node). Once the destination’s PO is known, the next hop is determined at the PO and the package is forwarded towards the next-hop CNF router. Each CNF router independently determines the next hop and forwards the package towards the destination’s PO. Note that each CNF router and the destination’s PO along the route generate two acknowledgement messages: (1) ACK, a notification to the previous CNF router that it has received the package, and (2) Package ACK, a notification to the sender’s PO that it has received the package. Thus the “Package ACK” tracks the progress of the package along the route to the destination, while the “ACK” can be used to flush the buffer at the previous CNF router if needed. Although we show “Delete Package” at each CNF router after the reception of an ACK, this operation is “optional” in that a node can cache the package for future use. Once the package reaches the destination’s PO, it is cached there until the destination MN checks with the PO and retrieves it.
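The dual-acknowledgement scheme above can be sketched by walking a package along a path and recording the messages each hop would emit. All names are illustrative.

```python
# Sketch of the dual-acknowledgement scheme of Figure 1.4: at each hop
# the receiving node sends an ACK to the previous node (permitting an
# optional buffer flush there) and a Package ACK to the sender's PO
# (tracking end-to-end progress). Names are illustrative assumptions.

def forward_along_path(path, package):
    # path[0] is the sender's PO; path[-1] is the destination's PO.
    hop_acks, package_acks = [], []
    for prev_node, node in zip(path, path[1:]):
        # node has received the package from prev_node...
        hop_acks.append((node, "ACK", prev_node))           # to previous hop
        package_acks.append((node, "PackageACK", path[0]))  # to sender's PO
    return hop_acks, package_acks

path = ["sender-po", "cnf-1", "cnf-2", "receiver-po"]
hop_acks, pkg_acks = forward_along_path(path, b"pkg")
# Every intermediate router and the receiver's PO reports progress back
# to the sender's PO via a Package ACK.
```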
Future Work Plan
The goals for the third year of the project include:
- Completion of a detailed version 1.0 specification of the CNF protocol, including refinements to improve mobility support and scalability of content addressing/routing.
- Prototype validation of a baseline CNF protocol implemented on large-scale testbeds such as PlanetLab/VINI and ORBIT.
- Continued discussion with other groups working on wireless and DTN related projects, potentially leading to a converged “phase II” architecture.
Prof. Dipankar Raychaudhuri
ray (AT) winlab (DOT) rutgers (DOT) edu
Prof. Roy Yates
ryates (AT) winlab (DOT) rutgers (DOT) edu
Prof. Yanyong Zhang
yyzhang (AT) winlab (DOT) rutgers (DOT) edu
Lijun Dong
lijdong (AT) winlab (DOT) rutgers (DOT) edu