References:

[1] Sanjoy Paul, Roy Yates, Dipankar Raychaudhuri, and Jim Kurose, “The Cache-And-Forward Network Architecture for Efficient Mobile Content Delivery Services in the Future Internet,” Proceedings of the First ITU-T Kaleidoscope Academic Conference on Innovations in NGN: Future Network and Services, 2008.

[2] Lijun Dong, Hongbo Liu, Yanyong Zhang, Sanjoy Paul and Dipankar Raychaudhuri, “On the Cache-and-Forward Network Architecture,” ICC, 2009.

[3] Lijun Dong, Yanyong Zhang, Dipankar Raychaudhuri, and Sanjoy Paul, “Performance Evaluation of In-network Integrated Caching,” under submission.

[4] Ayesha Saleem, “Performance Evaluation of the Cache and Forward Link Layer Protocol in Multihop Wireless Subnetworks,” Master’s Thesis, Rutgers University, WINLAB, 2008.

[5] Hongbo Liu, Yanyong Zhang, and Dipankar Raychaudhuri, “Performance Evaluation of the Cache-and-Forward (CNF) Network for Mobile Content Delivery Services,” ICC, 2009.

[6] Shweta Jain, Ayesha Saleem, Hongbo Liu, Yanyong Zhang, and Dipankar Raychaudhuri, “Design of Link and Routing Protocols for Cache-and-Forward Networks,” Sarnoff Symposium, 2009.

Cache and Forward Network


Project Objectives:
This project (started in September 2006) investigates Cache-and-Forward (CNF), a clean-slate, information-centric network architecture that exploits the rapidly decreasing cost of storage to support efficient content delivery to mobile users in the future Internet. The goal of the project is to design and validate the CNF network architecture using a combination of analysis, simulation, and real-time prototyping on network testbeds. Although proposed as a clean-slate design, CNF can also be implemented as an overlay over IP.

Figure 1 - CNF Protocol Stack

Technology Rationale:
Mobile content is an increasingly important category of Internet traffic given the proliferation of personal mobile phones and computing devices. Existing Internet protocols (e.g., TCP/IP) are not well suited to mobile content services because they were designed under very different assumptions, both in terms of service requirements and technology constraints. In particular, the TCP model assumes a continuous source-to-destination path and is based on the well-known “end-to-end principle,” which argues for keeping in-network functions to a minimum and pushing service-specific complexity to the end points at the edge of the network. While TCP has served remarkably well for the first 25 years of the Internet’s operation, the end-to-end principle has significant limitations when dealing with mobile users who experience intermittent and/or unreliable access over wireless channels. Moreover, the connection-oriented TCP/IP model was originally designed to support point-to-point data services rather than multipoint content dissemination. Many of the technology assumptions behind the end-to-end principle may also no longer apply: the cost of semiconductor memory (currently ~$10/GB) has dropped by about 5-6 orders of magnitude since the Internet was first designed, while link and CPU speeds have increased by 3-4 orders of magnitude to ~100 Mbps-1 Gbps and 1-10 Gbps, respectively. These considerations argue for a back-to-basics reconsideration of the end-to-end networking model, taking into account emerging requirements for large-scale mobile content services together with the increased capabilities of today’s core technologies.

The “cache-and-forward” (CNF) network architecture exploits these dramatic reductions in storage and processing costs to design a network that directly addresses the mobile content delivery problem. The key idea is to facilitate opportunistic transport on a hop-by-hop basis rather than end-to-end streaming of data as in TCP/IP. Such a hop-by-hop transport model implies large in-network storage of content files as they make their way through the network. In-network storage also enables content caching and content-aware routing as basic network capabilities rather than as external overlay services, as currently implemented in the Internet. The CNF architecture also introduces the concept of a post office (PO) specifically to improve content delivery to mobile users. The PO associates content in transit with its mobile requestor. If a destination becomes unavailable during content retrieval, the content is cached at an intermediate location and the PO is informed. When the mobile reconnects after an interruption, it can query the PO to obtain a pointer to the intermediate cache and resume content retrieval where it left off instead of issuing a new request to the content source. The conceptual architecture of the CNF network is shown in Figure 2.

 

 

Figure 2:  Conceptual View of the CNF Network
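To make the post-office mechanism described above more concrete, the following Python fragment is a minimal sketch of the PO bookkeeping. All names here (PostOffice, register_in_transit, resume_from) are illustrative assumptions for exposition, not part of the CNF specification.

```python
# Minimal sketch of the post-office (PO) bookkeeping described above.
# Names (PostOffice, register_in_transit, resume_from) are illustrative
# assumptions, not part of the CNF specification.

class PostOffice:
    def __init__(self):
        # (requestor, content_id) -> (cache_node, bytes_delivered)
        self.in_transit = {}

    def register_in_transit(self, requestor, content_id, cache_node, bytes_delivered):
        """Called by an intermediate router when the destination becomes
        unreachable and the remaining content is parked in its local cache."""
        self.in_transit[(requestor, content_id)] = (cache_node, bytes_delivered)

    def resume_from(self, requestor, content_id):
        """Called by the mobile after it reconnects: returns a pointer to the
        intermediate cache and the offset at which retrieval should resume,
        or None if no partial delivery is recorded."""
        return self.in_transit.get((requestor, content_id))


# Usage: the mobile queries the PO instead of re-requesting from the source.
po = PostOffice()
po.register_in_transit("mobile-7", "video-42", cache_node="router-3",
                       bytes_delivered=250_000_000)
print(po.resume_from("mobile-7", "video-42"))   # ('router-3', 250000000)
```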

Technical Approach:
We approach the design of the CNF architecture as a new protocol stack, as shown in Figure 1 above. We then design, develop, optimize, and evaluate each protocol layer, and compare the performance of the CNF protocols in mobile scenarios with that of existing protocols.

The CNF protocol stack (Figure 1) comprises a control plane, consisting of a content name resolution service (CNRS) and a content routing service, and a data plane, containing the CNF application, transport, network, and link protocols. The CNRS provides a naming service for content and resolves a content name to its actual location. The content routing protocol provides a content discovery service that searches in-network caches to find the copy of the content nearest to the requestor. The CNF application is an information-centric content service that delivers large files (tens of GB) to the requestor. The CNF transport protocol provides hop-by-hop transport of large content without assuming end-to-end connectivity. The routing protocol incorporates awareness of both short-term and long-term path metrics, along with in-network storage availability, when selecting a suitable path to the destination; the storage information allows it to opportunistically use a good link and store the content when end-to-end connectivity is unavailable. The link layer supports the hop-by-hop transport policy by providing per-hop content transfer reliability in addition to traditional per-packet reliability. This reliability is achieved by requiring an acknowledgment vector at every hop, ensuring that content in transit is successfully received at the next hop before it is forwarded onward. Caching and content retrieval are an integral part of the underlying network and the routing protocol in CNF: the content caching and retrieval protocols search in-network caches to find the nearest copy of the content rather than fetching it from the original source.
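As a rough illustration of the two control-plane roles described above, the sketch below models name resolution (content name to candidate locations) and nearest-copy selection. The class and function names, and the hop-count metric, are assumptions made for the example; they are not the CNRS or content routing protocol itself.

```python
# Sketch of the control-plane roles described above: the CNRS maps a content
# name to candidate locations, and content routing picks the copy nearest to
# the requestor. Names and the hop-count metric are illustrative assumptions.

class ContentNameResolver:
    def __init__(self):
        self.locations = {}          # content name -> set of nodes holding it

    def publish(self, name, node):
        self.locations.setdefault(name, set()).add(node)

    def resolve(self, name):
        return self.locations.get(name, set())


def nearest_copy(name, resolver, hop_count):
    """Pick the holder with the smallest hop count from the requestor.
    hop_count is assumed to be supplied by the content routing service."""
    holders = resolver.resolve(name)
    return min(holders, key=hop_count, default=None)


cnrs = ContentNameResolver()
cnrs.publish("movie/alpha", "origin-server")
cnrs.publish("movie/alpha", "edge-router-5")      # in-network cached copy
hops = {"origin-server": 12, "edge-router-5": 2}
print(nearest_copy("movie/alpha", cnrs, hops.get))   # edge-router-5
```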

Results to Date and Future Work Plan:

CNF Transport Protocol [5]:
In the CNF transport protocol, a very large file (tens of GB) is first fragmented into smaller chunks (~100 MB-1 GB) at the original source, and each chunk is then transmitted in a hop-by-hop fashion. Error control and congestion control are delegated to the link and network layers, keeping the transport protocol much simpler than TCP. This approach makes it possible to serve mobile users with intermittent connectivity while also mitigating the self-interference problems that arise in multi-hop wireless scenarios. Hop-by-hop transport is similarly useful in wired networks, where router storage can help smooth out the link congestion bottlenecks that arise in TCP/IP networks.
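The chunking step can be sketched in a few lines. The fragment below is only a minimal illustration of splitting a file into fixed-size chunks for hop-by-hop forwarding; the chunk size and function name are assumptions, not values fixed by the protocol.

```python
# Sketch of the chunking step of the CNF transport protocol: a large file is
# split into fixed-size chunks at the source, and each chunk is handed to the
# next hop independently. Chunk size and function names are assumptions.

CHUNK_SIZE = 100 * 1024 * 1024   # ~100 MB, within the range quoted above

def iter_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield (sequence_number, bytes) pairs for hop-by-hop forwarding."""
    with open(path, "rb") as f:
        seq = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield seq, data
            seq += 1

# Each chunk is then forwarded one hop at a time; error and congestion control
# are left to the link and network layers, so no end-to-end state is kept here.
```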

CNF Routing Protocol [6]:
The routing protocol incorporates awareness of both short-term and long-term path metrics, along with in-network storage availability, in the routing decision. CNF routers have sufficient capacity for temporary storage; therefore, if the path quality is poor or the end user is temporarily disconnected, the routing protocol stores the file in transit and delivers it later, when conditions improve. This opportunistic use of good links yields a network-wide throughput improvement.
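The store-or-forward decision described above can be illustrated with a small sketch. The threshold, data structures, and names below are assumptions chosen for the example, not the actual CNF routing logic.

```python
# Sketch of the store-or-forward decision described above: forward over a good
# link if one exists, otherwise hold the file in local storage. The threshold
# and data structures are illustrative assumptions, not the actual protocol.

GOOD_LINK_THRESHOLD = 0.7   # assumed normalized link-quality cutoff

def route_or_store(neighbors, local_storage_free, file_size):
    """neighbors: list of (node, link_quality) with quality in [0, 1].
    Returns ('forward', node) if a good link exists, ('store', None) if the
    file should wait in the router's cache, or ('drop', None) as a last resort."""
    usable = [(q, n) for n, q in neighbors if q >= GOOD_LINK_THRESHOLD]
    if usable:
        return "forward", max(usable)[1]          # pick the best good link
    if local_storage_free >= file_size:
        return "store", None                      # wait for conditions to improve
    return "drop", None

print(route_or_store([("r1", 0.4), ("r2", 0.9)], 10**9, 10**8))  # ('forward', 'r2')
```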

CNF Link Protocol [4,6]:
In CNF, the link protocol must receive the entire file or content chunk before passing it to the routing layer. In this context we therefore introduce per-hop file transfer reliability, which may be used instead of, or in addition to, the per-packet reliability provided by an 802.11-like MAC protocol. When a file to be transferred arrives at the link layer, it is assigned a temporally unique identifier and divided into batches of a fixed number of packets. The link layer then reliably transfers each batch to the next hop before moving on to the next batch. Reliability is achieved by requiring an acknowledgment vector at every hop, ensuring that the content in transit is successfully received at the next hop before it is forwarded onward.
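A minimal sketch of the per-batch acknowledgment-vector loop follows. The batch size, helper names, and retry bound are assumptions for illustration; the actual link protocol is defined in [4,6].

```python
# Sketch of per-hop batch transfer with an acknowledgment vector, as described
# above: the sender retransmits only the packets whose bit in the vector is
# unset before moving on to the next batch. Batch size and helper names are
# illustrative assumptions.

BATCH_SIZE = 64   # assumed fixed number of packets per batch

def send_batch(batch, send_packet, recv_ack_vector, max_rounds=10):
    """batch: list of packets; send_packet(i, pkt) transmits one packet;
    recv_ack_vector() returns a list of booleans, one per packet in the batch.
    Returns True once every packet in the batch is acknowledged."""
    pending = set(range(len(batch)))
    for _ in range(max_rounds):
        for i in sorted(pending):
            send_packet(i, batch[i])
        acks = recv_ack_vector()
        pending = {i for i in pending if not acks[i]}
        if not pending:
            return True          # whole batch received; the next batch may start
    return False                 # give up after max_rounds (file stays cached)
```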

CNF Content Caching:
In-network caching is achieved in CNF by caching content in transit at the CNF routers; this protocol is known as cache-and-capture (CC) [2]. Routers periodically advertise the content they have cached in order to build a searchable list of local caches; this process is called cache-and-broadcast (CB) [2]. An enhancement over CB is coordinated cache-and-broadcast (CCB) [3], in which routers coordinate with their neighbors to decide which content should be cached. The content retrieval protocol then searches the local in-network caches to find the nearest copy of the content rather than fetching it from the original source.
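The cache-and-broadcast idea can be sketched as a local directory built from periodic advertisements. The class, method names, and hop-distance bookkeeping below are assumptions made for the example, not the protocol as specified in [2,3].

```python
# Sketch of the cache-and-broadcast (CB) idea described above: each router
# periodically advertises the names it has cached, and its neighbors fold
# those advertisements into a searchable local directory. Data structures and
# names are illustrative assumptions.

class CacheDirectory:
    def __init__(self):
        self.directory = {}        # content name -> {router: hop distance}

    def on_advertisement(self, router, cached_names, hop_distance):
        """Process one periodic advertisement from a neighboring router."""
        for name in cached_names:
            self.directory.setdefault(name, {})[router] = hop_distance

    def nearest_cache(self, name):
        """Return the closest router known to cache `name`, or None."""
        holders = self.directory.get(name)
        if not holders:
            return None
        return min(holders, key=holders.get)


d = CacheDirectory()
d.on_advertisement("router-2", {"movie/alpha", "song/beta"}, hop_distance=1)
d.on_advertisement("router-9", {"movie/alpha"}, hop_distance=4)
print(d.nearest_cache("movie/alpha"))   # router-2: fetched nearby, not from source
```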

The CNF project is currently at an intermediate stage of development. We have completed the initial design and evaluation, and we are now working on protocol optimization and on testbed, proof-of-concept evaluation of each protocol in the ORBIT emulation environment. A comprehensive evaluation of the complete CNF architecture with the entire protocol stack will constitute the major part of our future efforts in this area.

 



Contacts:

Prof. Dipankar Raychaudhuri
732-932-6857 Ext. 638
ray(AT)winlab(DOT)rutgers(DOT)edu





