
Past Projects

DISCUS FP7 project (http://www.discus-fp7.eu/)

The DIStributed Core for unlimited bandwidth supply for all Users and Services (FP7 Grant Agreement 318137).
DISCUS is the FP7 project coordinated by TCD that I am currently managing, together with David Payne and Catherine Keogh.
The project goal is to build on demonstrated technologies and concepts to define and develop a radically new architectural concept, one that can enable an integrated wireless and FTTP future network addressing the economic, energy consumption, capacity scaling, evolutionary, regulatory and service demand challenges arising from an FTTP-enabled future.

Peer-to-peer applications for next generation passive optical networks

The focus of this project is on context-aware strategies to optimize bandwidth utilization and improve energy efficiency for data-intensive applications, such as multimedia distribution.

The big picture
Bandwidth increase and revenue shrinking trends

Over the past decade, the growth of Internet-related technologies and services was made possible by the exponential growth of available bandwidth, mainly supported by EDFA and WDM technologies at the optical layer and by extensive silicon chip integration in the electronics. These technologies, however, seem to have reached saturation. We do not currently have competitive solutions to exploit the additional bandwidth in the fibre. Similarly, scalability issues in dynamic heat dissipation in silicon technology have considerably slowed down the capacity growth of network electronic equipment, such as IP routers. Power consumption has also become a major concern in networking and, more generally, for the IT sector. Although IT-related power consumption is currently between 1 and 2% of the total, it is growing three times faster than the rest. The concern generated by this growth has pushed much of current network research towards the reduction of power consumption.

While new techniques aimed at increasing the data rate that can be carried over a single fibre (such as multi-level modulation, OFDM and coherent transmission) are being developed, it is believed that their ability to reduce the cost-per-bandwidth ratio will not be comparable to that enabled by EDFA and WDM technologies over the past decade. Network capacity will keep growing for the foreseeable future, but this growth will not benefit from the same economies of scale we have seen in the past, and bandwidth will be increasingly perceived as a scarce resource. This trend will continue unless significant innovation emerges in optical transmission (e.g., with new transmission media) and in the electronic processing domain (e.g., for switching and routing functions). The quest for a reduction in bandwidth cost will thus need to be tackled at multiple levels. Besides technological advances at the physical layer, means for a more efficient use of scarce bandwidth resources are also required. For example, achieving quality of service through massive static over-provisioning of network resources is not sustainable; nor is the current practice of routing traffic almost exclusively through IP routers. These technologies do not seem able to scale to support the Internet's evolution in the foreseeable future, where high peak rates will be required to offer quality of experience to a multitude of bandwidth-hungry applications, such as HD video on demand and thin-client computing.

It is our belief that dynamic and intelligent bandwidth allocation will be a key feature in enabling efficient bandwidth usage and reducing the pressure on the network core. Such efficiency will also be reflected in reduced power consumption. Both access and core architectures will need to be redesigned to be more flexible, self-aware, self-managing, and bandwidth and power efficient.

The framework: next-generation access networks

Video on Demand (VoD) and Internet Television (IPTV) have become the biggest sources of Internet traffic, and this trend is only going to consolidate as more and more companies enter the fray to compete in this promising market segment. However, multimedia services still pose a number of challenges to current network infrastructures. The combination of heavy bandwidth requirements and a high concentration of requests during peak hours makes bandwidth provisioning a non-trivial task for network operators. We are thus witnessing a gradual move from traditional copper-based access technologies towards high-speed Fibre-to-the-X (FTTX) solutions, in order to meet this increasing demand for bandwidth. This shift, however, has deeper implications for the architecture of the network.

Network simplification
Specifically, FTTX is changing the rules of the game by greatly simplifying and “flattening” the traditional hierarchical structure of the Internet, where a high-speed optical core aggregates traffic through statistical multiplexing, dispatching it to SDH/SONET metro rings and finally to copper-based access sections. In particular, at CTVR we envision a future network where Long-Reach Passive Optical Networks (LR-PONs) will play a key role in reducing network operators' OpEx. By introducing a limited number of active elements (i.e., optical amplifiers) in the access section, LR-PONs extend the coverage span of traditional PONs from 20 km to up to 100 km. The number of Local Exchange sites required to interconnect remote customers can thus be greatly reduced; metro rings might disappear entirely, replaced by an architecture in which a few metro/core nodes each aggregate hundreds of thousands of customers. A back-of-envelope sketch of this consolidation follows below.
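To make the consolidation argument concrete, here is a purely illustrative back-of-envelope sketch in Python: assuming an idealised circular coverage area around each exchange site (an assumption of ours, not a planning rule from the project), extending the reach from 20 km to 100 km multiplies the area served by a single site by a factor of 25.

import math

def coverage_area_km2(reach_km: float) -> float:
    # Idealised circular coverage area served from a single exchange site.
    return math.pi * reach_km ** 2

traditional_pon = coverage_area_km2(20)    # classic PON reach
long_reach_pon = coverage_area_km2(100)    # LR-PON with optical amplification

# (100/20)^2 = 25: one LR-PON node can cover the area of ~25 classic sites.
print(f"Coverage gain: {long_reach_pon / traditional_pon:.0f}x")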

While the increased capacity guaranteed by these FTTX technologies is certainly a key enabler for multimedia services, it also reduces the bandwidth gap between access and core links; consequently, traffic aggregation in the core becomes less and less effective, a problem aggravated by the high concentration in time of multimedia requests. Rather than solving the bandwidth bottleneck issue, FTTX will shift it towards the core network. As the same technology (i.e., fibre) is installed both in the access and in the core, the aggregation capabilities of the core saturate. Since no better transmission technology than fibre currently exists, core bandwidth can only be increased by stacking more and more communication equipment, a practice that is neither effective nor sustainable, both from an economic and from a power consumption perspective.

The solution: locality-awareness

LR-PONs also present a number of traits that make them an ideal framework for peer-to-peer (P2P) approaches to content distribution. P2P-based solutions have become increasingly popular in content distribution applications, both for their inherent scalability and for the cost savings they enable: by leveraging the upload bandwidth of participating customers, content providers do not need to bear the entire bandwidth cost of distributing the content, as they would in a traditional client-server scenario. Unfortunately, this approach poses a serious threat to ISPs' infrastructures and business models, both because of the sheer amount of traffic it generates and because of the additional costs it imposes on network operators in terms of inter-ISP traffic. Locality-aware policies have been repeatedly proposed by researchers as a potential solution to these issues: by limiting P2P exchanges to peers residing in the same Autonomous System (AS), it could be possible to reduce traffic on the Internet backbone, speed up data transfers by reducing network latencies, and drastically reduce costs for ISPs.

Unfortunately, these policies can only be successful for a given client when there are enough local peers sharing the desired content. P2P protocols such as BitTorrent typically aim for a ratio of uploading peers to clients of about 10:1, in order to overcome the discrepancy between the upload and download bandwidth of residential DSL customers. Considering that on average 82% of all torrents have fewer than 10 active peers in the entire network at any given moment [1], it should be evident that finding enough local peers to satisfy this 10:1 ratio is often hard if not impossible; locality schemes therefore have only limited applicability in traditional networks.
However, this imbalance between upload and download bandwidth is only due to the current limitations of copper-pair technology. State-of-the-art FTTH deployments have already reduced this ratio to 4:1 or even 2:1 in some cases (e.g., GPON); the expected advent of next-generation optical access architectures can further reduce it to 1:1. A sketch of such a locality-first peer selection policy is shown below.
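As an illustration, here is a minimal sketch of a locality-first peer selection policy of the kind discussed above; the Peer record, the tracker-side vantage point and the 10:1 default are illustrative assumptions of ours, not a description of any specific BitTorrent implementation.

from dataclasses import dataclass

@dataclass
class Peer:
    peer_id: str
    asn: int          # Autonomous System number the peer resides in
    uploading: bool   # True if the peer has spare upload capacity

def select_peers(client_asn: int, candidates: list[Peer],
                 target_uploaders: int = 10) -> list[Peer]:
    # Prefer uploaders in the client's own AS; top up with remote peers
    # only when the local swarm cannot reach the target ratio on its own.
    local = [p for p in candidates if p.uploading and p.asn == client_asn]
    remote = [p for p in candidates if p.uploading and p.asn != client_asn]
    return (local + remote)[:target_uploaders]

With symmetric FTTH bandwidth the fallback to remote peers fires far less often, which is precisely the effect the locality argument relies on.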

Furthermore, the simplified network structure introduced by LR-PONs, mentioned above, will aggregate hundreds of thousands of users at a logical distance of only a few hops. These extended metro/core communities represent the ideal environment for locality-aware policies, as the sheer number of available peers dramatically increases the chances of finding a local copy of the required content without leaving the access segment of the network.

We believe that the synergy of P2P and next-generation long-reach access architectures can solve the bandwidth problem from an architectural perspective (rather than from a technology viewpoint). By turning traffic around at the edge of the network we can reduce the load on core links, and thus improve the scalability and power efficiency of the system, while at the same time taking full advantage of the upstream capacity provided by FTTH technologies.

Experimental results

To evaluate the benefits of our proposed approach, we first performed a steady-state simulation analysis of the traffic loads imposed by different multimedia delivery schemes under various network conditions. Specifically, the performance of client-server, Content Delivery Network (CDN), non-local P2P and locality-aware P2P delivery, the latter with both asymmetric and symmetric end-user bandwidth, is compared in each simulation run. A Zipf-Mandelbrot popularity model is used to generate realistic video request patterns; a minimal sketch of this sampling process is shown below.
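For reference, this is a minimal sketch of request generation under a Zipf-Mandelbrot model, where the video of popularity rank k is requested with probability proportional to 1/(k + q)^s; the parameter values below are illustrative and not those used in the actual study.

import random

def zipf_mandelbrot_weights(catalogue_size: int, s: float = 0.8,
                            q: float = 5.0) -> list[float]:
    # Weight of the video with popularity rank k is 1 / (k + q)^s.
    weights = [1.0 / (k + q) ** s for k in range(1, catalogue_size + 1)]
    total = sum(weights)
    return [w / total for w in weights]

catalogue = 10_000                           # number of titles on offer
probs = zipf_mandelbrot_weights(catalogue)
# Draw a batch of requests: a few titles dominate, with a long tail.
requests = random.choices(range(catalogue), weights=probs, k=1000)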

Core traffic reduction of locality-aware P2P

The results show that significant core traffic savings can be achieved through locality-aware symmetric P2P without enforcing any sort of content caching policy, e.g. just by locally storing the latest video watched by each user. The performance of this approach improves with the number of active users in the system, as this has a direct impact on the size of the distributed cache available in each access segment; however, even with only 10% of users active on each PON, locality-aware symmetric P2P outperforms every other content distribution scheme in terms of core traffic. Furthermore, this can translate into power savings for network operators, as the reduced core traffic requires fewer network interfaces. For further details, see [2]. The caching policy itself is simple enough to sketch in a few lines, as shown below.
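To make the policy concrete, here is a minimal sketch of this single-slot, "latest video watched" caching scheme; all names and data structures are illustrative, not taken from our simulator.

from collections import defaultdict

last_watched: dict[str, str] = {}                     # user -> latest video
pon_members: dict[str, set[str]] = defaultdict(set)   # PON -> users on it

def watch(pon_id: str, user_id: str, video_id: str) -> str:
    # Serve a request, preferring a copy cached by a peer on the same PON.
    pon_members[pon_id].add(user_id)
    local_holders = [u for u in pon_members[pon_id]
                     if u != user_id and last_watched.get(u) == video_id]
    source = "local peer" if local_holders else "core network"
    last_watched[user_id] = video_id   # cache only the latest title watched
    return source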

The following step was to analyse the impact of the dynamic evolution of video content popularity on the performance of such a network-managed, peer-to-peer based caching scheme for multimedia distribution. For this purpose, we developed an event-driven simulator, which we called PLACeS (Peer-to-peer Locality Aware Content dElivery Simulator); a generic skeleton of this kind of simulator is sketched below. Separate popularity models were introduced to study VoD and time-shifted IPTV. The results of these studies have been submitted as two separate papers and are currently awaiting evaluation from the scientific community; as such, they cannot be published here at the moment.
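PLACeS itself is not published here, but the core of any event-driven simulator is a priority queue of timestamped events processed in chronological order; the skeleton below is a generic sketch of that loop, with placeholder event kinds rather than the actual PLACeS event model.

import heapq
from typing import Callable

Event = tuple[float, int, str]   # (timestamp, sequence number, event kind)

def run(initial: list[Event],
        handler: Callable[[float, str], list[Event]],
        horizon: float) -> None:
    # Pop events in chronological order; a handler (e.g. "request arrives",
    # "transfer completes") may schedule follow-up events in response.
    queue = list(initial)
    heapq.heapify(queue)
    while queue:
        time, _, kind = heapq.heappop(queue)
        if time > horizon:
            break
        for follow_up in handler(time, kind):
            heapq.heappush(queue, follow_up)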

[1] C. Zhang, P. Dhungel, D. Wu, and K. W. Ross, “Unraveling the BitTorrent ecosystem,” IEEE Transactions on Parallel and Distributed Systems, vol. 22, no. 7, pp. 1164–1177, Jul. 2011.

[2] E. Di Pascale, D. B. Payne, and M. Ruffini, “Bandwidth and Energy Savings of Locality-Aware P2P Content Distribution in Next-Generation PONs,” in Proceedings of ONDM 2012.

Intelligent, self-aware, cross-layer optical networks

Although Internet traffic growth has slowed somewhat over the past few years, traffic forecasts show that annual growth of about 40% remains a plausible scenario over the next five years. Considering that service providers are now deploying FTTx solutions (many of which are FTTH), which have the potential to increase customer peak bandwidth by well over two orders of magnitude compared with current copper-access offers, such forecasts may well represent a lower bound. Traffic growth has put the core network under pressure to provide more and more bandwidth at lower and lower cost, and forecasts show this trend continuing for the foreseeable future.

As discussed in the big picture above, the technologies that sustained the past decade of exponential bandwidth growth (EDFA and WDM at the optical layer, extensive silicon chip integration in the electronics) appear to have reached saturation, while power consumption has become a major concern for the whole IT sector: although IT-related power consumption is currently between 1 and 2% of the total, it is growing three times faster than the rest. Probably the most relevant example of the resulting research effort is the GreenTouch consortium, which includes many of the top vendors, operators and universities in the world, and aims at reducing the power consumption of networks by a factor of 1000 within the next five years. In addition, networking technology could help further reduce global power consumption in other areas such as transport, by making ideas (i.e., bits) travel instead of people.

New transmission techniques such as multi-level modulation, OFDM and coherent transmission will increase the data rate that a single fibre can carry, but their ability to reduce the cost-per-bandwidth ratio is not expected to match that of EDFA and WDM over the past decade: bandwidth will be increasingly perceived as a scarce resource, and neither massive static over-provisioning nor routing traffic almost exclusively through IP routers will scale to the high peak rates demanded by bandwidth-hungry applications such as HD video on demand and thin-client computing. We believe that dynamic and intelligent bandwidth allocation will be a key feature in enabling efficient bandwidth usage and reducing the pressure on the network core, with a corresponding reduction in power consumption. Both access and core architectures will need to be redesigned to be more flexible, self-aware, self-managing, and bandwidth and power efficient.

Optical IP Switching (OIS)

Optical IP Switching is a technique we have developed that creates and deletes optical cut-through paths in response to local analysis of IP traffic. Switching data directly in the optical domain has important consequences. On the one hand, it allows cost savings, as optical switch ports are data-rate independent and cost tens of times less than IP ports. On the other hand, optical switching operates at the wavelength granularity, which can become quite inefficient compared to the packet granularity offered by electronic routers and switches. This inefficiency originates from the difference in granularity between electronic routing, where data is switched packet by packet (each packet on the order of a kilobyte in size), and wavelength switching, where data is switched at the channel rate (on the order of a few Gbps). We bridge this large gap (about six orders of magnitude) by first reorganizing the packets into IP flows, decreasing the granularity to values between hundreds of Kbps and a few Mbps. We then aggregate the flows sharing a common route into the same dedicated optical cut-through paths, using a method we have developed that groups flows according to their destination network. The path creation algorithm finally selects the flow aggregates eligible for dedicated cut-through paths, taking into consideration the aggregate data rate, the available resources, the network policies of its own domain and, in the case of inter-domain operation, the network policies of its neighbouring domains.
The distinguishing feature of our approach is that the optical path provisioning mechanism is completely distributed and based only on local decisions. We believe that this approach better satisfies the requirements of Internet network architectures (especially in the inter-domain case), where existing distributed routing protocols have proved very effective at coping with large-scale deployment, high heterogeneity (both of network technologies and of user applications) and high traffic variability. For example, by adopting the OIS architecture, different nodes can implement their own policies to decide whether an incoming signal should be transparently switched or locally terminated. A sketch of this local selection step is given below.
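As an illustration of the local decision step, the sketch below groups observed flows by destination network and promotes the largest aggregates to cut-through paths, subject to a rate threshold and the number of free wavelengths; the threshold, function names and data structures are illustrative assumptions, not the actual OIS implementation.

from collections import defaultdict

def eligible_aggregates(flows: list[tuple[str, float]],
                        threshold_mbps: float,
                        free_wavelengths: int) -> list[str]:
    # flows: (destination network prefix, flow rate in Mbps) pairs obtained
    # from local traffic analysis at this node.
    aggregate: dict[str, float] = defaultdict(float)
    for prefix, rate in flows:
        aggregate[prefix] += rate      # group flows sharing a common route
    # Promote the largest aggregates first, within the available resources.
    ranked = sorted(aggregate.items(), key=lambda kv: kv[1], reverse=True)
    return [prefix for prefix, rate in ranked
            if rate >= threshold_mbps][:free_wavelengths]

Because the decision uses only locally observed flows and locally known resources, each node can apply its own threshold and policies independently, which is what keeps the scheme fully distributed.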

OIS interface to UCLP (User Controlled Light Path)

This project was developed in collaboration with i2cat, HEAnet and Glimmerglass, and involved the integration of UCLP and OIS, so that bandwidth-on-demand (BoD) services could be requested automatically by the OIS nodes following traffic flow analysis. In this model, OIS acts as a client of the external UCLP networks.
When an OIS node detects traffic destined for a specific external domain, it can automatically request the bandwidth needed to reach the desired destination. The UCLP server is the central unit in charge of signalling and scheduling operations: it receives the bandwidth requests, calculates the best routes and signals the network elements to provision the optical paths. The interface we developed for this experiment allows the OIS node to log into the UCLP server, download the network topology, and request and release dedicated end-to-end paths. The novelty of our implementation is that we provided the UCLP server with a mechanism that stores an updated list of the network prefixes reachable through each node. The shape of this interface is sketched below.
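For illustration only, the sketch below shows the shape of such a client-side interface as seen from an OIS node; the real integration used UCLP's own API, so every class and method name here is a hypothetical stand-in rather than the actual interface.

class UCLPClient:
    # Hypothetical client-side view of the UCLP server; method names are
    # illustrative stand-ins for the real API.
    def __init__(self, server_url: str):
        self.server_url = server_url
        self.topology = None
        self.prefix_map: dict[str, list[str]] = {}   # node -> prefixes

    def login(self, user: str, password: str) -> None: ...
    def download_topology(self) -> None: ...

    def node_for_prefix(self, destination_prefix: str) -> str:
        # Locate the node advertising the destination network, using the
        # per-node prefix list kept on the server (the novelty noted above).
        for node, prefixes in self.prefix_map.items():
            if destination_prefix in prefixes:
                return node
        raise LookupError(destination_prefix)

    def request_path(self, src_node: str, dst_node: str) -> str: ...
    def release_path(self, path_id: str) -> None: ...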
