
The beginning of the end for the role of operators as we know them?

This is a recent article on Google's deployment of a fibre access network in Kansas City:

http://gigaom.com/2012/07/26/the-economics-of-google-fiber-and-what-it-means-for-u-s-broadband/

I believe this means much more than broadband access... it could be the beginning of a new evolutionary process in data communications networks, where service providers start building their own networks and then interconnect them among themselves... and wait, this reminds me of something... I believe that is how the Internet was born.

Is this the real beginning of Internet 2.0?

You might think a network is just a network that only carries bits around; so, if current network operators are not profitable, what advantage can a service provider like Google gain from building its own access network?

Network operators don't have much cash to spare and don't make much money from evolving their access networks to fibre, while bearing all the risk. So their incentive to invest in the access network is low.

On the other hand, service providers are the ones with the cash, and their future revenue depends on building new applications that will benefit hugely from fast access networks.

But back to Google... there is more to it, I think, besides the above speculation about who benefits from faster access. They do it the Google way.

They use more off-the-shelf components and seem to have highly optimised processes already (the thing that tends to drive up costs for operators). They even change the typical business model, convincing users to commit to the product before the roll-out even starts! It seems they are capable of bringing about the kind of innovation that a proper fibre roll-out requires.

There has been a lot of talk about operators considering selling content, because they are not making any money... but maybe it will go the other way around: the content providers will start building the network, then interconnect it among themselves, and they will build the real Internet 2.0.

This would be a huge change in the network ownership model…

I suppose, if we had to look at the negative side of it, we could see that with more than one service provider building networks, market segmentation would increase. Leaving the economic debate aside, this could also bring technology segmentation, where different providers might use technologies that are not interoperable except at the IP layer. In addition, without proper regulation there won't be a proper national roll-out plan, so it might only cover pockets of "wealthier" population, exacerbating the digital divide.


HEAnet national conference talks are online

Last month I was at the HEAnet national conference. It’s a small conference, but it’s really good for networking within Ireland.

My talk is available here:

http://www.heanet.ie/conferences/2011/talks/id/45

There were many very interesting talks, especially the one from a BT futurologist:

http://www.heanet.ie/conferences/2011/talks/id/17

Optical lectures

I have recently started teaching part of the CS7002 module in the Computer Science department of Trinity College Dublin. I had previously recorded some of these lectures at CTVR, the telecommunications research centre where I carry out my research.

Over the next two weeks I will post these lectures on YouTube and link to them from the Teaching section of this site:

https://mruffini.wordpress.com/teaching/

 

Plug-and-play FPGA design

Network researchers need a cheap and reconfigurable solution for testing new ideas, be these new algorithms, protocols, or entirely new architectural concepts. Software-based simulation has so far been the main testing environment for network research. However, as the data rates of commercial communication systems have increased rapidly, the gap between the rates that can be simulated and operational rates is constantly widening.

If we look at some tools that have gained popularity among network researchers over the past few years, we can probably infer what their key requirements are:

1) The Click modular router: a software-based platform running on commodity hardware that allows easy plug-and-play composition of a (growing) number of network elements to build up a network stack (a minimal sketch of this composition idea follows this list)

2) The NetFPGA platform: an FPGA-based reconfigurable hardware platform for running different network devices at line-rate speed (1G and 10G)

3) The OpenFlow platform: a framework that allows one to control and modify the behaviour of network devices that implement it.
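
To make the plug-and-play idea behind (1) a little more concrete, here is a minimal, self-contained C++ sketch of how a stack can be assembled by chaining small, reusable elements behind a common interface. The element names and interfaces here are hypothetical and only illustrate the composition concept; they are not Click's actual API (in Click the elements are C++ classes and the wiring is described in Click's own configuration language).

// Toy illustration of the plug-and-play element idea popularised by Click.
// NOT Click's actual API: element names and interfaces are hypothetical,
// chosen only to show how a stack is built by chaining small blocks.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

struct Packet {
    std::vector<uint8_t> data;
};

// Every element exposes the same push interface, so any two elements
// can be connected regardless of what they do internally.
class Element {
public:
    virtual ~Element() = default;
    void connect(Element* next) { next_ = next; }
    virtual void push(Packet& p) = 0;
protected:
    void forward(Packet& p) { if (next_) next_->push(p); }
    Element* next_ = nullptr;
};

class SizeFilter : public Element {   // drops packets larger than a threshold
public:
    explicit SizeFilter(std::size_t max_len) : max_len_(max_len) {}
    void push(Packet& p) override { if (p.data.size() <= max_len_) forward(p); }
private:
    std::size_t max_len_;
};

class Counter : public Element {      // counts packets passing through
public:
    void push(Packet& p) override { ++count_; forward(p); }
    uint64_t count() const { return count_; }
private:
    uint64_t count_ = 0;
};

class Sink : public Element {         // end of the pipeline
public:
    void push(Packet& p) override {
        std::cout << "delivered packet of " << p.data.size() << " bytes\n";
    }
};

int main() {
    // "Wiring" the stack: filter -> counter -> sink.
    SizeFilter filter(1500);
    Counter counter;
    Sink sink;
    filter.connect(&counter);
    counter.connect(&sink);

    Packet small{std::vector<uint8_t>(64)};
    Packet jumbo{std::vector<uint8_t>(9000)};
    filter.push(small);   // passes the filter, is counted and delivered
    filter.push(jumbo);   // dropped by the filter
    std::cout << "packets counted: " << counter.count() << "\n";
}

The point the three tools above share is exactly this: a full stack is assembled from small interchangeable blocks rather than written as a monolith, so new ideas can be slotted in without rebuilding everything around them.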

It is clear then that we, network researchers, have been looking for:

a) a tool for testing proprietary algorithms, protocols or architectural frameworks at line rate, for proof-of-concept demonstrations in realistic scenarios

b) a platform that allows reusing existing elements (either proprietary or publicly available). Although we tend to focus our work on a subset of a communication stack, we often need to implement the entire stack in order to carry out experiments. Thus an ideal platform should provide many ready-to-use ordinary and advanced elements operating at different network layers.

c) a one-size-fits-all platform, which can be completely reconfigured to reproduce the behaviour of any (or at least a large number of) network devices. As researchers move to different topics or tasks, they should be able to reuse their hardware

d) a platform where protocol stacks are easy and quick to deploy and modify. We want to spend more time developing the idea than on its hardware implementation

The problem is that such features do not yet all exist in one platform. The ideal platform can be summarised in a few words: we want high-level software programming running at hardware speed. Basically, a Click-like (1) environment able to run on a NetFPGA (2) board. Ideally, as a third step, we would also like such a platform to be embedded in commercial network devices, as with OpenFlow (3), so that large testbeds can be reprogrammed to run experiments.

There is a growing number of research projects in the FPGA application field that aim at realising such an ideal platform, the most popular being Chimpp, Cliff, CUSP and Software-Defined Silicon.

At CTVR we also have a project with a similar goal: we are investigating how high-level synthesis design tools can be embedded in a Click-like architecture, to deliver a high-level software programming tool that produces FPGA-based implementations running at hardware speed.
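
As an illustration of what "high-level programming running at hardware speed" means in practice, below is a minimal sketch of the kind of C++ description that a high-level synthesis tool can turn into a pipelined hardware block. The element (a toy frame checksum) and the function name are hypothetical, and the pragma shown follows the Xilinx Vivado HLS style (other tools use different annotations); this is a sketch of the general approach, not our actual implementation.

// Sketch of a C-level description that an HLS tool can map to hardware.
// The element (a byte-wise checksum over a fixed-size frame) is purely
// illustrative; the pragma follows the Xilinx Vivado HLS convention.
#include <cstdint>

constexpr int FRAME_BYTES = 64;

// Compute a simple additive checksum over one frame. An HLS tool can
// pipeline this loop so that the synthesised block processes one byte
// per clock cycle, rather than executing the loop as sequential
// instructions on a CPU.
uint16_t frame_checksum(const uint8_t frame[FRAME_BYTES]) {
    uint16_t sum = 0;
    for (int i = 0; i < FRAME_BYTES; ++i) {
#pragma HLS PIPELINE II=1
        sum += frame[i];
    }
    return sum;
}

The attraction of this approach is that the same source remains ordinary, testable software while the tool takes care of the hardware implementation details; the idea behind our project is to combine this kind of flow with a Click-like element architecture, as described above.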

Stay tuned for updates…

FPGA vs ASIC

A couple of weeks ago I was at the Xilinx Research Labs open day in Dublin, where, as CTVR, we presented a demo and a few posters on both wireless and optical networking: http://ledoyle.wordpress.com/2011/01/28/xilinx-research-labs-open-day/

One of the points I was making in one of the posters was that we envisage the Long-Reach PON fibre-to-the-home solution would ideally use FPGAs rather than ASICs for its commercial implementation.

What I heard from the Xilinx CTO was quite in line with our view. Although it is recognised that ASICs are more efficient than FPGAs (on average it takes about 30 times as many logic gates to implement a function in an FPGA as in an ASIC), there are a number of advantages that I believe will make FPGAs win the battle. This is what I learned about those advantages from a commercial perspective:

  • FPGAs tend to be a couple of process generations ahead of ASICs, so in practice the factor of 30 I just mentioned is lower than that
  • ASICs require very high volumes to be economically feasible (hundreds of thousands of units), and the design process is extremely slow and inflexible. This means that products developed in ASIC are already old when they get to market. More and more consumer products (e.g., LCD TVs) adopt FPGAs to be able to embed the latest developments (e.g., algorithms). So FPGAs can give a competitive advantage by adding value to the product
  • Last but definitely not least, Xilinx is developing high-level language tools to make FPGA programming accessible to anyone… this is, I believe, the single biggest advantage… Just remember the revolution that object-oriented languages brought to software development…

This also has great implications from a green-planet perspective... Rather than selling you products (e.g., again, your LCD TV), vendors could sell you upgrades, thus reducing the energy consumed in production and the waste due to continuous product replacement.

What are the implications for the telecoms world?

If (actually... when) FPGAs replace ASICs in telecoms, I can see huge changes in the way telecoms standards operate. Standardisation could evolve much faster, as standards would be decoupled from the inflexibility and long development times of ASICs. On the other side, telecoms vendors would be able to adopt new standards very quickly... This would give them a competitive advantage and make a better world for us.

By the way, we've just started implementing the LR-PON protocol (https://mruffini.wordpress.com/research/long-reach-passive-optical-networks/) in an FPGA, using the 10G NetFPGA board (http://www.netfpga.org/).

 

 

My new blog!

Hi All!

 

Welcome to my new blog.

I’ll be posting regularly with my views on flexible telecommunications networks…