Changes in Networking Systems

I’ve been posting frequently on networking issues, the key point being that the market is on the precipice of a massive change: a new model is emerging.

· Datacenter Networks are in my way

· Networking: The Last Bastion of Mainframe Computing

We now have merchant silicon providers for the Application Specific Integrated Circuits (ASICs) at the core of network switches and routers, including Broadcom, Fulcrum (recently purchased by Intel), Marvell, and Dune (purchased by Broadcom). We have many competing offerings for the control processor that runs the protocol stack, including Freescale, ARM, and Intel. The ASIC providers build reference designs that are improved upon by many competing switch hardware providers, including Dell, NEC, Quanta, Celestica, DNI, and many others. We have competition at every layer below the protocol stack. What’s needed is an open, broadly used, broadly invested-in networking stack. Credible options are out there, with Quagga perhaps the strongest contender thus far; XORP is another with many users. But there still isn’t a protocol stack with the broad use and critical mass that Linux, in its many distributions, has achieved in the server world.
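
As a concrete illustration of what such a stack handles, here is a minimal sketch that emits a Quagga-style bgpd configuration for a single BGP peering. The AS numbers, router ID, neighbor address, and prefix are invented placeholders; this is only meant to show the kind of configuration an open routing stack manages, not a recommended setup.

```python
# Illustrative only: generate a minimal Quagga-style bgpd.conf fragment for one
# BGP peering. The AS numbers, router ID, neighbor address, and prefix below are
# invented placeholders, not a configuration recommendation.

def bgpd_conf(local_asn, router_id, neighbors, networks):
    """Return a bgpd.conf fragment announcing `networks` to each neighbor."""
    lines = [f"router bgp {local_asn}",
             f" bgp router-id {router_id}"]
    for addr, remote_asn in neighbors.items():
        lines.append(f" neighbor {addr} remote-as {remote_asn}")
    for prefix in networks:
        lines.append(f" network {prefix}")
    return "\n".join(lines) + "\n"

print(bgpd_conf(local_asn=65001,
                router_id="10.0.0.1",
                neighbors={"10.0.0.2": 65002},
                networks=["192.168.10.0/24"]))
```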

Two recent additions to the community are 1) the Open Networking Foundation, and 2) the Open Source Routing Forum. More on each:

Open Networking Foundation:

Founded in 2011 by Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo!, the Open Networking Foundation (ONF) is a nonprofit organization whose goal is to rethink networking and to bring standards and solutions to market quickly and collaboratively. ONF will accelerate the delivery and use of Software-Defined Networking (SDN) standards and foster a vibrant market of products, services, applications, customers, and users.
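
To make the SDN idea concrete: the central notion is separating the control plane (a logically centralized controller that decides forwarding behavior) from the data plane (switches that simply apply match/action rules). The toy sketch below, with invented FlowRule and Switch classes, illustrates that split; it is not the OpenFlow wire protocol or any ONF specification.

```python
# Toy illustration of the SDN control/data-plane split. This is not the OpenFlow
# wire protocol; FlowRule and Switch are invented stand-ins for the concept of a
# controller pushing match/action rules into switch flow tables.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass(frozen=True)
class FlowRule:
    match_dst: str   # destination address this rule matches
    out_port: int    # port to forward matching packets to
    priority: int = 0


@dataclass
class Switch:
    name: str
    flow_table: List[FlowRule] = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        """Control plane: add a rule, keeping highest priority first."""
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: -r.priority)

    def forward(self, dst: str) -> Optional[int]:
        """Data plane: return the output port for dst, or None on a table miss."""
        for rule in self.flow_table:
            if dst == rule.match_dst:
                return rule.out_port
        return None  # a real switch would punt table misses to the controller


# The "controller" computes policy and installs it on the switches it manages.
s1 = Switch("s1")
s1.install(FlowRule(match_dst="10.0.0.2", out_port=3, priority=10))
assert s1.forward("10.0.0.2") == 3
assert s1.forward("10.9.9.9") is None
```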

Open Source Routing Forum:

OSR will establish a “platform” supporting the committers and communities behind the open source routing protocols, helping deliver a mainstream, stable code base, beginning with Quagga, the most popular open source routing code base. This “platform” will provide capabilities such as regression testing, performance/scale testing, bug analysis, and more. With a stable, qualified routing code base and 24×7 support, service providers, academia, startup equipment vendors, and independent developers can accelerate existing projects such as ALTO, OpenFlow, and software-defined networking, and germinate new projects within service providers at lower cost.

Want to be part of re-engineering datacenter networks at Amazon?

I need more help on a project I’m driving at Amazon where we continue to make big changes in our datacenter network to improve customer experience and drive down costs while, at the same time, deploying more gear into production each day than all of Amazon.com used back in 2000. It’s an exciting time and we have big changes happening in networking. If you enjoy and have experience in operating systems, networking protocol stacks, or embedded systems and you would like to work on one of the biggest networks in the world, send me your resume (james@amazon.com).

–jrh

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

2 comments on “Changes in Networking Systems”
  1. Thanks for the thoughtful comment, Craig. Generally, I agree with you that the needs of general-purpose, high-scale cloud networking and HPC are converging. HPC continues to be on the outer edge, especially with respect to low-latency requirements, but there is no question that the workloads are evolving to look more similar over time.

    You are right that cabling in a folded Clos is substantial, and that incremental growth without massive recabling is a problem. However, at this point cabling is not the biggest cost, nor are cabling space and weight limiting factors. But you are right to worry about it: cabling and physical connector costs are increasing relative to the rest of the equipment in the network, and if nothing changes they will eventually dominate. (A rough link-count sketch for the folded Clos and HyperX topologies follows the comments below.)

    –jrh

  2. Craig Dunwoody says:

    Hello James,

    Thanks for these links. It’s great to see all kinds of new thinking in networking. One idea that I think is potentially interesting for the future is to continue to use standard Ethernet links and off-the-shelf Broadcom/Intel/Marvell high-radix Ethernet switch ASICs, but in topologies that facilitate physically integrating network switching hardware together with processing and storage hardware, thereby completely eliminating separate network-switch boxes. One could then build out an entire high-scale datacenter with a shared-nothing hardware architecture using a single integrated networking+processing+storage server/node module design as the sole unit of replication/scalability.
    I have seen a number of approaches based on this idea, but one that I think is particularly elegant is described in a Supercomputing 2009 paper by HP Labs folks:
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.158.3130&rep=rep1&type=pdf

    The authors mention exascale-supercomputer applications, but I think that this approach is more broadly useful, as I see less and less difference in networking requirements between conventional-HPC systems and high-scale cloud infrastructure. The paper describes and analyzes a particular network topology ("HyperX") and associated routing algorithms, then suggests an efficient and highly scalable hardware-packaging scheme. In Figures 9 and 10 they show an example with 256 interconnected cabinets containing a total of 128K servers, using a total of only 7,680 cables between cabinets and only 15 unique cable lengths. I like the simple cabling plan that facilitates incremental addition of cabinets without disrupting/changing existing cable connections, and the avoidance of cabling congestion points that you get with other topologies like folded-Clos.

    At very high scale this kind of approach will get much easier to implement after optical backplane and cable link technologies mature and become much more cost-efficient than they are today, but at much smaller scale I wonder if this could be beneficial even with current electrical-link technology. Of course, even if using completely off-the-shelf networking, processing, and storage chips, custom motherboards and other structures move away from mass-market volume economics of scale, but we already see cases like OpenCompute where people are getting to scales where it is worthwhile to do that in at least a limited way.

    It is also great to see more and more people trying out alternative network topologies, which as you noted are increasingly easy to experiment with using off-the-shelf 1RU-rackmount single-chip network switch boxes and improving Open Source networking-software infrastructure. I’m interested to learn more about what various groups are doing in this area, including any experiments with topologies like the HyperX described by the HP Labs group.
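
To put rough numbers behind the cabling discussion above, here is a back-of-envelope sketch. It counts switch-to-switch links for a standard k-ary fat-tree (a common folded-Clos construction) and for a simplified regular HyperX in which switches sit at the points of a multi-dimensional lattice and connect directly to every switch differing in exactly one coordinate. The radix and lattice dimensions are made-up examples, not the 256-cabinet configuration from the HP Labs paper.

```python
# Back-of-envelope link counts for two datacenter topologies discussed above.
# All parameters below are illustrative; they are not the configurations from
# the HyperX paper or any real deployment.
from itertools import product


def fat_tree_counts(k):
    """Switch and cable counts for a standard k-ary fat-tree (folded Clos).

    k is the switch radix (ports per switch) and must be even.
    """
    assert k % 2 == 0, "fat-tree radix must be even"
    hosts = k ** 3 // 4                 # k pods * (k/2 edge switches) * (k/2 hosts)
    switches = 5 * k ** 2 // 4          # k*(k/2) edge + k*(k/2) agg + (k/2)^2 core
    edge_agg = agg_core = k * (k // 2) * (k // 2)
    return {"hosts": hosts,
            "switches": switches,
            "switch_to_switch_cables": edge_agg + agg_core}


def hyperx_counts(dims):
    """Switch and link counts for a simplified regular HyperX.

    Switches occupy the points of a lattice with dims[i] switches in dimension i;
    each switch links directly to every switch differing in exactly one coordinate.
    """
    switches = list(product(*[range(s) for s in dims]))
    links = set()
    for a in switches:
        for dim, size in enumerate(dims):
            for v in range(size):
                if v != a[dim]:
                    b = a[:dim] + (v,) + a[dim + 1:]
                    links.add(frozenset((a, b)))
    return {"switches": len(switches),
            "switch_to_switch_links": len(links)}


if __name__ == "__main__":
    print("fat-tree k=48:", fat_tree_counts(48))    # 27648 hosts, 2880 switches, 55296 cables
    print("HyperX 4x4x4:", hyperx_counts((4, 4, 4)))  # 64 switches, 288 links
```

Raw link counts like these say nothing about cable length or routing; much of the HyperX paper’s contribution is in the packaging scheme and the small number of unique cable lengths rather than the counts themselves.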
