2019 ACM SIGCOMM Test of Time Award

Back in late 2008 and early 2009, I had a few projects underway. One was investigating the impact of high temperatures on server longevity and fault rates. We know what it costs to keep a data center cool, but what I wanted to know was what it would cost if we didn't keep it cool. I wanted to understand the optimization point between server faults costing too much at the high-temperature end of the spectrum and cooling costing too much at the low end.
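To make the shape of that optimization concrete, here's a toy sketch in Python. The cost curves and constants are made up purely for illustration (producing the real curves was the point of the study); the sketch just sweeps candidate temperatures and reports where the combined cost bottoms out:

```python
# Toy model of the data center temperature trade-off: fault-related
# costs rise with temperature while cooling costs fall. Every constant
# below is an illustrative placeholder, not measured data.

def fault_cost(temp_c):
    # Assume fault costs grow exponentially with inlet temperature.
    return 10 * 1.08 ** (temp_c - 20)

def cooling_cost(temp_c):
    # Assume cooling costs fall as the allowed inlet temperature rises.
    return 400 / (temp_c - 10)

def total_cost(temp_c):
    return fault_cost(temp_c) + cooling_cost(temp_c)

# Sweep candidate inlet temperatures and pick the cheapest.
best = min(range(18, 45), key=total_cost)
print(f"cheapest point in this toy model: {best}C "
      f"(total cost {total_cost(best):.1f})")
```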

Another project I had on the go was attempting to quantify the power lost to increased fan speed and semiconductor leakage current as server air approach temperatures are increased. At the time, many in the industry believed that higher data center temperatures would actually cause servers to consume more power. And, like many industry beliefs that we're all used to hearing and repeating, it's true, but the impact is far smaller than predicted. These costs don't come close to swallowing the power gains from reduced data center cooling. I found there is a steady curve of increased server power consumption as temperature is increased and a steady curve of reduced data center cooling power. What I was after was the optimization point and an understanding of what influenced it.
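The server-side penalty comes mostly from two effects: fan power, which by the fan affinity laws grows roughly with the cube of fan speed, and semiconductor leakage, which rises with die temperature. Here's a minimal sketch with assumed coefficients (none of these constants come from the study) showing why the penalty is real but gentle at moderate temperatures:

```python
# Rough model of added server power as inlet temperature rises.
# Fan power scales roughly with the cube of fan speed (fan affinity
# laws); leakage is modeled as a mild linear increase. All constants
# are assumed for illustration only.

def server_power_w(inlet_c, base_w=300.0):
    fan_speed = 1.0 + 0.02 * max(0, inlet_c - 25)  # assumed fan ramp
    fan_w = 15.0 * fan_speed ** 3                  # affinity law: ~speed^3
    leakage_w = 5.0 + 0.4 * max(0, inlet_c - 25)   # assumed leakage slope
    return base_w + fan_w + leakage_w

for t in (25, 30, 35, 40):
    print(f"{t}C inlet: {server_power_w(t):.1f}W")
```

In this toy model, taking the inlet from 25C to 40C adds only about 7% to server power, which is the shape of the argument: the penalty exists, but it doesn't erase the cooling savings.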

I found both of these projects interesting, and some of what I learned is still useful to me today. But, by far, the most interesting project I was involved with at that time was the VL2 networking effort at Microsoft Research. I felt super lucky to be part of that project, partly because Albert Greenberg was leading the effort. Albert is a legend of the networking world but also a humble researcher who just loves to explain how things work and why. Spending time and solving problems with Albert and the rest of the VL2 team was both fun and educational.

The other reason I was excited about being part of the VL2 effort is that it had the potential to take networking down the same path of Moore's Law price decreases that had led to so much innovation in the server world. At the time, networking was still effectively back in the mainframe era, where a single company designed the central logic ASIC, the boards, the chassis, and the entire software stack above.

An example chassis switch popular at the time was the Cisco Nexus 7000. A fully configured Nexus 7000 required 8 circuits of 5kW each. Admittedly, some of this 40kW is to provide power redundancy but, with only 120 ports, that's more power than 4 racks of contemporary servers. Absolutely silly levels of power are consumed and, of course, you can't possibly dissipate that much power without a lot of very expensive components, all wrapped up into a vertically integrated package of astronomical cost. I lovingly referred to this and similar gear as the "SUV of the Data Center" in my 2010 Stanford talk "Data Center Networks are in my Way."
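The back-of-envelope arithmetic behind that comparison (the 40kW comes from the 8 x 5kW circuits above; the roughly 10kW per rack of servers is my assumption here):

```python
# Back-of-envelope on the chassis switch numbers above.
circuits, kw_per_circuit, ports = 8, 5, 120
total_kw = circuits * kw_per_circuit  # 40 kW provisioned
print(f"watts per port: {total_kw * 1000 / ports:.0f}")  # ~333 W/port

rack_kw = 10  # assumed draw of a contemporary server rack (my estimate)
print(f"equivalent server racks: {total_kw / rack_kw:.0f}")  # ~4 racks
```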

The VL2 research project was super-educational for me and really influenced my thinking about what's possible with commodity networking equipment. The final result of this project was the 2009 SIGCOMM paper VL2: A Scalable and Flexible Data Center Network. It's hard to believe that it's been 10 years since we published this paper, but I was reminded of it last week when I heard the paper had won the 2019 SIGCOMM Test of Time Award. From the Association for Computing Machinery SIGCOMM award:

The ACM SIGCOMM Test of Time Award recognizes papers published 10 to 12 years in the past in Computer Communication Review or any SIGCOMM sponsored or co-sponsored conference that is deemed to be an outstanding paper whose contents are still a vibrant and useful contribution today. The award is given annually and consists of a custom glass award. The paper is chosen by an award committee appointed by the SIGCOMM Award Committee Chair.
The past recipients of the ACM SIGCOMM Test of Time Paper Award are:
2019:

“VL2: A Scalable and Flexible Data Center Network” by Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, Dave A. Maltz, Parveen Patel, and Sudipta Sengupta. SIGCOMM 2009

This paper articulated the core design principles that have become the foundation for modern datacenter networks: scalable Clos topologies, randomized load-balanced routing, and virtual networks constructed by decoupling endpoint addresses and locations. By convincingly arguing for these principles, and providing one of the first glimpses into real-world datacenter network traffic characteristics, this paper has had enduring impact on both the practice of datacenter network design and the large body of research on the topic that has followed over the last decade. 
 
The 2019 award paper was selected by a committee composed of: Hitesh Ballani (MSR, chair), Mark Handley (UCL), Z. Morley Mao (UMich), and Mohammadreza Alizadeh Attar (MIT).
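For anyone who hasn't read the paper, here's a toy sketch of two of the principles the citation calls out: a directory that decouples an application's stable address (AA) from its current location (LA), and Valiant load balancing, which spreads traffic by bouncing each flow off a randomly chosen intermediate switch. The names and data structures below are simplified stand-ins, not the paper's actual protocol:

```python
import random

# Toy directory service: application addresses (AAs) stay stable while
# locator addresses (LAs) change as servers and VMs move. This is a
# simplified illustration, not VL2's actual directory system.
directory = {"10.0.0.5": "la-tor-17", "10.0.0.9": "la-tor-42"}

# Intermediate switches at the top of the Clos topology.
intermediate_switches = ["int-1", "int-2", "int-3", "int-4"]

def route_flow(src_aa, dst_aa):
    dst_la = directory[dst_aa]                  # AA -> LA indirection
    via = random.choice(intermediate_switches)  # Valiant load balancing
    return f"{src_aa} -> {via} -> {dst_la}"

print(route_flow("10.0.0.5", "10.0.0.9"))
```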

Congratulations to Albert Greenberg, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, Dave Maltz, Parveen Patel, and Sudipta Sengupta. I'm sure that the VL2 effort has contributed at least a few of the nails in the coffin of expensive, power-wasting, and vertically integrated chassis switches.

3 comments on "2019 ACM SIGCOMM Test of Time Award"
  1. Albert Greenberg says:

    Happy and super lucky to have worked with you as well, James.

  2. Simon Leinen says:

    Congratulations to all involved. This award is well deserved. A paper with huge influence both in the research world and on the way data center networks are built.

  3. Congratulations to you too, Jim. Thanks for the explanation. I knew nothing about this topic earlier. I learnt something useful today.
