Wednesday, May 23, 2012

Urs Hölzle delivered the keynote at the 2012 Open Networking Summit, focusing on Software Defined Networking in the wide area network. Urs leads the Technical Infrastructure group at Google, where he is Senior VP and Technical Fellow. Software Defined Networking (SDN) is the central management of network routing decisions rather than depending upon distributed routing algorithms running semi-autonomously on each router. Essentially, what is playing out in the networking world is a replay of what we have seen in the server world across many dimensions. The dimension central to the SDN discussion is this: a datacenter full of 10k to 50k servers is not managed server-by-server by an administrator, and the nodes making up the networking fabric shouldn't be either.

 

The key observations behind SDN are: 1) if the entire system is under single administrative control, central routing control is possible; 2) at the scale of a single administrative domain, central control of network routing decisions is practical; and 3) central routing control brings many advantages, including faster convergence on failure, priority-based routing decisions when resource constrained, and application-aware routing, and it allows the same software system that manages application deployment to manage network configuration.
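To make the central-control idea concrete, here is a toy sketch (my own illustration, not Google's implementation): a controller that knows the full topology computes forwarding tables for every switch in one place, instead of each router converging on routes via a distributed protocol. The topology and node names are made up.

```python
import heapq

def shortest_path_tree(graph, src):
    """Dijkstra from src; returns {destination: first_hop} for one switch."""
    dist = {src: 0}
    first_hop = {}
    pq = [(0, src, None)]
    visited = set()
    while pq:
        d, node, hop = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            first_hop[node] = hop
        for nbr, w in graph[node].items():
            nd = d + w
            if nbr not in visited and nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                # the first hop is nbr itself when leaving src, else inherited
                heapq.heappush(pq, (nd, nbr, nbr if node == src else hop))
    return first_hop

def compute_all_tables(graph):
    """The controller, holding the whole topology, computes every table at once."""
    return {sw: shortest_path_tree(graph, sw) for sw in graph}

# Hypothetical four-switch fabric with link weights
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
tables = compute_all_tables(topology)
```

Because one process computes all tables from one consistent topology snapshot, there is no transient disagreement between switches while a distributed protocol converges, which is where the faster-convergence claim comes from.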

 

In Hölzle's talk, he motivated SDN by first talking about WAN economics:

·         Cost per bit/sec delivered should go down with scale rather than up (consider analogy in compute and storage)

·         However, cost/bit doesn’t naturally decrease with size due to:

o   Quadratic complexity in pairwise interactions

o   Manual management and configuration of individual elements

o   Complexity of automation due to non-standard vendor configuration APIs

·         Solution: Manage the WAN as a fabric rather than as a collection of individual boxes

·         Current equipment and protocols don’t support this:

o   Internet protocols are box-centric rather than fabric-centric

o   Little support for monitoring and operations

o   Optimized for “eventual consistency” in networking

o   Little baseline support for low-latency routing and fast failover

·         Advantages of central traffic engineering:

o   Better networking utilization with a global view

o   Converges faster to target optimum on failure

o   Allows more control and lets applications specify intent:

§  Deterministic behavior simplifies planning vs overprovisioning for worst case variability

o   Can mirror production event streams for testing, supporting faster innovation and robust software development

o   Controller uses modern server hardware (50x better performance)
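As a rough illustration of why the global view improves utilization (my own toy example, not the actual traffic engineering algorithm): a controller that sees the load on every link can steer each new flow onto the candidate path with the most headroom, where box-by-box shortest-path routing would pile both flows onto the same links and congest them. The topology, paths, and demands below are made up.

```python
# Toy global traffic engineering: capacity and current load for every link
capacity = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 10, ("C", "D"): 10}
load = {link: 0 for link in capacity}

# Two candidate A->D paths the controller may choose between (hypothetical)
paths = {
    "direct": [("A", "B"), ("B", "D")],
    "alternate": [("A", "C"), ("C", "D")],
}

def headroom(path):
    """Spare capacity on the most loaded link of the path."""
    return min(capacity[l] - load[l] for l in path)

def place(demand):
    """Global view: put the flow on the candidate path with the most headroom."""
    best = max(paths.values(), key=headroom)
    if headroom(best) < demand:
        return None  # would congest; a real TE system might preempt low priority
    for l in best:
        load[l] += demand
    return best

# Two 6-unit flows: the second is steered away from the now-loaded direct path
place(6)
place(6)
```

With pure shortest-path routing both 6-unit flows would land on the direct path (12 units on 10-unit links); the global view spreads them so no link exceeds 6.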

·         Testability matters:

o   A decentralized approach requires a full-scale test bed of the production network to test new traffic engineering features

o   A centralized approach can tap real production input to research new ideas and to test new implementations

·         SDN Testing Strategy:

o   Various logical modules enable testing in isolation

o   Virtual environment to experiment and test with the complete system end-to-end

§  Everything is real except the hardware

o   Allows use of tools to validate state across all devices after every update from central server

§  Enforce ‘make before break’ semantics

o   Able to simulate the entire backbone with real monitoring and alerts
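The 'make before break' idea above can be sketched in a few lines (my own simplification, with hypothetical table and validation structures): program the new path first, validate the resulting state, and only then remove the old rules, so the flow always has a forwarding entry and no packets are dropped mid-update.

```python
def migrate_flow(tables, flow, old_path, new_path, next_hops, validate):
    """Move `flow` from old_path to new_path without ever leaving a switch
    with no rule for it ('make before break').
    tables: {switch: {flow: next_hop}}; next_hops: {switch: next_hop} for new_path."""
    # Make: program every switch on the new path first
    for sw in new_path:
        tables[sw][flow] = next_hops[sw]
    if not validate(tables):
        # Roll back switches not shared with the old path; old path still works
        for sw in set(new_path) - set(old_path):
            tables[sw].pop(flow, None)
        return False
    # Break: only now remove rules on switches unique to the old path
    for sw in set(old_path) - set(new_path):
        tables[sw].pop(flow, None)
    return True

# Hypothetical migration of flow "f1" from A->B->D to A->C->D
tables = {"A": {"f1": "B"}, "B": {"f1": "D"}, "C": {}, "D": {}}
ok = migrate_flow(tables, "f1", ["A", "B"], ["A", "C"],
                  {"A": "C", "C": "D"}, lambda t: True)
```

The `validate` hook stands in for the state-validation tools mentioned above: check every device after the update, and roll back rather than break the old path if anything disagrees.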

·         Google is using custom networking equipment with 100s of ports of 10GigE

o   Dataplane runs on merchant silicon routing ASICs

o   Control plane runs on Linux hosted on custom hardware

o   Supports OpenFlow

o   Quagga BGP and ISIS stacks

o   Only supports the protocols in use at Google

·         OpenFlow Deployment History:

o   The OpenFlow deployment was done on the Google internal (non-customer facing) network

o   Phase I: Spring 2010

§  Install OpenFlow-controlled switches but make them look like regular routers

§  BGP/ISIS/OSPF now interface with the OpenFlow controller to program switch state

§  Installation procedure:

·         Pre-deploy gear at one site, take down 50% of bandwidth, perform upgrade, bring new equipment online and repeat with the remaining capacity

·         Repeat at other sites

o   Phase II: Mid 2011

§  Activate simple SDN without traffic engineering

§  Ramp traffic up on test network

§  Test transparent software rollouts

o   Phase III: Early 2012

§  All datacenter backbone traffic carried by new network

§  Rolled out central traffic engineering

·         Optimized routing based upon 7 application level priorities

·         Globally optimized flow placement

§  External copy scheduler works with the OpenFlow controller to implement deadline scheduling for large data copies
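A minimal sketch of priority-based allocation (my illustration; the flow names, the exact policy, and the proportional split within a class are assumptions, not details from the talk): serve higher-priority classes in full first, then divide whatever capacity remains among lower-priority flows in proportion to their demands.

```python
def allocate(capacity, flows):
    """flows: list of (name, priority, demand); lower number = higher priority.
    Serve priority classes in order; split any shortfall within a class
    proportionally to demand."""
    result = {}
    remaining = capacity
    by_prio = {}
    for name, prio, demand in flows:
        by_prio.setdefault(prio, []).append((name, demand))
    for prio in sorted(by_prio):
        group = by_prio[prio]
        total = sum(d for _, d in group)
        if total <= remaining:
            # Class fits: everyone gets full demand
            for name, d in group:
                result[name] = d
            remaining -= total
        else:
            # Class doesn't fit: proportional share of what's left
            for name, d in group:
                result[name] = remaining * d / total
            remaining = 0
    return result

# Hypothetical 10-unit link: interactive traffic wins, bulk copies share the rest
shares = allocate(10, [("interactive", 0, 4), ("copyA", 2, 8), ("copyB", 2, 4)])
```

With seven priority levels as described in the talk, the same loop simply runs over seven classes; the deadline-driven copy scheduler would sit above this, deciding how much demand each bulk copy presents and when.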

·         Google SDN Experience:

o   Much faster iteration: deployed production quality centralized traffic engineering in 2 months

§  Fewer devices to update

§  Much better testing prior to roll-out

o   Simplified high-fidelity test environment

o   No packet loss during upgrade

o   No capacity loss during upgrade

o   Most features don’t touch the switch

o   Higher network utilization

o   More stable

o   Unified view of entire network fabric (rather than router-by-router view)

o   Able to implement:

§  Traffic engineering with higher quality of service awareness and predictability

§  Latency, loss, bandwidth, and deadline sensitivity in routing decisions

o   Improved routing decisions:

§  Based upon a priori knowledge of network topology

§  Based upon L1 and L3 connectivity

o   Improved monitoring and alerts

·         SDN Challenges:

o   OpenFlow protocol barebones but good enough

o   Master election/control plane partition challenging to handle

o   What to leave on router and what to run centrally?

o   Flow programming can be slow for large networks

·         Conclusions:

o   OpenFlow is ready for real world use

o   SDN is ready for real world use

§  Enables rich feature deployment

§  Simplified network management

o   Google's datacenter WAN runs on OpenFlow

§  Largest production network at Google

§  Improved manageability

§  Lower cost

 

A video of Urs’ talk is available at: OpenFlow @ Google

 

James Hamilton
e: jrh@mvdirona.com
w: http://www.mvdirona.com
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

Saturday, May 26, 2012 5:02:31 PM (Pacific Standard Time, UTC-08:00)
Nice post James. I had some comments about SDN/OpenFlow in general. OpenFlow is standardizing the protocol between the centralized controller and the routers/switches so as to traffic engineer the network. The idea of traffic engineering and using it to guarantee Quality-of-Service (QoS) has been around for a very long time in the networking area. It has been applied in LAN, WAN, and Internet backbone contexts.

MPLS (Multi-Protocol Label Switching) was invented and standardized in the 1990s so as to enable setting up of explicit paths that are different from shortest paths along which link state routing protocols like OSPF/IS-IS route. MPLS introduces a label between layer 2 and layer 3 (so called layer 2.5) which routers use to determine which next hop to forward the packet to. A path setup phase, using protocols like CR-LDP or RSVP, is used to set up label mappings at each router for a Label Switched Path (LSP) before data is routed along the path.

MPLS mechanisms can be used to implement traffic engineering in the SDN context as well. The simplicity of looking up a single label in the routing table is a big advantage in favor of MPLS. OpenFlow advocates for matching patterns on the packet headers and that operation gets more difficult to implement at higher line speeds and as matching patterns get more complex (e.g., due to packet encapsulation or application needs). With MPLS, this pattern matching operation at every hop is reduced to a label matching operation.
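Sudipta's point about lookup cost can be illustrated with a toy comparison (my sketch, with made-up labels, header fields, and actions): an MPLS-style forwarder does one exact-match label lookup, while an OpenFlow-style table scans prioritized rules that may wildcard individual header fields.

```python
# MPLS-style forwarding: one exact-match probe on the label
label_table = {16: ("swap", 17, "portA"), 17: ("pop", None, "portB")}

def mpls_forward(label):
    return label_table[label]

# OpenFlow-style forwarding: prioritized rules matching header fields,
# with None acting as a wildcard (hypothetical fields and actions)
flow_table = [
    (10, {"dst": "10.0.0.1", "tcp_port": 80},   "portA"),
    (5,  {"dst": "10.0.0.1", "tcp_port": None}, "portB"),
    (0,  {"dst": None,       "tcp_port": None}, "drop"),
]

def openflow_forward(pkt):
    """Scan rules from highest to lowest priority; first full match wins."""
    for _, match, action in sorted(flow_table, key=lambda e: -e[0]):
        if all(v is None or pkt.get(k) == v for k, v in match.items()):
            return action
```

With distinct labels, the MPLS path is a constant-time dictionary probe; the OpenFlow-style scan grows with table size and with the number and complexity of match fields, which is the commenter's argument for reducing per-hop pattern matching to label matching.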

MPLS has mechanisms to disseminate link metrics in a distributed manner so as to facilitate routing decisions. SDN may not need those in the presence of centralized knowledge at the controller.

Centralized traffic engineering has been used in optical networks for a very long time. The IP routing layer can be thought of as an overlay over the optical switching layer for the Internet backbone. In an optical mesh network, a centralized management plane computes circuit switched paths and talks to individual optical switches to set up the respective cross-connect at each node. This interface is standardized by bodies like TMF (Telecommunications Management Forum) and is called "Northbound Interface". In principle, one could buy the centralized management plane from one vendor and the switches from another. Sounds similar to OpenFlow/SDN? Indeed it is.

In the late 1990s and early 2000s, I was working on distributed IP centric control plane architectures for optical networks -- what was in vogue then was to control them in a distributed manner. At that time, we borrowed ideas from the IP layer to move optical networks away from centralized control. Now it is time for the IP and higher layers to borrow ideas from the world of optical networks and take them towards centralized control!
Sunday, May 27, 2012 3:51:41 PM (Pacific Standard Time, UTC-08:00)
Hi Sudipta. Good hearing from you.

I agree that MPLS is in heavy use in most large private networks and at most telcos. In my view, MPLS is particularly interesting because it is both used heavily and broadly supported by commodity routing hardware.

Thanks for the note above.

--jrh

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.
