Structure 2008: Put Cloud Computing to Work

Alex Mallet and Viraj Mody of the Windows Live Mesh team took great notes at the Structure ’08 (Put Cloud Computing to Work) conference; their notes are appended below.

Some pre-reading information was made available to all attendees as well: Refresh the Net: Why the Internet needs a Makeover?

Overall

Interesting mix of attendees from companies in all areas of “cloud computing”

The quality of the presentations and panels was somewhat uneven

Talks were not very technical

Amazon is the clear leader in mindshare; MS isn’t even on the board

Lots of speculation about how software-as-a-service, platform-as-a-service, everything-as-a-service is going to play out: who the users will be, how to make money, whether there will be cloud computing standards etc

5 min Nick Carr video [author of “The Big Switch”]

Drew symbolic link between BillG retiring and Structure ’08, the first “cloud computing” conference, being the same week, marking the shift of computing from the desktop to the datacenter

Generic pontificating on the coming “age of cloud computing”

“The Platform Revolution: a look into disruptive technologies”, Jonathan Yarmis, research analyst

Enterprises always lag behind consumers in adoption of new technology, and IT is powerless to stop users from adopting new technology

4 big tech trends: social networks, mobility, cloud computing, alternative business models [e.g. ad-supported]

Tech trends mutually reinforcing: mobility leads to more social networking applications, being able to access data/apps in the cloud leads to more mobility

Mobile is platform for next-gen and emerging markets: 1.4 billion devices per year, 20% device growth per year, average device lifetime 21 months; opens up market to new users and new uses

Claim: “single converged device will never exist”; cloud computing enables independence of device and location

Stream computing: rate of data creation is growing at 50-500% per year, and it’s becoming increasingly important to be able to quickly process the data, determine what’s interesting and discard the rest

“Economic value of peer relationships hasn’t been realized yet” – Facebook Beacon was a good idea, but poorly realized

“Virtualization and Cloud Computing”, with Mendel Rosenblum, VMware co-founder

Virtualization can/should be used to decouple software from hardware even in the datacenter

Virtualization is cloud-computing enabler: can decide whether to run your own machines, or use somebody else’s, without having to rewrite everything

Coming “age of multicore” makes virtualization even more important/useful

Smart software that figures out how to distribute VMs over physical hardware isn’t a commodity yet
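
Placement is essentially a bin-packing problem. A minimal first-fit-decreasing sketch of the idea (illustrative only, not VMware’s algorithm; real placement weighs much more than memory):

```python
# Illustrative only, not VMware's algorithm: greedy first-fit-decreasing
# placement of VMs (by memory demand) onto identical hosts. Real placement
# also weighs CPU, network, affinity rules, and live-migration cost.

def place_vms(vm_mem_gb, host_capacity_gb):
    hosts = []  # each host: [free_gb, [assigned vm sizes]]
    for vm in sorted(vm_mem_gb, reverse=True):  # biggest VMs first
        for host in hosts:
            if host[0] >= vm:
                host[0] -= vm
                host[1].append(vm)
                break
        else:  # nothing fits: bring up a new host
            hosts.append([host_capacity_gb - vm, [vm]])
    return [h[1] for h in hosts]

print(place_vms([8, 2, 4, 6, 4, 2], host_capacity_gb=16))
# -> [[8, 6, 2], [4, 4, 2]]: six VMs packed onto two 16GB hosts
```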

VMware is working on merging the various virtualization layers: machine, storage, networks [e.g. VLANs]

HW support for virtualization is mostly being done for server-class machines [?]

Rosenblum doesn’t think moving workloads from the datacenter to edge machines, to take advantage of spare cycles, will ever take off – it’s just too much of a pain to try to harness those spare cycles

Single-machine hypervisor is becoming a commodity, so VMware is moving to managing the whole datacenter, to stay ahead of the competition

Keynote, Werner Vogels, Amazon CTO:

Mostly a pitch for Amazon’s web services: EC2, S3, SQS, SimpleDB

Gave example of Animoto, a company that merges music and photos to create videos and runs no servers of its own: they had 25K users total, launched a Facebook app, and went to adding 25K users/hour; they handled it by growing from 50 EC2 instances to 3,000 in two days
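
For flavor, a minimal sketch of growing an EC2 fleet programmatically, using today’s boto3 SDK rather than anything Animoto had in 2008 (the AMI ID and instance type are placeholders):

```python
# Sketch only (boto3, not Animoto's 2008 tooling): grow an EC2 fleet by
# launching more instances from a prebuilt machine image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def add_capacity(ami_id, count, instance_type="m5.large"):
    resp = ec2.run_instances(
        ImageId=ami_id,        # placeholder AMI with the app baked in
        MinCount=count,        # fail unless all `count` instances can start
        MaxCount=count,
        InstanceType=instance_type,
    )
    return [i["InstanceId"] for i in resp["Instances"]]

# Going from 50 to 3,000 instances is repeated calls like this, driven by
# load metrics, plus a load balancer to put new machines into rotation.
ids = add_capacity("ami-0123456789abcdef0", count=50)
```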

Currently 370K registered AWS developers

Bandwidth consumed by AWS is bigger than bandwidth consumed by Amazon e-commerce services

Shift to service-oriented architecture occurred as a result of being approached by Target in 2001/2002, asking whether Amazon could run their e-commerce for them. Amazon realized that its architecture at the time wouldn’t scale, so they re-engineered it

Single Amazon page can depend on hundreds of other services

Big barrier between developing a web app and operating it at scale: load balancing, hardware mgmt, routing, storage management etc. Called this the “undifferentiated heavy lifting” that needs to be done to even get in the game

Claim: typical company spends 70% effort/money on “undifferentiated heavy lifting” and 30% on differentiated value creation; AWS is intended to allow companies to focus much more on differentiated value creation

SmugMug has been at the forefront of companies relying on AWS; they currently store 600TB of photos in S3, and have launched an entirely new product, SmugVault, based purely on the existence of S3 => AWS is not just a replacement for existing infrastructure, but an enabler of new businesses

In 2 years, cloud computing will be evaluated along 5 axes: security, availability, scalability, performance, cost-effectiveness

Really plugged the pay-as-you-go model

“Working the Cloud: next-gen infrastructure for new entrepreneurs” panel

Q: is lock-in going to be a problem, i.e. how easy will it be to move an app from one cloud computing platform to another?

o A: Strong desire for standards that will make it easy to port apps, but not there yet

o A: To really use the cloud, you need to embed assumptions about it in your code; even bare-metal clouds require intelligence, like scripts to spin up new EC2 instances, so lock-in is a real concern [see the sketch after this Q&A]

o Side thread: the Google person on the panel claimed that using Google App Engine doesn’t lock in developers because the GAE APIs are well-documented; he was promptly verbally mugged by just about everyone else on the panel, who pointed out things like GAE’s use of BigTable making it difficult to extract data or replace the underlying storage layer etc.

o Prediction: there will be convergence to standards, and choice will come down to whether to use a generic cloud or a more specialized/efficient cloud, e.g. one targeted at the medical information sector, with features for HIPAA compliance
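
One common hedge against this kind of lock-in is to keep provider-specific assumptions behind a narrow interface, so a migration means writing one new adapter rather than rewriting the app; a hypothetical sketch:

```python
# Hypothetical sketch: keep provider-specific code behind a narrow interface,
# so moving clouds means writing one new adapter, not rewriting the app.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3Store(BlobStore):
    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

# Application code sees only BlobStore; the panel's point stands, though --
# data gravity (terabytes already in BigTable or S3) still makes moves costly.
```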

Need new licensing models for cloud computing, to deal with the dynamic increase/decrease in number of application instances/virtual machines as load changes

Tidbit: Google has geographically distributed data centers, and geo-replicates [some] data

Q: will we be able to use our old “toys” [APIs, programming models etc] in the cloud?

o A: Yes, have to be able to, otherwise people won’t adopt it

o A: Yes, just have to be smart about replacing the plumbing underneath the various APIs

o A: Yes, but current API frameworks are lacking some semantics that become important in cloud computing, like ways to specify how many times an object should be replicated, whether it’s ok to lazily replicate some data etc
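
A sketch of what such semantics might look like as first-class API parameters (entirely hypothetical; no real framework implied):

```python
# Entirely hypothetical API: replication semantics as first-class parameters.
from dataclasses import dataclass

@dataclass(frozen=True)
class WritePolicy:
    replicas: int = 3       # total copies to keep
    sync_replicas: int = 1  # copies that must be durable before put() returns

class ReplicatedStore:
    def __init__(self, nodes):
        self.nodes = nodes  # plain dicts standing in for storage servers

    def put(self, key, value, policy=WritePolicy()):
        # write the first sync_replicas copies before returning...
        for node in self.nodes[:policy.sync_replicas]:
            node[key] = value
        # ...and the rest "lazily" (here inline; really, in the background)
        for node in self.nodes[policy.sync_replicas:policy.replicas]:
            node[key] = value

store = ReplicatedStore([{}, {}, {}, {}, {}])
store.put("photo-1", b"...", WritePolicy(replicas=5, sync_replicas=1))
```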

Mini-note, “Optical networking”, Drew Perkins

Video is, by far, the largest consumer of bandwidth on the internet

Cost of content is disproportionate to size: a 4MB song costs $1, a 200MB TV show episode costs $2, and a 1.5GB movie costs $3-4, i.e. the price per megabyte falls steeply as content gets bigger
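
The per-megabyte arithmetic behind that observation, pricing the movie at the midpoint of the $3-4 range:

```python
# Price per MB from the figures above (movie priced at the $3-4 midpoint)
for name, size_mb, price in [("song", 4, 1.00),
                             ("TV episode", 200, 2.00),
                             ("movie", 1536, 3.50)]:
    print(f"{name}: ${price / size_mb:.4f}/MB")
# song: $0.2500/MB; TV episode: $0.0100/MB; movie: $0.0023/MB
# ~100x cheaper per byte for the movie, while delivery cost per byte is flat
```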

Photonic integrated circuits that can be used to build 100 Gbit/s networks are needed to meet future bandwidth requirements: they use less power and require fewer network devices

“Race to the next database” panel

Quite poorly organized: panelists each got to give an [uninformative] infomercial for their company, and there was very little time for actual questions and discussion

Aster Data Systems provides the back-end data warehouse and analytics system for MySpace: 1 billion impressions/day and 1TB of new data per day; new data is loaded into the 100-node Aster cluster every hour and needs to be available for the ad analytics engine to decide which ads to show

SQLStream has built a data-stream-processing product that collapses the usual processing stages [data staging, cleaning, loading etc] into a pipeline that continuously produces results for “standing” queries; useful for real-time analytics
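
A toy illustration of a “standing” query (a sketch, not SQLStream’s product): results are emitted continuously as events arrive, instead of after a staging/cleaning/loading batch:

```python
# Toy "standing" query, not SQLStream's product: a rolling average that is
# re-emitted on every arriving event, rather than computed after a batch load.
from collections import deque

def rolling_average(events, window=100):
    buf = deque(maxlen=window)  # only the last `window` values are retained
    for value in events:
        buf.append(value)
        yield sum(buf) / len(buf)

for avg in rolling_average([3, 5, 7, 9], window=2):
    print(avg)  # 3.0, 4.0, 6.0, 8.0 -- one result per input event
```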

Web causes disruption to the traditional DB model because of [10x] larger data volumes, the need for high interactivity/turnaround, and the need to scale out instead of up. For example, Greenplum is building a 20PB data warehouse for one customer.

Can’t rely on all the data being in a single store, so need to be able to do federated/distributed queries

“MS Datacenters”

Presentation centered on MS plans for datacenters-in-a-box

Datacenters-in-a-container are long-term strategy for MS, not just transient response to high demand and lack of space

Container blocks have lower reliability than traditional datacenters, so applications need to be geo-distributed and redundant to handle downtime

“End of boxed software”, Parker Harris, co-founder of Salesforce.com

Origins of salesforce.com: modeled on consumer internet sites like amazon.com and ebay.com

Transition from client/server site to a platform (force.com): the first instinct is to build a platform up front, but then you lose touch with why you’re building it. Instead, as they built out the site, they abstracted away components and realized it could become a platform. Revenue comes from the site; the platform is a bonus.

Initially scaled by buying bigger [Sun] boxes, i.e. scaled up, not out, and ran into lots of complexity. Unclear whether that’s still the case or whether they’ve re-architected.

“Scaling to satiate demand” panel

Q: “When did you first realize your architecture was broken and couldn’t scale?”

o A: When the site started to get slow; eBay: after massive site outages

Q: “How do you handle running code supplied by other people on your servers?”

o A: Compartmentalize, i.e. isolate apps; have mgmt infrastructure and tooling to monitor and control uploaded apps; provide developers with fixed APIs and tools so you can control what they do

Q: “How do Facebook and Slide [which builds Facebook apps] figure out where the problems are if Slide starts failing?”

o A: Lots of real-time metrics; ops folks from both companies are in IM contact and do co-operative troubleshooting

Q: “How should you handle PR around outages?”

o A: Be transparent; communicate; set realistic timelines for when site will be back up; set expectations wrt “bakedness” of features

Beware of retrying failed operations too soon, since retries may prevent an overloaded system from ever coming back up
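
The standard defense is exponential backoff with jitter, so clients don’t hammer a recovering system in lockstep; a minimal sketch:

```python
# Minimal sketch: exponential backoff with "full jitter", so synchronized
# retries don't keep an overloaded system from ever recovering.
import random
import time

def call_with_backoff(op, max_attempts=6, base=0.5, cap=30.0):
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            # sleep a random amount up to an exponentially growing ceiling
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

# usage: call_with_backoff(lambda: flaky_service.fetch())  # hypothetical op
```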

eBay: each app is instrumented with the same logging infrastructure, and a real-time OLAP engine analyzes the logs and does correlation to try to find trouble spots

Facebook and Meebo both utilized their user base to translate their sites into multiple languages

Need to know which bits of the system you can turn off if you run into trouble

Biggest challenge is scaling features that involve many-to-many links between users; scaling a single user + their data is easy

Keep monitoring: there are always scale issues to find, even without problems/outages

Slide: “Firefox 3 broke all of our apps”

Facebook has > 10K servers

Mini-note, “Creating fair bandwidth usage on the Internet”, Dr. Lawrence Roberts, leader of the original ARPANET team

P2P leads to unfair usage: people not using P2P get less; the 5% of users who run P2P receive 80% of capacity

Deep packet inspection catches 75% of p2p traffic, but isn’t effective in creating fairness

Anagran does flow-behavior mgmt: observe per-user flow behavior & utilization and then equalize. Equalization is done in memory, on networking infrastructure [routers etc], and at the user level instead of the flow level
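
Per-user equalization amounts to a max-min fair allocation of link capacity; an illustrative water-filling computation (not Anagran’s implementation):

```python
# Illustrative max-min fair allocation (not Anagran's implementation): no user
# gets more than it demands, and spare capacity is split evenly among the rest.

def max_min_fair(demands, capacity):
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    while active and capacity > 1e-9:
        share = capacity / len(active)
        done = [i for i in active if demands[i] - alloc[i] <= share]
        if not done:  # nobody can be fully satisfied: split evenly and stop
            for i in active:
                alloc[i] += share
            break
        for i in done:  # satisfy the small demands, reclaim the leftovers
            capacity -= demands[i] - alloc[i]
            alloc[i] = demands[i]
        active = [i for i in active if i not in done]
    return alloc

# One P2P user demanding 80 units and four web users demanding 5 each,
# sharing a 40-unit link: the P2P user is capped instead of taking 80%.
print(max_min_fair([80, 5, 5, 5, 5], capacity=40))  # [20.0, 5.0, 5.0, 5.0, 5.0]
```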

Mini-note, “Cloud computing and the mid-market”, Zach Nelson, CEO of NetSuite

Mid-market is the last great business applications opportunity

Cloud computing makes it economical to reach the “Fortune 5 million”

Cloud computing still doesn’t solve problem of application integration

Consulting services industry is next to be transformed by cloud computing

Mini-note, “Electricity use in datacenters”, Dr. Jonathan Koomey

Site Uptime Network, an organization of datacenter operators and designers, did a study of 19 datacenters from 1999-2006:

o Floor area remained relatively constant

o Power density went from 23 W/sq ft to 35 W/sq ft

In 2000, datacenters used 0.5% of world’s electricity; in 2005, used 1%.

Cooling and power distribution are largest electricity consumers; servers are second-largest; storage and networking equipment accounts for a small fraction

Asia-Pacific region’s use of power is increasing the fastest, over 25% growth per year

Lots of inefficiencies in facility design: wrong cost metrics [sq feet versus kW], different budgets and costs borne by different orgs [facilities vs IT], multiple safety factors piled on top of each other

Designed Eco-Rack, which, with only a few months of work, reduces power consumption on a normalized workload by 16-18%

Forecast: datacenter electricity consumption will grow by 76% from 2005 to 2010, maybe a bit less with virtualization
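
For scale, the growth rates implied by the figures above (treating the share-of-world-electricity numbers as a proxy for consumption):

```python
# Implied compound annual growth rates from the figures above
historic = (1.0 / 0.5) ** (1 / 5) - 1  # share doubled, 2000 -> 2005
forecast = 1.76 ** (1 / 5) - 1         # +76% consumption, 2005 -> 2010
print(f"{historic:.1%}/yr historic, {forecast:.1%}/yr forecast")
# -> 14.9%/yr historic, 12.0%/yr forecast
```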

“VC investment in cloud computing infrastructure” panel

Overall thesis of panel was that VCs are not investing in infrastructure

VCs disagreed with the panel’s premise, saying it depends on the definition of infrastructure: they are investing in infrastructure, but it’s moving higher in the stack, like Heroku [?]

HW infrastructure requires serious investment, large teams, and long time-frame – not a good fit with VC investment model

Any companies that want to build datacenters or commodity storage and compute services are not a good investment – there are established, large competitors, and it’s very expensive to compete in that space

Infrastructure needed for really large scale [like a 400 Gbit/sec switch] has a pretty small market, which makes it hard to justify the investment. If there’s a small market, the buyers all know they’re the only buyers and exert large downward pressure on price, which makes it hard for the company to stay in business

Quote: “any company that’s doing something worthwhile, and building something defensible, will take at least 24 months to develop”

James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh | blog:http://perspectives.mvdirona.com

One comment on “Structure 2008: Put Cloud Computing to Work”
  1. These conference synopses are tremendously useful; we’re in the process of moving a next-generation visual discovery platform from servers to the cloud (SSDS, Live Mesh) and it’s very helpful to see how this infrastructural ecosystem and practices are evolving.

    Our application has to be ready to scale right out of the box, and we want to avoid dealing with the frantic infrastructure build-up scenario as the traffic rises…
