Friday, June 27, 2008

Alex Mallet and Viraj Mody of the Windows Live Mesh team took great notes at the Structure ’08 (Put Cloud Computing to Work) conference (appended below).

 

Some pre-reading information was made available to all attendees as well: Refresh the Net: Why the Internet needs a Makeover?

 

Overall

-          Interesting mix of attendees from companies in all areas of “cloud computing”

-          The quality of the presentations and panels was somewhat uneven

-          Talks were not very technical

-          Amazon is the clear leader in mindshare; MS isn’t even on the board

-          Lots of speculation about how software-as-a-service, platform-as-a-service, everything-as-a-service is going to play out: who the users will be, how to make money, whether there will be cloud computing standards etc

 

5 min Nick Carr video [author of “The Big Switch”]

-          Drew symbolic link between BillG retiring and Structure ’08, the first “cloud computing” conference, being the same week, marking the shift of computing from the desktop to the datacenter

-          Generic pontificating on the coming “age of cloud computing”

 

“The Platform Revolution: a look into disruptive technologies”, Jonathan Yarmis, research analyst

-          Enterprises always lag behind consumers in adoption of new technology, and IT is powerless to stop users from adopting new technology

-          4 big tech trends: social networks, mobility, cloud computing, alternative business models [eg ad-supported]

-          Tech trends mutually reinforcing: mobility leads to more social networking applications, being able to access data/apps in the cloud leads to more mobility

-          Mobile is platform for next-gen and emerging markets: 1.4 billion devices per year, 20% device growth per year, average device lifetime 21 months; opens up market to new users and new uses

-          Claim: “single converged device will never exist”; cloud computing enables independence of device and location

-          Stream computing: rate of data creation is growing at 50-500% per year, and it’s becoming increasingly important to be able to quickly process the data, determine what’s interesting and discard the rest

-          “Economic value of peer relationships hasn’t been realized yet” – Facebook Beacon was a good idea, but poorly realized

 

“Virtualization and Cloud Computing”, with Mendel Rosenblum, VMWare founder 

-          Virtualization can/should be used to decouple software from hardware even in the datacenter

-          Virtualization is cloud-computing enabler: can decide whether to run your own machines, or use somebody else’s, without having to rewrite everything

-          Coming “age of multicore” makes virtualization even more important/useful

-          Smart software that figures out how to distribute VMs over physical hardware isn’t a commodity yet

-          VMWare is working on merging the various virtualization layers: machine, storage, networks [eg VLANs]

-          HW support for virtualization is mostly being done for server-class machines [?]

-          Rosenblum doesn’t think moving workloads from the datacenter to edge machines, to take advantage of spare cycles, will ever take off – it’s just too much of a pain to try to harness those spare cycles

-          Single-machine hypervisor is becoming commodity, so VMWare is moving to managing the whole datacenter, to stay ahead of the competition

 

Keynote, Werner Vogels, Amazon CTO:

-          Mostly a pitch for Amazon’s web services: EC2, S3, SQS, SimpleDB

-          Gave example of Animoto, company that merges music + photos to create a video, which has no servers whatsoever: had 25K users, launched a Facebook app, and went from 25K users total to adding 25K users/hour; were able to handle it by moving from 50 EC2 instances to 3000 EC2 instances in 2 days

-          Currently 370K registered AWS developers

-          Bandwidth consumed by AWS is bigger than bandwidth consumed by Amazon e-commerce services

-          Shift to service-oriented architecture occurred as result of being approached by Target in 2001/2002, asking whether Amazon could run their e-commerce for them. Realized that their current architecture wouldn’t scale/work, so they re-engineered it

-          Single Amazon page can depend on hundreds of other services

-          Big barrier between developing a web app and operating it at scale: load balancing, hardware mgmt, routing, storage management, etc. Called this the “undifferentiated heavy lifting” that needs to be done to even get in the game

-          Claim: typical company spends 70% effort/money on “undifferentiated heavy lifting” and 30% on differentiated value creation; AWS is intended to allow companies to focus much more on differentiated value creation

-          SmugMug has been at forefront of companies relying on AWS; currently store 600TB of photos in S3, and have launched an entirely new product, SmugVault, based purely on the existence of S3 => AWS not just replacement for existing infrastructure, but enabling new businesses

-          In 2 years, cloud computing will be evaluated along 5 axes: security, availability, scalability, performance, cost-effectiveness

-          Really plugged the pay-as-you-go model

 

“Working the Cloud: next-gen infrastructure for new entrepreneurs” panel

-          Q: is lock-in going to be a problem ie how easy will it be to move an app from one cloud computing platform to another ?

o   A: Strong desire for standards that will make it easy to port apps, but not there yet

o   A: To really use the cloud, you need to embed assumptions about it in your code; even bare-metal clouds require intelligence, like scripts to spin up new EC2 instances, so lock-in is a real concern

o   Side thread: Google person on panel claimed that using Google App Engine didn’t lock in developers, because the GAE APIs are well-documented, and he was promptly verbally mugged by just about everyone else on the panel, pointing out things like the use of BigTable in GAE making it difficult to extract data, or replace the underlying storage layer etc.

o   Prediction: there will be convergence to standards, and choice will come down to whether to use a generic cloud, or a more specialized/efficient cloud, eg one targeted at the medical information sector, with features for HIPAA compliance

-          Need new licensing models for cloud computing, to deal with the dynamic increase/decrease in number of application instances/virtual machines as load changes

-          Tidbit: Google has geographically distributed data centers, and geo-replicates [some] data

-          Q: will we be able to use our old “toys” [APIs, programming models etc] in the cloud ?

o   A: Yes, have to be able to, otherwise people won’t adopt it

o   A:  Yes, just have to be smart about replacing the plumbing underneath the various APIs

o   A: Yes, but current API frameworks are lacking some semantics that become important in cloud computing, like ways to specify how many times an object should be replicated, whether it’s ok to lazily replicate some data etc
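
To make the missing semantics concrete, here is a minimal, hypothetical sketch of a storage API extended with the replication hints the panel was asking for. The class, method, and parameter names are all invented for illustration; no panelist proposed this particular API.

```python
from dataclasses import dataclass

@dataclass
class ReplicationPolicy:
    """Hints the panel wished cloud storage APIs exposed; the defaults are illustrative."""
    copies: int = 3                  # how many replicas to keep
    lazy: bool = False               # True: ack after one durable copy, replicate the rest in the background
    cross_datacenter: bool = False   # geo-replicate for disaster tolerance

class CloudStore:
    """Hypothetical key/value front-end that accepts replication hints on each write."""

    def __init__(self, backend):
        self.backend = backend       # assumed to expose write() and replicate_async()

    def put(self, key, value, policy=None):
        policy = policy or ReplicationPolicy()
        # Write synchronously only as many copies as the caller requires up front.
        sync_copies = 1 if policy.lazy else policy.copies
        self.backend.write(key, value, copies=sync_copies,
                           cross_dc=policy.cross_datacenter)
        # Queue any remaining copies for background (lazy) replication.
        if policy.copies > sync_copies:
            self.backend.replicate_async(key, copies=policy.copies - sync_copies)

# Usage sketch: user preferences tolerate lazy replication; billing records do not.
# store.put("prefs:123", prefs, ReplicationPolicy(copies=2, lazy=True))
# store.put("invoice:987", invoice, ReplicationPolicy(copies=3, cross_datacenter=True))
```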

 

Mini-note, “Optical networking”, Drew Perkins

-          Video is, by far, the largest consumer of bandwidth on the internet

-          Cost of content is disproportionate to size: 4MB song costs $1, 200MB TV show episode costs $2, 1.5GB movie costs $3-4.

-          Photonic integrated circuits that can be used to build 100GB/s links are needed to meet future bandwidth requirements: they use less power and require fewer network devices

 

“Race to the next database” panel

-          Quite poorly organized: panelists each got to give an [uninformative] infomercial for their company, and there was very little time for actual questions and discussion

-          Aster Data Systems is back-end data warehouse and analytics system for MySpace: 1 billion impressions/day, 1 TB of new data per day, new data is loaded into 100-node Aster cluster every hour and needs to be available for ad analytics engine to decide which ads to show

-          SQLStream is company that has built a data stream processing product that collapses the usual processing stages [data staging, cleaning, loading etc] into a pipeline that continuously produces results for “standing” queries; useful for real-time analytics

-          Web causes disruption to traditional DB model because of [10x] larger data volumes, need for high interactivity/turn-around, need to scale out instead of up. For example, GreenPlum is building a 20PB data warehouse for one customer.

-          Can’t rely on all the data being in a single store, so need to be able to do federated/distributed queries

 

“MS Datacenters”

-          Presentation centered on MS plans for datacenters-in-a-box

-          Datacenters-in-a-container are long-term strategy for MS, not just transient response to high demand and lack of space

-          Container blocks have lower reliability than traditional datacenters, so applications need to be geo-distributed and redundant to handle downtime

 

“End of boxed software”, Parker Harris, co-founder of Salesforce.com

-          Origins of salesforce.com: Modeled on consumer internet sites - amazon.com; ebay.com

-          Transition from client->server site to a platform (force.com): first instinct is to build a platform, but then you lose touch with why you're building it. As they started building their experience, they abstracted away components and started realizing it could become a platform. Revenue comes from site, platform is a bonus.

-          Initially scaled by buying bigger [Sun] boxes ie scaled up, not out, and ran into lots of complexity. Unclear whether that’s still the case or whether they’ve re-architected.

 

“Scaling to satiate demand” panel

-          Q: “When did you first realize your architecture was broken, and couldn’t scale ?”

o   A: When the site started to get slow; eBay: after massive site outages

-          Q: “How do you handle running code supplied by other people on your servers ?”

o   A: Compartmentalize ie isolate apps; have mgmt infrastructure and tooling to be able to monitor and control uploaded apps; provide developers with fixed APIs and tools so you can control what they do

-          Q: “How do Facebook and Slide [builds Facebook apps] figure out where the problems are if Slide starts failing ?”

o   A: Lots of real-time metrics; ops folks from both companies are in IM contact and do co-operative troubleshooting

-          Q: “How should you handle PR around outages ?”

o   A: Be transparent; communicate; set realistic timelines for when site will be back up; set expectations wrt “bakedness” of features

-          Beware of retrying failed operations too soon, since retries may cause an overloaded system to never be able to come back up (see the backoff sketch after these panel notes)

-          eBay: each app is instrumented with the same logging infrastructure, and there’s a real-time OLAP engine that analyzes the logs and does correlation to try to find trouble spots

-          Facebook and Meebo both utilized their user base to translate their sites into multiple languages

-          Need to know which bits of the system you can turn off if you run into trouble

-          Biggest challenge is scaling features that involve many-many links between users; it’s easy to scale a single user + data

-          Keep monitoring: there are always scale issues to find, even without problems/outages

-          Slide: “Firefox 3 broke all of our apps”

-          Facebook has > 10K servers
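
The retry caution above deserves a concrete illustration. Below is a minimal sketch of retry with exponential backoff and jitter, which keeps a crowd of clients from hammering an already-overloaded service back down just as it tries to recover; the function and parameter choices are mine, not something a panelist presented.

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a failing operation with exponential backoff plus jitter.

    Tight, immediate retries from thousands of clients can keep an overloaded
    system from ever coming back; spreading retries out gives it room to recover.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff: 0.5s, 1s, 2s, 4s, ... capped at max_delay.
            delay = min(max_delay, base_delay * (2 ** attempt))
            # Full jitter so clients don't all retry in lockstep.
            time.sleep(random.uniform(0, delay))
```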

 

Mini-note, “Creating fair bandwidth usage on the Internet”, Dr. Lawrence Roberts, leader of the original ARPANET team

-          P2P leads to unfair usage: people not using P2P get less; 5% of users (P2P users) receive 80% capacity

-          Deep packet inspection catches 75% of p2p traffic, but isn’t effective in creating fairness

-          Anagran has flow behavior mgmt: observe per user flow behavior & utilization and then equalize. Equalization is done in memory, on networking infrastructure [routers etc] and at the user level instead of the flow level
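
A small back-of-envelope sketch of the fairness argument above: with per-flow fair sharing, a P2P user who opens many flows gets a proportionally larger slice of the link, while per-user equalization gives every user the same share regardless of flow count. The link capacity and flow counts below are made-up illustrative numbers, not figures from the talk.

```python
LINK_CAPACITY_MBPS = 100.0

# Hypothetical mix: 5 P2P users with 40 flows each, 95 web users with 2 flows each.
users = {"p2p": {"count": 5, "flows_each": 40},
         "web": {"count": 95, "flows_each": 2}}

# Per-flow fairness: every flow gets an equal slice, so many-flow users win.
total_flows = sum(u["count"] * u["flows_each"] for u in users.values())
per_flow_share = LINK_CAPACITY_MBPS / total_flows
for name, u in users.items():
    print(f"per-flow fairness, {name} user: {per_flow_share * u['flows_each']:.2f} Mbps")

# Per-user equalization (the approach described above): equal share per user.
total_users = sum(u["count"] for u in users.values())
print(f"per-user equalization, any user: {LINK_CAPACITY_MBPS / total_users:.2f} Mbps")
```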

 

Mini-note, “Cloud computing and the mid-market”, Zach Nelson, CEO of NetSuite

-          Mid-market is the last great business applications opportunity

-          Cloud computing makes it economical to reach the “Fortune 5 million”

-          Cloud computing still doesn’t solve problem of application integration

-          Consulting services industry is next to be transformed by cloud computing

 

Mini-note, “Electricity use in datacenters”, Dr. Jonathan Koomey

-          Site Uptime Network, an organization of data center operators and designers, did a study of 19 datacenters from 1999-2006:

o   Floor area remained relatively constant

o   Power density went from 23 W/sq ft to 35 W/sq ft

-          In 2000, datacenters used 0.5% of world’s electricity; in 2005, used 1%.

-          Cooling and power distribution are largest electricity consumers; servers are second-largest; storage and networking equipment accounts for a small fraction

-          Asia-Pacific region’s use of power is increasing the fastest, over 25% growth per year

-          Lots of inefficiencies in facility design: wrong cost metrics [sq feet versus kW], different budgets and costs borne by different orgs [facilities vs IT], multiple safety factors piled on top of each other

-          Designed Eco-Rack, which, with only a few months of work, reduces power consumption on normalized workload by 16-18%

-          Forecast: datacenter electricity consumption will grow by 76% by 2010, maybe a bit less with virtualization

 

“VC investment in cloud computing infrastructure” panel

-          Overall thesis of panel was that VCs are not investing in infrastructure

-          VCs disagreed with panel theme, and said that it depended on the definition of infrastructure; said they are investing in infrastructure, but it’s moving higher in the stack, like Heroku [?]

-          HW infrastructure requires serious investment, large teams, and long time-frame – not a good fit with VC investment model

-          Any companies that want to build datacenters or commodity storage and compute services are not  a good investment – there are established, large competitors, and it’s very expensive to compete in that space

-          Infrastructure needed for really large scale [like a 400 Gbit/sec switch] has a pretty small market, which makes it hard to justify the investment. If there’s a small market, the buyers all know they’re the only buyers and exert large downward pressure on price, which makes it hard for company to stay in business

-          Quote: “any company that’s doing something worthwhile, and building something defensible, will take at least 24 months to develop”

 

James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 Wednesday, June 25, 2008

John Breslin did an excellent job of writing up Kai-Fu Lee’s Keynote at WWW2008.  John’s post: Dr. Kai-Fu Lee (Google) – “Cloud Computing”. 

 

There are 235m internet users in China and Kai-Fu believes they want:

1.       Accessibility

2.       Support for sharing

3.       Access data from wherever they are

4.       Simplicity

5.       Security

 

He argues that Cloud Computing is the best answer for these requirements.  He defined the key components of what he is referring to as the cloud to be: 1) data stored centrally without need for the user to understand where it actually is, 2) software and services also delivered from the central location and delivered via browser, 3) built on open standards and protocols (Linux, AJAX, LAMP, etc.) to avoid control by one company, and 4) accessible from any device especially cell phones.  I don’t follow how the use of Linux in the cloud will improve or in any way change the degree of openness and the ease with which a user could move to a different provider.  The technology base used in the cloud is mostly irrelevant. I agree that open and standard protocols are both helpful and a good thing.

 

Kai-Fu then argues that what he has defined as the cloud has been technically possible for decades but three main factors make it practical today:

1.       Falling cost of storage

2.       Ubiquitous broadband

3.       Good development tools available cost effectively to all

 

He enumerated six properties that make this area exciting: 1) user centric, 2) task centric, 3) powerful, 4) accessible, 5) intelligent, and 6) programmable.  He went through each in detail (see Dan’s posting).  In my read, I focused on the data provided on GFS and BigTable scaling, hardware selection, and failure rates that were sprinkled throughout the remainder of the talk:

·         Scale-out: he argues that when comparing a $42,000 high-end server to the same amount spent on $2,500 servers, the commodity scale-out solution is 33x more efficient.  That seems like a reasonable number but I would be amazed if Google spent anywhere near $2,500 for a server.  I’m betting on $700 to perhaps as low as $500. See Jeff Dean on Google Hardware Infrastructure for a picture of what Jeff Dean reported to be the current Google internally designed server.

·         Failure management. Kai-Fu stated that a farm of 20,000 servers will have 110 failures per day.  This is a super interesting data point from Google in that failure rates are almost never published by major players. However, 110 per day on a population of 20k servers is ½% a day, which seems impossibly high.  That says, on average, the entire farm is turned over in 181 days.  No servers are anywhere close to that unreliable, so this failure data must cover all types of failures, whether software or hardware. When including all types of issues, the ½% number is perfectly credible.  Assuming their current server population is roughly one million, they are managing 5,500 failures per day requiring some form of intervention (see the back-of-envelope sketch below).  It’s pretty clear why auto-management systems are needed at anything even hinting at this scale. It would be super interesting to understand how many of these are recoverable software errors, recoverable hardware errors (memory faults etc.), and unrecoverable hardware errors requiring service or replacement.

·         He reports there are “around 200 Google File System (GFS) clusters in operation. Some have over 5 PB of disk space over 200 machines.”   That ratio is about 10TB per machine.  Assuming they are buying 750GB disks, that’s just over 13 disks.  I’ve argued in the past that a good service design point is to build everything on two hardware SKUs: 1) data light, and 2) data heavy.  Web servers and mid-tier boxes run the former and data stores run the latter.  One design I like uses the same server board for both SKUs with 12 SATA disks in SAS attached disk modules.  Data light is just the server board.  Data heavy is the server board coupled with 1 or optionally more disk modules to get 12, 24, or even 36 disks for each server. Cheap cold storage needs high disk to server ratios.

·         “The largest BigTable cells are 700TBs over 2,000 servers.”  I’m surprised to see two thousand reported by Kai-Fu as the largest BigTable cell – in the past I’ve seen references to over 5k. Let’s look at the storage to server ratio since he offered both.  700TB of storage spread over 2k servers is only 350 GB per node. Given that they are using SATA disks, that would be only a single disk and a fairly small one at that.  That seems VERY light on storage. BigTable is a semi-structured storage layer over GFS.  I can’t imagine a GFS cluster with only 1 disk/server so I suspect the 2,000 node BigTable cluster that Kai-Fu described didn’t include the GFS cluster that it’s running over.  That helps but the numbers are still somewhat hard to make work.  These data don’t line up well with what’s been published in the past nor do they appear to be the most economic configurations.
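
A quick back-of-envelope script for the failure and storage ratios worked through above. The 110-failures-per-day and 700TB/2,000-server figures are from the talk; the one-million-server fleet size is my assumption from the text, not a published Google number.

```python
# Failure rate: 110 failures/day on a 20,000-server farm.
daily_rate = 110 / 20_000
print(f"daily failure rate: {daily_rate:.2%}")                        # ~0.55% per day
print(f"days to churn the entire farm once: {1 / daily_rate:.1f}")    # ~181.8 days

# Scaled to an assumed fleet of one million servers.
print(f"interventions per day at 1M servers: {1_000_000 * daily_rate:,.0f}")  # 5,500

# BigTable cell ratio: 700 TB spread over 2,000 servers.
print(f"storage per BigTable node: {700_000 / 2_000:.0f} GB")         # 350 GB/node
```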

 

Thanks to Zac Leow for sending this pointer my way.

 

                                --jrh

 

James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 Tuesday, June 24, 2008

Earlier today Nokia announced it will acquire the remaining 52% share of Symbian Limited to take over controlling interest of the mobile operating system provider with 91% of the outstanding shares.  This alone is interesting but what is fascinating is they also announced their intention to open source Symbian to create “the most attractive platform for mobile innovation and drive the development of new and compelling web-enabled applications”.  The press release reports the acquisition will be completed at 3.647 EUR/share at a total cost of 264m EUR. The new open source project responsible for the Symbian operating systems will be managed by the newly set up Symbian Foundation, with support announced by Nokia, AT&T, Broadcom, Digia, NTT DoCoMo, EA Mobile, Freescale, Fujitsu, LG, Motorola, Orange, Plusmo, Samsung, Sony Ericsson, ST, Symbian, Teleca, Texas Instruments, T-Mobile, Vodafone, and Wipro.

 

Other articles on the acquisition:

·         http://www.techcraver.com/2008/06/23/huge-news-nokia-acquires-symbian/

·         http://www.readwriteweb.com/archives/nokia_acquires_symbian.php

 

This substantially changes the mobile operating system world with all major offerings other than Windows Mobile, iPhone, and RIM now available in open source form.  The timing of this acquisition strongly suggests that it’s a response to a perceived threat from Google Android ensuring that, even if Android never gets substantial market traction, it’s already had a lasting impact on the market.

 

                                                                --jrh

 

Sent my way by Eric Schoonover

Update: Added iPhone to proprietary mobile O/S list (thanks Dare Obasanjo).

 

James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 Wednesday, June 18, 2008

Lars Bak leads the Google Aarhus, Denmark lab. He’s one of the original developers of the Sun HotSpot Java VM, the Self Programming Language, and the Sun Connected Limited Device Configuration VM for mobile phones.  He’s scheduled to do a talk at JAOO Aarhus, Denmark (Sept. 30, 2008).  Unconfirmed rumors report he will be announcing a “Google Secret Project” during his JAOO keynote.

 

It’s hard to know for sure what is coming but the popular speculation is that Google will be announcing a dynamic language runtime with support for Python, JavaScript, and Java. A language runtime running on both server-side and client-side with support for a broad range of client devices including mobile phones would be pretty interesting.

 

                                --jrh

 

John Lam pointed me to: https://secure.trifork.com/speaker/Lars+Bak.

 

James Hamilton, Data Center Futures Team
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 Saturday, June 14, 2008

Earlier today Google hosted the second Seattle Conference on Scalability. The talk on Chapel was a good description of a parallel language for high performance computing being implemented at Cray.  The GIGA+ talk described a highly scalable filesystem metadata system implemented in Garth Gibson’s lab at CMU. The Google presentation described how they implemented maps.google.com on various mobile devices. It was full of gems on managing device proliferation and scaling the user interface down to small screen sizes. 

 

The Wikipedia talk showed an interesting transactional DHT implementation using Erlang.  And, the last talk of the day was a well presented talk by Vijay Menon on transactional memory. My rough notes from the 1 day conference follow.

 

                                    --jrh

 

 

Kick-off by Brian Bershad, Director of Google Seattle Lab

 

Communicating like Nemo: Scalability from a fish-eye View

·         Speaker: Jennifer Wong (Masters Student from University of Victoria)

o   Research area: fault tolerance in mobile collaborative systems

·         Research aimed at bringing easy, affordable communications, beyond just two people, to SCUBA divers.

·         Fairly large market with requirement to communicate

o   Note that PADI has 130k members (there are many other recreational diving groups and, of course, there are commercial groups as well)

·         Proposal: use acoustic for underwater to surface stations. Wireless between surface stations doing relay.

·         Acoustic unit to be integrated into dive computer.

 

Maidsafe: Exploring Infinity

·         Speaker: David Irvine, Maidsafe.net Ltd.

o   http://www.maidsafe.net/ 

·         Problems with existing client systems

o   Data gets lost

o   Encryption is difficult

o   Remote access is difficult

o   Syncing between devices hard

·         Proposal: chunk files, hash the chunks, xor and then encrypt the chunks and distribute over a DHT over many systems

o   It wasn’t sufficiently clear where the “random data” that was xored in came from or where it was stored.

o   It wasn’t sufficiently clear where the encryption key that was used in the AES encryption came from or was stored. 

·         Minimum server count for a reliable cloud is 2,500.

·         It is a quid pro quo network. You need to store data for others to be able to store data yourself.

·         Lots of technology in play. Uses SHA, XORing, AES, DHTs, PKI (without central authority) and something the speaker referred to as self encryption. The talk was given in whiteboard format, which didn’t get to the point where I could fully understand enough of the details.  There were several questions from the audience on the security of the system.  Generally an interesting system that distributes data over a large number of non-cooperating client systems.

·         Seems similar in design goals to Oceanstore and Farsite.  Available for download at http://www.maidsafe.net/. 

UPDATE: The speaker, David Irvine, sent me a paper providing more detail on Maidsafe: 0316_maidsafe_Rev_004(Maidsafe_Scalability_Response)-Letter1 (2).pdf (72.33 KB).

 

Chapel: Productive Parallel Programming at Scale

·         Speaker: Bradford Chamberlain, Cray Inc.

·         Three main limits to HPC scalability:

o   Single Program, Multiple Data (SPMD) programming model (vector machines)

o   Exposure of low-level implementation details

o   Lack of programmability

·         Chapel Themes:

o   General parallel programming

o   Reduce gap between mainstream and parallel languages

o   Global-view abstractions vs fragmented

§  Fragmented model is thinking through the partitioned or fragmented solution over many processors (more complex than global-view)

§  MPI programmers program a fragmented view

o   Control of locality

o   Multi-resolution design

·         Chapel is work done at Cray further developing the ZPL work that Brad did at the University of Washington.

·         Approaches to HPC parallelism:

o   Communication Libraries

§  MPI, MPI-2, SHMEM, ARMCI, GASNet

o   Shared Memory Models:

§  OpenMP, Pthreads

o   PGAS Languages (Partitioned Global Address Space)

·         Chapel is a block-structured, imperative programming language.  Selects the best features from ZPL, HPF, LCU, Java, C#, C++, C, Modula, Ada,…

·         Overall design is to allow efficient code to be written at a high level but still allow low level control of data and computation placement

 

CARMEN: A Scalable Science Cloud

·         Speaker: Paul Watson, CSC Dept., Newcastle University, UK

·         Essentially a generalized, service-based e-Science platform for bio-informatics

o   Vertical services implemented over general cloud-storage and computation

o   Currently neuroscience focused but also getting used by chemistry and a few other domains

·         Services include workflow, data, security, metadata, and service (code) repository

·         e-Science Requirements Summary:

o   share both data and codes

o   Capacity: 100’s TB

o   Cloud architecture (facilitates sharing and economies of scale)

·         Many code deployment techniques:

o   .war files, .net, JAR, VMWare virtual machines, etc.

·         Summary: the system supports upload of data, metadata describing the provenance of the data (data type specific), code in many forms, and security policy.  Work requests are received, the appropriate code is shipped from the code repository to the data (if not already there), and the job is scheduled and run, returning results.  The execution graph is described as a workflow.  Workflows are constructed using a graphical UI in a browser. Currently running on a proprietary resource cloud, but using a commercial cloud such as AWS is becoming interesting to them (the pay-as-you-go model is very interesting).

o   To use AWS or similar service, they need to 1) have the data there, and 2) someone has to maintain the vertical service.

o   Considering http://www.inkspotscience.com/ , a company focused upon “on-demand e-science”. 

 

GIGA+: Scalable Directories for Shared File Systems

·         Speaker: Swapnil Patil, CSC Dept., Carnegie Mellon University

·         Work done with Garth Gibson

·         Large filesystems don’t scale the metadata service well.

·         The goal of this work is to scale to millions and even trillions of objects per directory

·         Supports UNIX VFS interface

·         As clusters scale, we need to scale filesystems and as filesystems grow, we need to scale them to millions of objects per directory

·         Unsynchronized, parallel growth without central coordination

·         Current state of the art:

·         Hash-tables: Linux Ext2/Ext3

·         B-trees: XFS

·         The issue with hash tables is that they don’t grow incrementally. 

·         Can use extensible hashing (Fagin 79) to solve this problem

·         Lustre (Sun) uses a single central metadata server (limits scalability)

·         PVFS2 distribute the metadata over many servers using directory partitioning.  Helps distribute the load but a very large or very active directory is still on one metadata server.

·         IBM GPFS implements distributed locking (scalability problems) and uses shared storage as the communications mechanism (if node 1 has page X needed by node 2, it has to write it to disk and node 2 has to read it back, i.e. ping-ponging).

·         GPFS does well with lookup intensive workloads but doesn’t do well with high metadata update scenarios

·         What’s new in GIGA+:

·         Eliminate serialization

·         No synchronization or consistency bottlenecks

·         Weak consistency

·         Clients work with potentially stale data but it mostly works. On failure, the updated metadata is returned to the client.  Good optimistic approach.

·         When a hash bucket is split, the client will briefly be operating on stale data but, on failure, the correct location is returned, updating the client cache (see the sketch after this list).  Could use more aggressive techniques to update caches (e.g. gossip) but this is a future optimization.

·         Partition presence or absence is indicated by a bit-map on each server.

·         Very compact

·         Unlike extensible hashing, only those partitions that need to be split are split (supports non-symmetric growth).

·         Appears to have similarities to consistent hashing.
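
A minimal Python sketch of the optimistic lookup described above: the client works from a possibly stale partition map, and when it addresses the wrong server the reply carries the up-to-date mapping instead of an error. The hashing, map format, and class structure are simplified illustrations of the idea, not the actual GIGA+ implementation.

```python
import hashlib

def bucket_of(name, depth):
    """Map a file name to a bucket using the low `depth` bits of its hash."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) & ((1 << depth) - 1)

class MetadataServer:
    """Owns some directory buckets and knows the authoritative bucket placement."""
    def __init__(self, sid, cluster_state):
        self.sid = sid
        self.cluster = cluster_state     # shared: {"depth": d, "placement": {bucket: server id}}
        self.entries = {}                # metadata entries this server actually holds

    def lookup(self, name):
        depth, placement = self.cluster["depth"], self.cluster["placement"]
        bucket = bucket_of(name, depth)
        if placement[bucket] != self.sid:
            # Client used a stale map; reply with the current mapping instead of failing.
            return None, (depth, dict(placement))
        return self.entries.get(name), None

class Client:
    """Looks up names optimistically using a cached, possibly stale, partition map."""
    def __init__(self, servers):
        self.servers = servers
        self.depth, self.placement = 0, {0: 0}   # stale initial view: one bucket on server 0

    def lookup(self, name):
        bucket = bucket_of(name, self.depth)
        value, correction = self.servers[self.placement.get(bucket, 0)].lookup(name)
        if correction is not None:
            # Adopt the fresh mapping and retry; the retry converges because it is authoritative.
            self.depth, self.placement = correction
            return self.lookup(name)
        return value
```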

 

Scaling Google Maps from the Big Screen Down to Mobile Phones

·         Speaker: Jerry Morrison, Google Inc.

·         Maps on the go.

·         Three main topics covered:

·         Scaling the UX

·         Coping with the mobile network

·         Note that in many countries the majority of network access is through mobile devices rather than PCs

·         Key problem of supporting mobile devices well is mostly about scaling the user experience (240x320 with very little keyboard support)

·         Basic approach is to replace the AJAX application with a device optimized client/server application.

·         Coping with the mobile network:

·         Low bandwidth and high latency (Peter Denning notes that bandwidth increases at the square of the latency improvement)

·         4.25 seconds average latency for an HTTP request

·         The initial connection takes additional seconds

·         Note: RIM handles all world-wide traffic through an encrypted VPN back to Waterloo, Canada.

·         Google traffic requests

·         Load balancer

·         GMM which pulls concurrently from:

1.       Map search, Local search, & Directions

2.       Maps & sat tiles

3.       Highway traffic

4.       Location

·         Tiles are PNG at 64x64 pixels.  Larger tiles compress more but send more than is needed; smaller tiles can be returned quicker, but you need more of them to fill the screen.

·         They believe that 22x22 is optimal for 130x180 screen (they use 64x64 looking to the future)

·         Tiles need mobile adaption:

·         More repetition of street names to get at least one on a screen

·         Want more compression so use fewer colors and some other image simplifications

·         JPEG has 600 bytes of header

·         Showed a picture of the Google Mobile Testing Lab

·         100s of phones

·         Grows 10% a quarter

·         Three code bases:

·         Java, C++, ObjectiveC (iPhone)

·         Language translation: fetch language translation dynamically as needed (can’t download 20+ on each request)

·         Interesting challenges from local jurisdictions:

·         Can’t export lat & long from China

·         Different representations of disputed borders

 

Scalable Wikipedia with Erlang

·         Speaker: Thorsten Schuett, Zuse Institute Berlin

·         Wikipedia is the #7 website (actually #6, in that MSN and Windows Live were shown as independent)

·         50,000 requests per second

·         Standard scaling story:

·         Started with a single server

·         Then split DB from mid-tier

·         Then partitioned DB

·         Then partitioned DB again into clusters (with directory to find which one)

·         Then put memcache in front of it

·         Their approach: use P2P overlay in the data center

·         Start with chord# (DHT) which supports insert, delete, and update

·         Add transactions, load balancing, ….

·         Transactions:

·         Implemented by electing a subset of nodes as transaction managers, with Paxos protection

·         Quorum of >N/2 nodes (see the sketch after these notes)

·         Supports Load Balancing

·         And supports multi-data center policy based geo-distribution for locality (low latency) and robustness (remote copies).

·         They have implemented distributed Erlang

·         Security problems

·         Scalability problems

·         Ended up having to implement their own transport layer

·         Summary:

·         DHT + Transactions = scalable, reliable, efficient key/value store

·         Overall system is message based and fail-fast and that made Erlang a good choice.
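
A tiny sketch of the majority-quorum idea (>N/2 nodes) referenced in the notes above. The replica interface (prepare/commit) is invented for illustration; the real system layers this on chord# with Paxos-elected transaction managers.

```python
def quorum_write(replicas, key, value):
    """Attempt a write on every replica; commit only if a majority (>N/2) acknowledge."""
    acks = 0
    for replica in replicas:
        try:
            replica.prepare(key, value)   # hypothetical per-replica prepare call
            acks += 1
        except ConnectionError:
            pass                          # a minority of failures is tolerated
    if acks > len(replicas) // 2:
        for replica in replicas:
            try:
                replica.commit(key)       # hypothetical commit call
            except ConnectionError:
                pass                      # stragglers catch up later (e.g. via repair)
        return True
    return False                          # no quorum: abort, the value never becomes visible
```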

 

Scalable Multiprocessor Programming via Transactional Programming

·         Speaker: Vijay Menon, Google Inc.

·         Scalability no longer just about large scale distributed systems.  Modern processors are multi-core and trending towards many-core

·         Scalability is no longer restricted to being a server problem; it applies to clients and even some mobile devices today.

·         Conventional programming model is threads and locks.

·         Transactional memory: replaces locked regions with transactions

·         Optimistic concurrency control

·         Declarative safety in the language

·         Transactional Memory examples:

·         Azul: Large scale Java server with more than 500 cores

·         Transactional memory used in implementation but not exposed

·         Sun Rock (expected 2009)

·         Conflict resolution:

·         Eager: Assume potential conflict and prevent

·         Lazy: assume no conflict (record state and validate)

·         Most hardware TMs are lazy.  S/W implementations vary (a simple software sketch follows these notes).

·         Example techniques:

·         Write in place in memory with old values written to a log to support rollback

·         Buffer changes and only persist on success

·         Showed some bugs in transaction management and how programming language support for transactions could help.

·         Implementation challenges

·         Large transactions expensive

·         Implementation overhead

·         Semantic challenges, including allowing I/O and other operations that can’t be rolled back
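
To make the lazy (record-then-validate) flavor of transactional memory concrete, here is a minimal single-threaded Python sketch: reads and writes are buffered, the read set is validated at commit, and a conflict forces a retry. This is my illustration of the general technique, not anything Vijay presented, and a real multi-threaded version would also have to make the commit step itself atomic.

```python
class Conflict(Exception):
    pass

class Transaction:
    """Lazy/optimistic STM sketch over a shared dict: store maps key -> (version, value)."""
    def __init__(self, store):
        self.store = store
        self.read_set = {}     # key -> version observed when read
        self.write_set = {}    # key -> buffered new value (not yet visible to others)

    def read(self, key):
        if key in self.write_set:                  # read-your-own-writes
            return self.write_set[key]
        version, value = self.store[key]
        self.read_set[key] = version
        return value

    def write(self, key, value):
        self.write_set[key] = value                # buffered until commit

    def commit(self):
        # Validate: everything we read must still be at the version we saw.
        for key, version in self.read_set.items():
            if self.store[key][0] != version:
                raise Conflict(key)
        # Publish buffered writes, bumping versions so later readers detect the change.
        for key, value in self.write_set.items():
            old_version = self.store.get(key, (0, None))[0]
            self.store[key] = (old_version + 1, value)

def atomically(store, body, retries=10):
    """Run body(txn) transactionally, retrying on conflict (optimistic concurrency)."""
    for _ in range(retries):
        txn = Transaction(store)
        result = body(txn)
        try:
            txn.commit()
            return result
        except Conflict:
            continue
    raise RuntimeError("too much contention")
```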

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 Wednesday, June 11, 2008

Jeff Dean did a great talk at Google IO this year. Some key points from Steve Garrity (msft pm) and some notes from the excellent write-up at Google spotlights data center inner workings:

·         Many unreliable servers rather than fewer high-cost servers

·         A single search query touches 700 to 1,000 machines in < 0.25 sec

·         36 data centers containing > 800K servers

o   40 servers/rack

·         Typical H/W failures: Install 1000 machines and in 1 year you’ll see: 1000+ HD failures, 20 mini switch failures, 5 full switch failures, 1 PDU failure

·         There are more than 200 Google File System clusters

·         The largest BigTable instance manages about 6 petabytes of data spread across thousands of machines

·         MapReduce is increasingly used within Google.

o   29,000 jobs in August 2004 and 2.2 million in September 2007

o   Average time to complete a job has dropped from 634 seconds to 395 seconds

o   Output of MapReduce tasks has risen from 193 terabytes to 14,018 terabytes

·         Typical day will run about 100,000 MapReduce jobs

o   each occupies about 400 servers

o   takes about 5 to 10 minutes to finish
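
A quick back-of-envelope check on the MapReduce numbers above, taking the midpoint of the quoted 5 to 10 minutes per job:

```python
jobs_per_day = 100_000
servers_per_job = 400
minutes_per_job = 7.5            # midpoint of the quoted 5-10 minute range

server_minutes_per_day = jobs_per_day * servers_per_job * minutes_per_job
servers_busy_on_average = server_minutes_per_day / (24 * 60)
print(f"{servers_busy_on_average:,.0f} servers busy on MapReduce at any given moment")
# ~208,000 servers -- a sizeable chunk of the >800K-server fleet quoted above.
```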

 

More detail on the typical failures during the first year of a cluster from Jeff:

·         ~0.5 overheating (power down most machines in <5 mins, ~1-2 days to recover)

·         ~1 PDU failure (~500-1000 machines suddenly disappear, ~6 hours to come back)

·         ~1 rack-move (plenty of warning, ~500-1000 machines powered down, ~6 hours)

·         ~1 network rewiring (rolling ~5% of machines down over 2-day span)

·         ~20 rack failures (40-80 machines instantly disappear, 1-6 hours to get back)

·         ~5 racks go wonky (40-80 machines see 50% packetloss)

·         ~8 network maintenances (4 might cause ~30-minute random connectivity losses)

·         ~12 router reloads (takes out DNS and external vips for a couple minutes)

·         ~3 router failures (have to immediately pull traffic for an hour)

·         ~dozens of minor 30-second blips for dns

·         ~1000 individual machine failures

·         ~thousands of hard drive failures

 

A pictorial history of Google hardware through the years starting with the current generation server hardware and working backwards from Jeff’s talk at the 2007 Seattle Scalability Conference:

·         Current Generation Google Servers

·         Google Servers 2001

·         Google Servers 2000

·         Google Servers 1999

·         Google Servers 1997

My general rule on hardware is that, if you have a viewing window into the data center, you are probably spending too much on servers. The Google model of cheap servers with software redundancy is the only economic solution at scale.
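
One way to see why cheap servers plus software redundancy wins: modestly reliable commodity boxes, replicated, beat a single premium server on availability, and at a fraction of the price. The availability figures below are illustrative assumptions, not measured numbers.

```python
cheap_availability = 0.99        # assumption: a commodity server is up 99% of the time
premium_availability = 0.999     # assumption: a premium server is up 99.9% of the time
replicas = 3

# The service stays up as long as at least one replica is up.
replicated_availability = 1 - (1 - cheap_availability) ** replicas
print(f"3 cheap replicas: {replicated_availability:.6f}")   # 0.999999
print(f"1 premium server: {premium_availability:.6f}")      # 0.999000
```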

 

Other notes from Google IO:

·         http://perspectives.mvdirona.com/2008/05/29/RoughNotesFromSelectedSessionsAtGoogleIODay1.aspx

·         http://perspectives.mvdirona.com/2008/05/29/IO2008RoughNotesFromMarissaMayerDay2KeynoteAtGoogleIO.aspx

·         http://perspectives.mvdirona.com/2008/05/30/IO2008RoughNotesFromSelectedSessionsAtGoogleIODay2.aspx

 

All pictures above courtesy of Jeff Dean.

 

                                                --jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

 Sunday, June 08, 2008

I’m interested in high-scale web sites, their architecture and their scaling problems.  Last Thursday, Oren Hurvitz posted a great blog entry summarizing two presentations at Java One on the LinkedIn service architecture.

 

LinkedIn scale is respectable:

·         22M members

·         130M connections

·         2M email messages per day

·         250k invitations per day

 

No big surprises other than LinkedIn is still using a big, expensive commercial RDBMS and a large central Sun SPARC server. LinkedIn calls this central server “The Cloud” and it’s described as the “backend server caching the entire LinkedIn Network”.  Note that “server” is not plural – it appears that the cloud is not partitioned, but it is replicated 40 times. However, each instance of the cloud hosts the entire social graph in 12GB of memory. 
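
A quick sanity check on fitting the whole graph into 12GB, using the membership and connection counts above; the bytes-per-item split is just arithmetic, not LinkedIn's actual data layout.

```python
members = 22_000_000
connections = 130_000_000
memory_bytes = 12 * 2**30                    # 12 GB

print(f"{memory_bytes / (members + connections):.0f} bytes per member-or-connection")  # ~85 bytes
print(f"{memory_bytes / connections:.0f} bytes per connection alone")                  # ~99 bytes
```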

 

Social graphs are one of my favorite services problems in that they are notoriously difficult to effectively partition. I often refer to this as the “hairball problem”.  There is no clean data partition that will support the workload with adequate performance. Typical approaches to this problem redundantly store several copies of the data with different lookup keys and potentially different partitions. For example, most social networks have some notion of a user group.  The membership list is stored with each group.  

 

But a common request is to find all groups for which a specific user is a member. Storing users with the group allows efficient group enumeration but doesn’t support the “find all groups of a given user” query.  Storing the group membership with each user supports this query well but doesn’t allow efficient group membership enumeration. The most common solution is to store the data redundantly both ways. Typically one is the primary copy and the other is updated asynchronously after the primary is updated. This effectively makes reads much more efficient at the cost of more work at update time.  Sometimes the secondary copy isn’t persisted in the backend database and is only stored in an in-memory cache, often implemented using memcached.
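
Here is a minimal sketch of that redundant, two-way indexing: the group-to-members list is the primary copy, and the user-to-groups index is a secondary copy updated asynchronously after the primary write. Class and queue names are illustrative only; in practice the secondary might live only in memcached, as noted above.

```python
from collections import defaultdict
from queue import Queue

class GroupMembership:
    """Store group membership redundantly so both lookup directions are cheap."""
    def __init__(self):
        self.members_by_group = defaultdict(set)   # primary copy, updated synchronously
        self.groups_by_user = defaultdict(set)     # secondary copy, may lag briefly
        self.pending = Queue()                     # stands in for an async update pipeline

    def add_member(self, group_id, user_id):
        self.members_by_group[group_id].add(user_id)   # 1. update the primary copy
        self.pending.put((group_id, user_id))          # 2. queue the secondary-index update

    def apply_pending_updates(self):
        # In production this would be a background worker draining a queue or log.
        while not self.pending.empty():
            group_id, user_id = self.pending.get()
            self.groups_by_user[user_id].add(group_id)

    def members_of(self, group_id):
        return self.members_by_group[group_id]     # always current

    def groups_of(self, user_id):
        return self.groups_by_user[user_id]        # eventually consistent
```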

 

The LinkedIn approach of using a central, in-memory social graph avoids some of the redundancy of the partitioned model I described above at the expense of requiring a single server with enough memory to store the entire un-partitioned social graph.  Clearly this is more efficient by many measures, but requiring that the entire social graph fit into a single server’s memory means that more expensive servers are required as the service grows. And many of us can’t sleep at night with hard scaling limits even if they are apparently fairly large.

 

Other scaling web sites pointers are posted at: http://perspectives.mvdirona.com/2007/11/12/ScalingWebSites.aspx.

 

Thanks to Dare Obasanjo and Kevin Merritt for sending this my way.

 

                                                                --jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

 Friday, June 06, 2008

There was an interesting talk earlier today at Microsoft Research by Jason Cong of the UCLA Computer Science Department on compiling design specifications in C/C++/SystemC and user constraints into ASIC and FPGA designs. The advantages of compiler-based approaches include more productivity from working at a higher level, automated verification, optimization, and rapid experimentation with different frequencies and different optimization goals (performance vs power, for example). As design complexity increases, higher-level languages and optimization have excellent potential.  Essentially the same thing is happening in hardware design as happened 30 years ago in operating systems implementation languages: high-level implementation languages replace lower-level ones as complexity climbs.  For example, the Intel Core 2 Duo processor is a 1B transistor implementation whereas the 386 was 1/1000th of that complexity at 1M.

 

Also super interesting was the example from a financial institution that is taking a software-based stock analysis system, identifying the hottest parts of the system, and compiling these to FPGA implementations: 30x faster at 1/10th the power. Very cool.  

 

Now that AMD supports HyperTransport, it is possible to implement custom processors with excellent overall system performance.  Intel has opened up the FSB and is also expected to offer a non-compatible, HyperTransport-like implementation in the future.

 

My rough notes from the talk follow:

 

·         Speaker: Jason Cong, UCLA Computer Science (cong@cs.ucla.edu)

·         Working on on-chip interconnects & communications

o   3D IC design

o   RF-interconnects

§  Note that power restrictions limit processors to ~5GHz

·         But, communications lines can scale to 100s of GHz

·         Dividing communications link into 10 or more “channels” that operate at different frequencies

·         This talk focused on an ESL SystemC-to-FPGA compiler

·         Why?

o   700,000 lines of RTL for a 10M gate design is too much

o   Allows executable specification

o   Verification requires executable design

o   Accelerated computing or reconfigurable computing also need C/C++ based compilation/synthesis to FPGAs

§  CPUs coupled with FPGA to support common functions at high performance and lower power

·         Note that performance limited by communications (getting data to the CPU)

o   Long wires that have to be traversed in a single clock are the limiting factor

o   This research focuses on supporting multi-cycle communications

·         xPilot: Behavior-to-RTL (Register Transfer Level design) synthesis flow

o   takes behavior spec in C/SystemC to front-end compiler to SSDM

o   SSDM is optimized using standard compiler optimization (loop unrolling, strength reduction, scheduling, etc.)

o   SSDM is compiled to:

§  Verilog/VHDL/SystemC

§  FPGAs: Altera, Xilinx

§  ASICs: Magma, Synopsys

o   UPS: Uniform Power Specification

o   During final compilation, optimize for power: shut off compute units that are not being used, and shut off units during their idle periods (a busy disk controller is frequently waiting on mechanicals and doesn’t need to execute instructions)

§  Can’t shut off FPGAs but can with ASICs

§  The only solution to power leakage is to shut the component off

o   Allows faster experimentation than hand coding.  You can try different frequencies and different power optimizations (too complex for most humans)

o   Scheduling (allocation of operations to compute logic and specific clock cycles) is NP-complete and automated techniques can exceed quality of expert designs

·         Example:

o   Schedule the behavior to RTL using the following characterization, cycle time, constraints, and objectives (a toy scheduler using these numbers follows these notes):

§  Platform characterization: adder (2ns) & multiplier (5ns)

§  Target cycle time (10ns)

§  Resource constraint: only one multiplier is available

§  Objective: high performance or lower power as examples

·         Note that as the design is optimized and component counts are reduced, less space is required, which can allow faster clocking

·         Investigating compilation for Reconfigurable Accelerated Computing

o   Take a GCC 3DES implementation and synthesize an FPGA RTL description

o   Example took 3DES from a C level implementation to an FPGA (Xilinx Virtex-5)

·         Investment bank is using this tool to compile financial optimizations from S/W implementation to FPGA accelerators (Black-Scholes S/w kernel)

o   30x speed-up over software implementation and 1/10 the power (6 vs 68W)
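
To make the scheduling example above concrete, here is a toy greedy scheduler in Python under exactly those numbers: 2ns adders, a 5ns multiplier, a 10ns clock, and only one multiplier available. The dataflow graph is made up, and real behavioral-synthesis schedulers like xPilot's are far more sophisticated; treat this purely as an illustration of chaining operations into cycles under a resource constraint.

```python
from collections import defaultdict

DELAY_NS = {"add": 2, "mul": 5}
RESOURCE_LIMIT = {"mul": 1}          # only one multiplier; adders effectively unconstrained
CYCLE_NS = 10

# op name -> (kind, predecessors) -- an invented dataflow graph for illustration
dataflow = {
    "m1": ("mul", []),
    "m2": ("mul", []),
    "a1": ("add", ["m1", "m2"]),
    "m3": ("mul", ["a1"]),
    "a2": ("add", ["m3"]),
}

def schedule(graph):
    finish_ns = {}                   # op -> absolute finish time in ns
    unscheduled = dict(graph)
    cycle = 0
    while unscheduled:
        cycle_start = cycle * CYCLE_NS
        used = defaultdict(int)
        placed = True
        while placed:                # keep chaining ops into this cycle while they still fit
            placed = False
            for op, (kind, preds) in list(unscheduled.items()):
                if any(p in unscheduled for p in preds):
                    continue         # some predecessor is not placed yet
                ready = max([finish_ns[p] for p in preds] + [cycle_start])
                end = ready + DELAY_NS[kind]
                if end <= cycle_start + CYCLE_NS and used[kind] < RESOURCE_LIMIT.get(kind, 10**9):
                    finish_ns[op] = end
                    used[kind] += 1
                    del unscheduled[op]
                    print(f"cycle {cycle}: {op} ({kind}) finishes at {end} ns")
                    placed = True
        cycle += 1
    return cycle

print("total cycles:", schedule(dataflow))   # 3 cycles for this graph
```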

 

A related presentation: http://cadlab.cs.ucla.edu/~cong/slides/fpt05_xpilot_final.pdf.

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

 Tuesday, June 03, 2008

 

Last week at Google IO, pricing was announced for Google Application Engine. Actually it was blogged the night before at: http://googleappengine.blogspot.com/2008/05/announcing-open-signups-expected.html.

 

The prices are close to identical with Amazon AWS although GAE differs substantially from the AWS offerings.  GAE offers an easy-to-use Python execution environment whereas Amazon offers the infinitely flexible run-this-virtual-machine model. Clearly the Amazon model costs more to provide so, by that measure, AWS pricing is somewhat better:

 

Google Application Engine Pricing:

·    $0.10 - $0.12 per CPU core-hour

·    $0.15 - $0.18 per GB-month of storage

·    $0.11 - $0.13 per GB outgoing bandwidth

·    $0.09 - $0.11 per GB incoming bandwidth

·    From: http://googleappengine.blogspot.com/2008/05/announcing-open-signups-expected.html

Compared with AWS Pricing:

·         $0.10 - $0.80 per VM hour (depending upon resources allocated)

·         $0.15 per GB-month of storage

·         $0.100 - $0.170 per GB outgoing bandwidth

·         $0.100 per GB incoming bandwidth

·         From: http://www.amazon.com/S3-AWS-home-page-Money/b/ref=sc_fe_l_2?ie=UTF8&node=16427261&no=3435361&me=A36L942TSJ2AJA and http://www.amazon.com/EC2-AWS-Service-Pricing/b/ref=sc_fe_l_2?ie=UTF8&node=201590011&no=3435361&me=A36L942TSJ2AJA.

There are some important differences that make the pricing comparison somewhat biased in a couple of ways: 1) as mentioned above, Amazon gives you an entire virtual machine, so EC2 is much more flexible than GAE, both in that it can run arbitrary applications in arbitrary languages and that it supports all execution models, whereas GAE only supports HTTP request/response; and 2) the storage subsystems differ.  In the numbers above, we’re comparing the Amazon blob store (S3) with the more structured storage model offered by GAE.  The more comparable AWS SimpleDB pricing is considerably higher than the GAE storage pricing: SimpleDB charges $1.50 per GB/month in addition to machine usage and network transmission costs.  GAE is offering much more affordable semi-structured storage, and the GAE storage model actually supports data types rather than having to force everything to character format.
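
To make the comparison concrete, here is a small cost calculation for a hypothetical application. The workload numbers are invented, the rates are the upper-bound figures quoted above, and the AWS side assumes the load fits in small EC2 instances billed at $0.10/hour (ignoring per-request and SimpleDB charges).

```python
# Hypothetical monthly workload, for illustration only.
cpu_core_hours = 2_000
storage_gb_months = 50
egress_gb = 300
ingress_gb = 100

# Upper-bound GAE rates quoted above.
gae = (cpu_core_hours * 0.12 + storage_gb_months * 0.18
       + egress_gb * 0.13 + ingress_gb * 0.11)

# AWS: small EC2 instances at $0.10/hour, S3 at $0.15/GB-month, quoted bandwidth rates.
aws = (cpu_core_hours * 0.10 + storage_gb_months * 0.15
       + egress_gb * 0.17 + ingress_gb * 0.10)

print(f"GAE: ${gae:,.2f}/month")   # $299.00
print(f"AWS: ${aws:,.2f}/month")   # $268.50
```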

 

GAE is still free to start: under 5M page views/month and up to ½ GB of storage cost nothing.  Obviously this helps developers get started without strings and that’s a good thing. But, more importantly, it avoids Google having to go to the expense of billing very small amounts.  In a weird sort of way, I’m more impressed with AWS billing $0.04 on some accounts in that it shows their billing system is incredibly lean. Scaling down billing is hard, hard, hard.

 

In addition to announcing prices, GAE went from a controlled admission beta to a fully open beta where all comers are welcome.  I’m impressed how quickly they have gone from making the service initially available to a full open beta.  Impressive.

 

Also announced last week was a new GAE Memcached API which appears to have been a 20% project of Brad Fitzpatrick (Sriram Krishnan sent this my way). And a set of image manipulation APIs supporting image scaling, rotating, etc. will now be part of the GAE API set.

 

My notes from Google IO:

·         http://perspectives.mvdirona.com/2008/05/29/RoughNotesFromSelectedSessionsAtGoogleIODay1.aspx

·         http://perspectives.mvdirona.com/2008/05/29/IO2008RoughNotesFromMarissaMayerDay2KeynoteAtGoogleIO.aspx

·         http://perspectives.mvdirona.com/2008/05/30/IO2008RoughNotesFromSelectedSessionsAtGoogleIODay2.aspx

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 Sunday, June 01, 2008

Yesterday the Tribute to Honor Jim Gray was held at the University of California at Berkeley. We all miss Jim deeply so it really is a tough topic.  But it was great to get together with literally 100s of Jim’s friends and share stories and talk about some of his accomplishments, his contributions to the field, and his contributions to each of us.  Jim is amazing across all three dimensions but what is most remarkable is the profound way he helped others achieve more throughout the industry.  We’re all better engineers, researchers, and human beings for having been lucky enough to have known and worked with Jim.

 

Also announced yesterday was the creation of the Jim Gray Chair at Cal Berkeley.  Bill Gates, Eric Schmidt, Marc Benioff, and Mike Stonebraker each donated $250,000, which was matched by $1,000,000 from the Hewlett Foundation.

 

Seattle PI coverage: Gathering in Berkeley, Calif., today to honor legendary scientist, Microsoft researcher Jim Gray.

 

The morning, general session agenda:

             Welcome - Shankar Sastry

             Opening Remarks - Joseph Hellerstein

             A Tribute, Not a Memorial: Understanding Ambiguous Loss - Pauline Boss

             The Amateur Search - Michael Olson

             Jim Gray at Berkeley - Michael Harrison

             Knowledge and Wisdom - Pat Helland

             Why Did Jim Gray Win the Turing Award? - Michael Stonebraker

             Jim Gray Chair - Stuart Russell

             500 Special Relationships: Jim as a Mentor to Faculty and Students - Ed Lazowska

             Jim Gray: His Contributions to Industry - David Vaskevitch

             A "Gap Bridger" - Richard Rashid

             Thanks to the U.S. Coast Guard - Paula Hawthorn

 


 

The event was video recorded and streamed via http://webcast.berkeley.edu/.

 

Update: the video will be at:  Tribute to Honor Jim Gray - General Session (thanks to George Spix for sending this my way).

Second Update: A good article by John Markoff of the NY Times: A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First.

 

                                    --jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

Sunday, June 01, 2008 10:08:01 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings
 Thursday, May 29, 2008

Google IO notes continued from earlier in the day: http://perspectives.mvdirona.com/2008/05/29/IO2008RoughNotesFromMarissaMayerDay2KeynoteAtGoogleIO.aspx and yesterday: http://perspectives.mvdirona.com/2008/05/29/RoughNotesFromSelectedSessionsAtGoogleIODay1.aspx

 

Google Web Toolkit and Client-Server Communications

·         Speaker: Miquel Mendez

·         GWT client/server communication options:

o   Frames

o   Form Panel

o   XHR: RequestBuilder (be careful not to start too many; many browsers limit concurrent requests)

o   XML RPC

·         XML Encoding/Decoding: com.google.gwt.xml defines XML related classes

·         JSON Encoding/Decoding: com.google.gwt.json.JSON defines JSON related classes

·         GWT RPC: Generator that generates code and makes use of RequestBuilder

 

Reusing Google APIs with Google Web Toolkit

·         Speaker: Miquel Mendez

·         GALGWT: Google API Library for GWT.  It’s an open source project led by Miquel (GWT bindings for Google’s JavaScript APIs).

o   It’s a collection of easy to use GWT bindings for existing Google JavaScript APIs

o   It’s a Google code open source project

·         Reminder: GWT is a Java to JavaScript compiler.

·         GWT now has a Gadget class.  Create Google Gadgets with GWT by extending the Gadget class and implementing the NeedsXXX interfaces.

·         Gears support:

o   Exposes database, LocalServer, and WorkerPool JS modules

o   Provides an offline module that automates the process of going offline (creates the necessary manifests automatically)

·         Google Maps support as well

 

Engaging User Experiences with Google App Engine

·         Speakers: John Skidgel (designer) & Lindsay Simon (developer).

·         Showed a guest book application written using Django forms.  It’s been modified to run under App Engine (didn’t say how).

·         The App Engine development environment makes it easy to work with a designer since it’s easy to install and runs well on a Mac.

·         Walked through what they called 3D (Design, Develop, & Deploy) and how they handle it.

·         Authentication options:

·         Do your own

·         Use GAE (any authenticated user or you can  narrow the population to your domain only – all supported out of the box).

·         Don’t make auth a gating factor or you will lose users – auth at the last possible moment

·         Use the App Engine Datastore for sessions

·         Decreasing Latency:

·         Create build rules to concatenate and minify (Yahoo! Minifier) CSS and JS

·         File fingerprinting

·         Set expires headers for a very long time but add a version ID.  He showed how to handle the version number on the server side.  The recommendation was a 10 year expiration with version numbers (a sketch of this approach follows this list).

·         Recommends “Progressive Enhancement” or “Defensive Enhancement”.  You should still be able to render without JS. JS should give a better experience, but you may not have JS (crappy mobile browsers, for example).  Another test: shut off CSS and the site should still work.
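A minimal sketch of the versioning-plus-far-future-expires idea described above, using the GAE webapp framework. The URL scheme, version constant, and file layout are my own assumptions, not the speakers'.

import datetime
from google.appengine.ext import webapp

APP_VERSION = '20080528'                 # hypothetical build fingerprint baked in at deploy time
TEN_YEARS = 10 * 365 * 24 * 60 * 60

class StaticResource(webapp.RequestHandler):
    def get(self, version, name):
        if version != APP_VERSION:
            # Old fingerprint: send the client to the current version of the resource.
            self.redirect('/s/%s/%s' % (APP_VERSION, name))
            return
        expires = datetime.datetime.utcnow() + datetime.timedelta(seconds=TEN_YEARS)
        self.response.headers['Cache-Control'] = 'public, max-age=%d' % TEN_YEARS
        self.response.headers['Expires'] = expires.strftime('%a, %d %b %Y %H:%M:%S GMT')
        self.response.out.write(open('static/' + name).read())   # sketch only; no path checks

application = webapp.WSGIApplication([(r'/s/([0-9]+)/(.*)', StaticResource)])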

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

Thursday, May 29, 2008 5:55:26 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services

Continued from Yesterday (day 1): Rough notes from Selected Sessions at Google IO Day 1.

 

Marissa Mayer Keynote: A Glimpse Under the Hood at Google

·         Showed iGoogle and talked about how Google Gadgets are a great way to get broad distribution and are a form of advertising.

·         Search is number 2 most used application (after email)

·         The ordinary and the everyday

·         Why is search page so simple?

·         Variation of Occam’s Razor: “the simple design is probably right”

·         Sergey did it and it was because “there was nobody else to do it and he doesn’t do HTML”

·         Described the process of answering a query (700 to 1,000 machines in 0.16 seconds):

·         This time of day we’re busy so the query will likely go to one data center and likely get bounced to another (must be a simplification of what really happens – load balancing)

·         Mixer

·         Google Web Server

·         Ads + Websearch (300 to 400 systems)

·         Back to mixer

·         Back to Web server

·         Back to load balancer

·         Split A/B Testing:

·         Give a subset of users a different user experience.  Web services allow very detailed views and make it possible to iterate very quickly and evolve rapidly (a generic bucketing sketch follows these notes).

·         Example: amount of white space under Google logo on results page?

·         This test showed convincingly that less white space, rather than more, produces more usage and more revenue

·         Example: yellow or blue as background for paid ads

·         Yellow produced both more satisfaction and more revenue.

·         “If you don’t listen to your customers, someone else will” – Sam Walton

·         But you need to test rather than ask since they often don’t know.

·         Example: would you like 10, 20, or 30 results? Users unanimously wanted 30.

·         But 10 did way better in A/B testing (30 was 20% worse) due to lower latency of 10 results

·         30 is about twice the latency of 10 (I would have expected the other overheads to dominate.  Suggests there is another solution waiting to be found here).

·         Example: Maps was 120k for launch page.  We took 30 to 40k out.  Got a proportional increase in usage.

·         Example: Google Video uploads used to be 1 day to watch while YouTube offered “Watch it now”.  Much more compelling.

·         Urgent can drown out important

·         Users go from unskilled to skilled searchers very fast (under 1 month).  Consequently it’s better to optimize for experts since most users are experts, and novices get there fast due to the fast feedback loop.

·         The lesson is to think longer term at all levels in design.

·         Think beyond the current development horizon.  10 years for major products and services.

·         Example: Universal search vs vertical search.  Users want verticals now but what they really want is universal search.  They just want to find the answer they are searching for.

·         Goog-411: don’t know if we can make money off this but it helps us develop voice recognition. Applications of voice recognition are monetizable so, even if Goog-411 doesn’t yield revenue, other applications will.

·         International content:

·         50% of the web is English but only about 1% of the web is Arabic

·         Conclusion: take an Arabic search, translate it, find relevant pages, then translate the results.  Opens up MUCH more content and dramatically improves the results for an Arabic user.

·         Larry Page: “A Healthy Disrespect for the Impossible” opens up many possibilities.

·         Showed examples of how search is not generally “solvable” but getting to 90 to 95% has HUGE benefit. Search is a hard and unconstrained problem.  Same with health records.

·         Recommendation: Be Scrappy & revel in constraints

·         Google operates in 140 countries and 110 languages.  Described the complexity of pulling out text strings from a web site, sending out to translation, dealing with multiple string versions, etc.

·         Better solution: let the users help with the translated content.  If you don’t see your language, help us do it.  There are now ¼ million users helping with translation from all over the world.

·         Interesting little Easter egg:  one of the languages on the Google home page is “Bork! Bork! Bork!” – it’s the Swedish chef from the Muppets

·         Interesting little example: they took 11k Googlers to Indiana Jones last week

·         Marissa went through a bunch of examples of taking on the impossible and brainstorming possible solutions and showing that some just exercised their thinking and others produced cool products/solutions.  Explained that 20% time is just another way of exercising the brain (“Imagination as a muscle”).  And Orkut, Google News, and during one period 50% of their new products, were from 20% time.

·         Random note: What you last searched for is the best context signal for the current search.
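The mechanics of the split tests weren’t covered in the talk, but the usual way these experiments are wired up is deterministic hashing of a stable user identifier into a bucket. A generic sketch (not Google’s implementation) is below.

import hashlib

def ab_variant(user_id, experiment, variants=('control', 'treatment')):
    # Hash (experiment, user) so each user consistently lands in the same bucket
    # for a given experiment, independent of other experiments.
    digest = hashlib.md5(('%s:%s' % (experiment, user_id)).encode('utf-8')).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Example: decide how many results this user sees.
# results_per_page = 30 if ab_variant(uid, 'results-count') == 'treatment' else 10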

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

Thursday, May 29, 2008 8:59:35 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Wednesday, May 28, 2008

Rough notes from the sessions I attended at Google IO.  The sessions are going to be available in Video so, if you want more detail (or more accuracy :-)), you can check out the videos.

 

Vic Gundotra Keynote:

·         2 hour session walking through the entire conference material, mostly with demos: OpenSocial, Google Web Toolkit, Android, Gears, etc.

·         8 Main Conference Tracks with multiple concurrent sessions in each:

o   AJAX & Javascript

o   APIs & Tools

o   Maps & Geo

o   Mobile

o   Social

o   Code Labs

o   Tech Talks

o   Fireside Chat

·         All recorded and will be available publicly.

 

OpenSocial: A Standard for the Social Web

·         How do we socialize objects online without creating yet another social network (there are already at least 100)?

·         API for controlled exchange of Friends, Profiles, and Activities (the update stream)

·         Recommends Hal Varian’s (Google Chief Economist) “Information Rules”

o   OpenSocial is an implementation of Chapter 8

·         Association of Google, MySpace, and Yahoo!

o   http://opensocial.org

·         More than 275M users of OpenSocial

·         How to build an OpenSocial application?

o   JavaScript Version 0.7 now and REST services coming soon

o   Three groupings of the API

§  People & friends

§  Activities

§  Persistence

o   Programming model is async. Send a request and set a callback function that gets called on completion.

o   Update of activity field: postActivity(text) – also supports setting priority

o   Example server side REST services:

§  /people/{guid}/@all: collection of all people connected to the user

§  /people/{guid}/@friends: the user’s friends (see the sketch after this list)

·         Main sell is to let small sites gain critical mass when the friction of yet another login system and the initial lack of users would otherwise have blocked them.  Make it easier on users.

·         Showed a map of the world showing that different social networks have won in different geographies all over the world.

o   E.g. LiveJournal (Russia), Orkut (Brazil)

·         OpenSocial gets you to all their users so plan to localize your application (OpenSocial is designed to support localization)

·         OpenSocial Terms:

o   Container:  the site (Hi5, MySpace, etc.)

o   Owner: author/owner of the page

o   Viewer: person viewing the page

·         Apache Shindig is an open source implementation with a goal of allowing new sites to host OpenSocial applications in well under an hour.

·         Shindig is an Apache Incubator project: http://incubator.apache.org/shindig

·         Summary: make the web more social, current version is 0.7, and 0.8 includes REST.

·         OpenSocial has 11 sessions in addition to this one at Google IO.
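Since the REST services were only described as coming in 0.8, here is a purely hypothetical sketch of what calling the /people/{guid}/@friends pattern mentioned above might look like from Python. The container base URL and the JSON response shape are assumptions, and real containers will also require request signing (OAuth), which is omitted here.

import json
import urllib.request

def fetch_friends(base_url, guid):
    # Build a request against the friends collection for the given user id.
    url = '%s/people/%s/@friends' % (base_url.rstrip('/'), guid)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode('utf-8'))

# Hypothetical usage:
# friends = fetch_friends('https://container.example.com/social/rest', '12345')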

 

Google App Engine

·         This session was packed.  Others were quite lightly filled.

·         Google App Engine does one thing well

o   App Engine handles HTTP requests, nothing else

o   Resources scale automatically

o   Highly scalable store based on BigTable

·         An application is a directory with everything underneath it

·         Single file app.yaml in app root directory

o   Defines app metadata

o   Maps URL patterns in regex to request handlers

o   Separates static files from program files

·         Dev Server (SDK) emulates deployment environment

·         Request Handlers:

o   Python script invoked as though it were a CGI script (a minimal sketch follows this list)

o   Environment variables give request parameters

§  PATH_INFO

§  QUERY_STRING

§  HTTP_REFERER

o   Write response to stdout

·         Runtime is Python only but the fact that it is specified in app.yaml suggests that more will eventually be added.

·         Showed Django support and how to use GAE with Django

o   Showed a minimal main.py

§  import os; from google.appengine.ext.webapp import util, …

o   Also showed minimal settings.py

·         Note: Existing Django apps will NOT port easily to GAE.
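A minimal sketch of the CGI-style request handler model described above. The app id and URL pattern are made up; the app.yaml fragment is shown as a comment for context.

# app.yaml (roughly):
#   application: guestbook-demo      # hypothetical app id
#   version: 1
#   runtime: python
#   api_version: 1
#   handlers:
#   - url: /hello.*
#     script: hello.py
#
# hello.py -- invoked per request like a CGI script: parameters arrive in environment
# variables (PATH_INFO, QUERY_STRING, HTTP_REFERER, ...) and the response, headers first,
# is written to stdout.
import os
import sys

def main():
    sys.stdout.write('Content-Type: text/plain\n\n')
    sys.stdout.write('Hello from %s\n' % os.environ.get('PATH_INFO', '/'))
    sys.stdout.write('Query string: %s\n' % os.environ.get('QUERY_STRING', ''))

if __name__ == '__main__':
    main()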

 

Google Docs + Gears == Google Docs Offline

·         Google Docs Offline Architecture:

o   Document editor

o   Spreadsheet editor

o   Presentation editor

o   Authentication

o   Docs Home (doclist)

·         Overall, no big breakthroughs.  It’s just Docs offline, but it’s work well done.

·         Challenges to disconnected operation:

o   Upgrade is a challenge: Now that code is being installed remotely, the server needs to support old code at least until the new code is pushed out and installed.

·         Possible solutions for static resources: fail to upgrade, sticky sessions, resource database, or serve the old version.

·         Solution implemented: resource database with a per-server cache (see the sketch after this list)

o   Rolling upgrade for HTML: hard code the offlineVersion and request it specifically – it will fail during rolling server upgrades but the speaker argued that it wasn’t worth the cost to avoid this failure.

o   Security: Decided not to do auth remotely and to rely on O/S facilities (if you have access to the data at the O/S level, you get access). But they do provide support for multiple users since most power users have multiple personas (work and home at least).  Multi-user support is via putting the email address of the user in a cookie.  They have a loggedin and a loggedout manifest.  The loggedout manifest redirects to a dialog to choose one of your existing accounts. This either sets the loggedout cookie to an appropriate email address or fails. (The loggedin cookie doesn’t have an email address – it has the Google security context.)

·         Recommendations:

o   Need to provide debugging tools (online you can look at the server logs – need something equivalent for offline)

o   Roll out initially to a small number of users

o   Support disabling offline experience for a user
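A tiny sketch of my reading of the "resource database with a per-server cache" approach: each server can serve any requested resource version, so clients running older code keep working during a rolling upgrade. ResourceStore is a stand-in for whatever shared store holds every shipped version of each static resource.

class VersionedResourceServer(object):
    """Serve any requested (name, version) so old clients keep working during rolling upgrades."""

    def __init__(self, store):
        self.store = store   # shared resource database: store.fetch(name, version) -> bytes
        self.cache = {}      # per-server in-memory cache of recently served versions

    def serve(self, name, version):
        key = (name, version)
        if key not in self.cache:
            self.cache[key] = self.store.fetch(name, version)
        return self.cache[key]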

 

Under the Covers of the App Engine Datastore

·         Speaker: Ryan Barrett, App Engine datastore lead

·         Bigtable in one slide:

o   Scalable structured store

o   Types on each value

o   Single row transactions

o   Two types of scans: 1) prefix (physically contiguous), 2) range scan (also physically contiguous)

·         The entities table:

o   Primary GAE table

o   Stores all entities in all apps

o   Generic and schemaless

o   Row name is entity key

o   Only column is serialized entity

·         Entity key is based on parent entities (root to child, to child, etc.)

o   Note: Can’t change a primary key but can delete and create a new entity with new key

·         Queries and indexes:

·         GQL: Google Query Language

o   A tiny subset of SQL.  Most clauses are restricted. Adds the Ancestor clause (see the sketch after these notes).

·         Bigtable only supports scans.  No sorting and no filtering.

o   Because they have no knowledge of the app or data shape, they convert all queries to scans since that is all BigTable can do.

o   Indexes:

§  Kind Index (kind, key) where kind is child, grandparent, parent, …

§  Single-property index (kind, name, value, key): serves queries on a single property (there are two indexes: ascending and descending)

§  Composite index: defined by the user in index.yaml (generated by the dev environment if you run queries over all needed composite types).

o   All index comparisons are lexicographic

o   They support index intersection.  Multiple equals filters and an equals filter and an ancestor restriction for example (just do index anding).

·         Index space consumption is not charged for since they don’t want to make people go to considerable pain to avoid using, for example, composite indexes.  Ryan went on to explain that this is what he “wants” but it is not a committed decision.

·         If a query can’t be satisfied with a range scan, the query will be failed (need index exception).

·         Transaction model: all writes are transactional

o   All writes are written to journal with timestamp

o   No locking – they use optimistic concurrency control.

o   Each entity has a last committed time.  All reads access the last committed time.  All writes check to ensure the last committed time hasn’t changed. The committed timestamp is only updated after the full value is written out and the log entry is written. The log entry is a Bigtable row and each row supports atomic writes.  He didn’t provide enough detail to fully debug/understand the commit protocol implementation.

o   You define entity groups (defined by the root entity – all descendants are in the same entity group).  Only the root has the timestamp.

o   He did say that all writes to an entity group are serialized so make the entity groups small.
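To make the entity group and GQL points concrete, here is a small sketch against the public GAE Python API of the time; the model and field names are invented for illustration.

from google.appengine.ext import db

class Account(db.Model):                 # root entity: defines the entity group
    owner = db.StringProperty()

class Note(db.Model):                    # child entity, created with parent=account
    text = db.StringProperty()
    created = db.DateTimeProperty(auto_now_add=True)

def add_note(account, text):
    # Writes within one entity group are serialized, so keep groups small.
    note = Note(parent=account, text=text)
    note.put()
    return note

def notes_for(account):
    # ANCESTOR IS is one of the few clauses GQL adds to its SQL subset.
    return db.GqlQuery('SELECT * FROM Note WHERE ANCESTOR IS :1 '
                       'ORDER BY created DESC', account)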

 

Working with Google App Engine Models:

·         Speaker: Rafe Kaplan

·         Other object relational mapping systems:

o   ActiveRecord

o   Django

o   Hibernate

·         Does not map to an RDBMS

·         No pre-existing schema

·         No joins, no aggregates, and no functions

·         Showed how to model relationships (see the sketch below)
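A brief sketch of the kind of relationship modeling shown, using ReferenceProperty from the GAE Python API; the names are invented.

from google.appengine.ext import db

class Author(db.Model):
    name = db.StringProperty()

class Story(db.Model):
    # One-to-many without a join: each Story holds a reference to its Author,
    # and author.stories becomes the reverse query.
    author = db.ReferenceProperty(Author, collection_name='stories')
    title = db.StringProperty()

# Usage sketch:
# a = Author(name='Jim'); a.put()
# Story(author=a, title='On Transactions').put()
# recent = a.stories.order('-title').fetch(10)   # a query, not a join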

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

Wednesday, May 28, 2008 5:14:59 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Friday, May 23, 2008

Wednesday Yahoo announced they have built a petascale, distributed relational database.  In Yahoo Claims Record With Petabyte Database, the details are thin but they built on the PostgreSQL relational database system. In Size matters: Yahoo claims 2-petabyte database is world's biggest, busiest, the system is described as an over 2 petabyte repository of user click stream and context data with an update rate of 24 billion events per day.  Waqar Hasan, VP of Engineering at the Yahoo! Data group, describes the system as updated in real time and live – essentially a real time data warehouse where changes go in as they are made and queries always run against the most current data. I strongly suspect they are bulk parsing logs and the data is being pushed into the system in large bulk units but, even near real time, this update rate is impressive.

 

The original work was done at a Seattle startup called Mahat Technologies, acquired by Yahoo! in November 2005.

 

The approach appears to be similar to what we did with IBM DB2 Parallel Edition.  13 years ago we had it running on a cluster of 512 RS/6000s at the Maui Super Computer Center and 256 nodes at the Cornell Theory Center.  It’s a shared nothing design, which means that each server in the cluster has independent disk and doesn’t share memory. The upside of this approach is that it scales incredibly well. It looks like Yahoo! has done something similar using PostgreSQL as the base technology.  Each node in the cluster runs a full copy of the storage engine.  The query execution engine is replaced with one modified to run over a cluster and to use a communications fabric to interconnect the nodes in the cluster.  The parallel query plans are run over the entire cluster with the plan nodes interconnected by the communication fabric.  The PostgreSQL client, communications protocol, and server side components run mostly unchanged, with some big exceptions.  The query optimizer is either replaced completely with a cluster-parallel-aware implementation that models the data layout and cluster topology in making optimization decisions, or the original, non-cluster-parallel optimizer is used and the resultant single node plans are then optimized for the cluster in a post-optimization phase. The former will yield provably better plans but it’s also more complex. I’m fearful of complexity around optimizers and, as a consequence, I actually prefer the slightly less optimal, post-optimization phase.  Many other problems have to be addressed, including having the cluster metadata available on each node to support SQL query compilation, but what I’ve sketched here covers the major points required to get such a design running.

 

The result is a modified version of PostgreSQL running on each node.  A client can connect to any of the nodes in the cluster (or a policy-restricted subset).  A query flows from the client to the server it chose to connect with. The SQL compiler on that node compiles and optimizes the query on that single node (no parallelism). The query optimizer is either cluster-aware or uses a post-optimization cluster-aware component.  The resultant query plan, when ready for execution, is divided up into sub-plans (plan fragments) that run on each node connected over the communication fabric.  Some execution engines initiate top-down and some bottom-up. I don’t recall what PostgreSQL uses but bottom-up is easier in this case.  However, either can be made to work.  The plan fragments are distributed to the appropriate nodes in the cluster.  Each runs on local data and pipes results to other nodes, which run plan fragments and forward the results yet again toward the root of the plan. The root of the plan runs on the node that started the compilation and the final results end up there to be returned to the client (a rough sketch of this scatter/gather flow follows).
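A toy illustration (mine, not Yahoo's code) of that scatter/gather flow for a simple aggregate. Here node.execute stands in for shipping a plan fragment over the communication fabric to the modified engine on each node and getting its partial result back.

from concurrent.futures import ThreadPoolExecutor

def cluster_avg(nodes, table, column):
    # The coordinator ships the same plan fragment (a partial aggregate) to every node,
    # each node runs it over its local partition, and the coordinator combines the partials.
    fragment = 'SELECT SUM(%s), COUNT(%s) FROM %s' % (column, column, table)
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        partials = list(pool.map(lambda node: node.execute(fragment), nodes))
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return float(total) / count if count else None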

 

It’s a nice approach and, as evidenced by Yahoo’s experience, it scales, scales, scales.  I also like the approach in that most tools and applications can continue to work with little change.  Most clusters of this design have some restrictions: for example, unique ID generation is either not supported or slow, as is referential integrity.  Nonetheless, a large class of software can be run without change.

 

If you are interested in digging deeper into Relational Database technology and how the major commercial systems are written, see Architecture of a Database System.

 

Yahoo has a long history of contributing to open source and they are the largest contributor to the Apache Hadoop project. It’ll be interesting to see if Yahoo! Data ends up open source or held as an internal-only asset.

 

Kevin Merritt pointed me to the Yahoo! Data work.

 

                                                -jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

Friday, May 23, 2008 6:22:38 AM (Pacific Standard Time, UTC-08:00)  #    Comments [4] - Trackback
Software
 Wednesday, May 21, 2008

Search drives the online commerce world by bringing sellers and buyers together.  As a seller, your most important task is getting your site to rank high organically and to have your advertisements placed most prominently and most frequently to users interested in buying, and only to users interested in your product.  A buyer chooses a search engine on the basis of more reliably getting them to what they are looking for.  And, with commercial queries, getting them to the “best” seller, where best is a fairly complex and hard to define term in this context.  Happy buyers keep using the search engine and paying the sellers.  Sellers who manage their organic and paid placements correctly sell lots of product.  Successful search engines make considerable profit.  That’s just the way the ecosystem has evolved – it’s the broadly used search engine that has all the influence and so they end up with considerable profit.

 

What if the rules changed?  What if some of the search engine profit was returned to users?  Could this change the ecosystem and could it be a good thing?  Let’s watch, because Microsoft is about to announce a “cash back service” later today according to Search Engine Land.  In this posting, Playing with Live Cashback, the blog author demonstrates using the Live Cashback system and concludes that it won’t have much impact.  I’m less certain.  I suspect that respecting users and returning some value to them will change this market in a positive way. It’ll be fun to watch over the next 4 to 6 weeks and see how the search ecosystem evolves.

 

                                                --jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

Wednesday, May 21, 2008 8:35:27 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Tuesday, May 20, 2008

There is no question that cloud computing is going to be a big part of the future of server-side systems. What I find interesting is the speed with which this is happening.  Look at recent network traffic growth rates from AWS:

 

From: http://aws.typepad.com/aws/2008/05/lots-of-bits.html

 

AWS is now consuming considerably more bandwidth than Amazon’s global web sites.  Phenomenal growth and impressive absolute size.

 

Continuing to look at growth, I saw a chart a few weeks back on the Amazon Web Services Blog that illustrates the value of a pay-as-you-go and pay-as-you-grow service.  This chart shows the number of EC2 servers in use by Animoto over a couple-week period. Note the explosion in EC2 server usage in the three day period from 4/15 through 4/18 and imagine trying to do capacity planning for Animoto.  They went from roughly 50 servers to needing more than 3,500 in three days. Imagine having to predict that growth and get servers racked, stacked, and online in time to meet it.  Nearly impossible.

From: http://aws.typepad.com/aws/2008/04/animoto---scali.html (Emre Kiciman sent it my way).

 

When you next hear “why web services?”, think of this chart.

 

Another point I hear frequently around web services is, “sure, they are used by start-ups but REAL enterprises would never use them due to security and data privacy reasons.”  Again, utter bunk but it’s a frequently repeated quip. I led the Exchange Hosted Services team and we provided hosted email anti-malware and archiving.   The service was originally targeting small and medium sized businesses and many from those categories did use it. But, what was interesting was the number of name-brand, world-wide enterprises that recognized the cost and quality advantages of using hosting services.  Valuable internal enterprise resources are best saved for tasks that add value to the business.  

 

Perhaps the large enterprises will use hosted email services but what about low level services such as EC2 and S3?  Again, it’s the same story.  If the value is there, companies of all sizes will use it.  From the Amazon 4th quarters earnings call, TechCrunch reports (Who Are The Biggest Users of Amazon Web Services? It’s Not Startups):

 

So who are using these services? A high-ranking Amazon executive told me there are 60,000 different customers across the various Amazon Web Services, and most of them are not the startups that are normally associated with on-demand computing. Rather the biggest customers in both number and amount of computing resources consumed are divisions of banks, pharmaceuticals companies and other large corporations who try AWS once for a temporary project, and then get hooked.

 

Big companies are jumping in as well.

 

Google recently entered the cloud computing market with Google App Engine. They are only a couple of months into beta and report they have allowed in 60,000 developers in that short period of time.  The amazing thing is the apparent size of the backlog: the forums are full of people complaining that they can’t yet get on (Sriram Krishnan sent this my way).

 

Wired recently published “Cloud Computing. Available at Amazon.com Today”.

 

It’s unusual for a new model to grow so fast and it’s close to unprecedented to see so much early growth in the enterprise.  However, when the potential savings are this large, big things can happen.

 

                                --jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

Tuesday, May 20, 2008 7:30:11 PM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Saturday, May 17, 2008

I’ve been involved with high scale systems software projects, mostly database engines, for the last 20 years and I’ve watched the transition from low level and proprietary languages to C. Then C to C++. Recently I’ve been thinking a bit about what’s next.

 

Back in the very early 90’s when I was Lead Architect on IBM DB2, I was dead against C++ usage in the Storage Engine and wouldn’t allow exceptions to be used anywhere in the system. At the time, the quality of C++ compilers was variable, with some being real compilers that were actually fairly well done (I led the IBM RS/6000 C++ team in the late 80s) while others were Cfront-based and pretty weak.  At the time no compiler, including the one I worked on, did a good job implementing exceptions.  Times change.  SQL Server, for example, is 100% C++ and it makes excellent use of exceptions to clean up resources on failure. 

 

The productivity benefits of new programming languages and tools eventually win out.  When they get broad use, implementations improve, reducing the performance tax, and, eventually, even very performance sensitive system software makes the transition.

 

I got interested in Java in the mid-90’s and more recently I’ve been using C# quite a bit partly due to where I work and partly because I actually find the language and surrounding tools impressively good.  JITed languages typically don’t perform as well as statically compiled languages but the advantages completely swamp the minor performance costs.  And, as managed language (Java, C#, etc.) implementations improve, the performance tax continues to fall. There is no question in my mind that managed languages will end up being broadly used in even the most performance critical software systems such as database engines.

 

Recently, I’ve gotten interested in Erlang as a systems software implementation language.  By most measures, it looks to be an unlikely choice for high scale system software in that it’s interpreted, has a functional subset at its core, and uses message passing rather than shared memory and locks. Basically, it’s just about the opposite of everything you would find in a modern commercial database kernel.  So what makes it interesting? The short answer is that all the things that make it an unlikely choice also make it interesting.  Servers are becoming increasingly unbalanced with CPU speeds continuing to outpace memory and network bandwidth.  More and more operations are going to be memory and network bound rather than CPU bound if they aren’t already.  Trading some CPU resources to get a more robust implementation that is easier to understand and maintain is a good choice.  In addition, CPU speed increases are now coming more from multiple cores than from frequency scaling a single core. Consequently a language that produces an abundance of parallelism is an asset rather than a problem. Finally, large systems software projects like database management systems, operating systems, web servers, IM servers, email systems, etc. are incredibly large and complex. The Erlang model of spawning many lightweight threads that communicate via message passing is going to be less efficient than the more common shared memory and locks solution but it’s much easier to get correct.  Erlang also encourages a “fail fast” programming model.  I’ve long argued that this is the only way to get high scale systems software correct (Designing and Deploying Internet-Scale Services). 

 

Certainly Erlang brings a tax, as have other new languages that we have adopted over the years. But it also brings some of what we need badly right now.  For example, the fail fast programming model is the right one and, when combined with synchronous state redundancy, is how most high-scale systems should be written.  Erlang also encourages the production of a very large number of threads, which can be a good thing on very high core count servers.  Message passing rather than shared memory with locks, and fail fast with operation restart, significantly increase the probability of the software system working correctly through unexpected events (a rough sketch of the model follows).
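As a rough analogy only (Python processes are far heavier than Erlang processes, and this is not Erlang code), here is a sketch of the "isolated workers, message passing, fail fast and restart" shape of the model:

import multiprocessing as mp
import queue

def worker(inbox, outbox):
    # Fail fast: any unexpected error kills this process; the supervisor starts a clean one
    # rather than trying to repair corrupted in-process state.
    for msg in iter(inbox.get, 'stop'):
        outbox.put(msg * 2)

def supervised_run(jobs):
    inbox, outbox = mp.Queue(), mp.Queue()
    proc = mp.Process(target=worker, args=(inbox, outbox))
    proc.start()
    results = []
    for job in jobs:
        if not proc.is_alive():                  # worker died: restart, don't repair
            proc = mp.Process(target=worker, args=(inbox, outbox))
            proc.start()
        inbox.put(job)
        try:
            results.append(outbox.get(timeout=2))
        except queue.Empty:
            results.append(None)                 # lost work is dropped here; a real system would retry
    inbox.put('stop')
    proc.join()
    return results

if __name__ == '__main__':
    print(supervised_run([1, 2, 3]))             # -> [2, 4, 6]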

 

From my perspective, the syntax of Erlang is less than beautiful but all the advantages above make up for most of that.

 

The Concurrency and Coordination Runtime is a .Net runtime that implements some of the features I mention above for languages like C#.  George Chrysanthakopoulos, Microsoft CCR Architect, reports that MySpace is using it: MySpace.com using the CCR (Sriram Krishnan pointed me to this one).

 

It appears that Erlang usage is ramping up fairly quickly right now.  Naturally, since it was developed there, Erlang is used by many Ericsson projects including the AXD301 ATM Switch and the AXE line of switches.  The AXD series includes over 850k lines of Erlang.  However, outside of Ericsson some very interesting examples are emerging.  Amazon’s SimpleDB is written in Erlang (Amazon SimpleDB is built on Erlang and What You Need To Know About Amazon SimpleDB). The recently released (quietly) Facebook Chat application uses Erlang as well (Dare Obasanjo sent that one my way).  CouchDB is written in Erlang as well (CouchDB: Thinking beyond the RDBMS).  More Erlang applications are listed in the Erlang FAQ.

Is it time for a new server-side implementation language?

 

                                                --jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

Saturday, May 17, 2008 11:16:11 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services | Software
 Monday, May 12, 2008

I’ve spent a big part of my life working on structured storage engines, first in DB2 and later in SQL Server.  And yet, even though I fully understand the value of fully schematized data, I love full text search and view it as a vital access method for all content wherever it’s stored.  There are two drivers of this opinion: 1) I believe, as an industry, we’re about ¼ of the way into a transition from primarily navigational access to personal data to access based upon full text search, and 2) getting agreement on broad, standardized schema across diverse user and application populations is very difficult. 

 

On the first point, for most content on the web, full text search is the only practical way to find it.  Navigational access is available but it’s just not practical for most content.  There is simply too much data and there is no agreement on schema so more structured searches are usually not possible.  Basically structured search is often not supported and navigational access doesn’t scale to large bodies of information.  Full text search is often the only alternative and it’s the norm when looking for something on the web. 

 

Let’s look at email.  Small amounts of email can be managed by placing each piece of email you choose to store in a specific folder so it can be found later navigationally.  This works fine, but only if we keep only a small portion of the email we get.  If we never bothered to throw out email or other documents that we come across, the time required to folderize would be enormous and unaffordable. Folderization just doesn’t scale.  When you start to store large amounts of email, or just stop (wasting time) aggressively deleting email, then the only practical way to find most content is full text search.  As soon as 5 to 10GB of un-folderized and un-categorized personal content is accumulated, it’s the web scenario all over again: search is the only practical alternative.  I understand that this scenario is not supported or encouraged by IT or legal organizations at most companies but that is the way I choose to work.  There is no technical stumbling block to providing unbounded corporate email stores and the financial ones really don’t stand up to scrutiny. Ironically, most expensive, corporate email systems offer only tiny storage quotas while most free, consumer-based services are effectively unbounded.  Eventually all companies will wake up to the fact that knowledge workers work more efficiently with all available data.  And, when that happens, even corporate email stores will grow beyond the point of practical folderization.

 

The second issue was the difficulty of standardizing schema across many different stores and many different applications.  The entire industry has wanted to do this over the past couple of decades and many projects have attempted to make progress.  If they were widely successful, it would be wonderful, but they haven’t been.  If we had standardized schema, we would have quick and accurate access to all data across all participating applications.  But it’s very hard to get all content owners to cooperate or even care.  Search engines attempt to get to the same goal but they chose a more practical approach: they use full text search and just chip away at the problem.  They work hard on ranking. They infer structure in the content where possible and exploit it where it’s found.  Where structure can’t be found, at least there is full text search with reasonably good ranking to fall back upon.

 

Strong or dominant search engine providers have considerable influence over content owners, and weak forms of schema standardization become more practical.  For example, a dominant search engine provider can offer content owners opportunities to get better search results for their web site if they supply a web site map (standard schema showing all web pages in the site).  This is already happening and web administrators are participating because it brings them value.  A web site’s ranking in the important search engines is vital and a chance to lift your ranking even slightly is worth a fortune.  Folks will work really hard where they have something to gain.  So, if adopting common schema can improve ranking, there is a significant chance something positive actually could happen. 

 

The combination of providing full text search over all content and then motivating content providers to participate in full or partial schema standardization, coupled with the search engine inferring schema where it’s not supplied, feels like a practical approach to richer search.  I love full text search and view it as the underpinning for finding all information, structured or not.  The most common queries will include both structured and non-structured components, but the common element will be that full schema standardization isn’t required, nor is it required that a user understand schema to be able to find what they need.  Over time, I think we will see incremental participation in standardized schemas but this will happen slowly.  Full text search with good ranking and relevance, assisted by whatever schema can be found or inferred in the data, will be the underpinning for finding most content over the near term.

 

                                                --jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

Monday, May 12, 2008 4:42:14 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Software
 Wednesday, May 07, 2008

Some time back I got a question on what I look for when hiring a Program Manager from the leader of a 5 to 10 person startup.  I make no promise that what I look for is typical of what others look for – it almost certainly is not.  However, when I’m leading an engineering team and interviewing for a Program Manager role, these are the attributes I look for.  My response to the original query is below:

 

The good news is that you’re the CEO, not me.  But, were our roles reversed, I would be asking you why you think you need a PM at this point.  A PM is responsible for making things work across groups and teams.  Essentially they are the grease that helps a big company ship products that work together and get them delivered through a complicated web of dependencies.  Does a single product startup in the pre-beta phase actually need a PM?  Given my choice, I would always go with more great developers at this phase of the company’s life and have the developers take more design ownership, spend more time with customers, etc.  I love the "many hats" model and it's one of the advantages of a start-up. With a bunch of smart engineers wearing as many hats as needed, you can go with less overhead and fewer fixed roles, and operate more efficiently. The PM role is super important but it’s not the first role I would staff in an early-stage startup.

 

But, you were asking for what I look for in a PM rather than advice on whether you should look to fill the role at this point in the company’s life.  I don't believe in non-technical PMs, so what I look for in a PM is similar to what I look for in a developer.  I'm slightly more willing to put up with somewhat rusty code in a PM, but that's not a huge difference.  With a developer, I'm more willing to put up with certain types of minor skill deficits in certain areas if they are excellent at writing code.  For example, a talented developer that isn’t comfortable with public speaking, or may only be barely comfortable in group meetings, can be fine. I'll never do anything to screw up team chemistry or bring in a prima donna but, with an excellent developer, I'm more willing to look at greatness around systems building and be OK with some other skills simply not being there as long as their absence doesn't screw up the team chemistry overall.  With a PM, those skills need to be there and it just won't work without them.

 

It's mandatory that PMs not get "stuck in the weeds". They need to be able to look at the big picture and yet, at the same time, understand the details, even if they aren't necessarily writing the code that implements the details.  A PM is one of the folks on the team responsible for the product hanging together and having conceptual integrity.  They are one of the folks responsible for staying realistic and not letting the project scope grow and release dates slip. They are one of the team members that need to think customer first, to really know who the product is targeting, to keep the project focused on that target, and to get the product shipped.

 

So, in summary: what I look for in a PM is similar to what I look for in a developer (http://mvdirona.com/jrh/perspectives/2007/11/26/InterviewingWithInsightAtMicrosoft.aspx) but I'll tolerate their coding possibly being a bit rusty. I expect they will have development experience. I'm pretty strongly against hiring a PM straight out of university -- a PM needs experience in a direct engineering role first to gain the experience to be effective in the PM role. I'll expect PMs to put the customer first and understand how a project comes together, keep it focused on the right customer set, not let feature creep set in, and to have the skill, knowledge, and experience to know when a schedule is based upon reality and when it's more of a dream.  Essentially I have all the expectations of a PM that I have of a senior developer, except that I need them to have a broad view of how the project comes together as a whole, in addition to knowing many of the details. They must be more customer focused, have a deeper view of the overall project schedules and how the project will come together, be a good communicator, perhaps a less sharp coder, but have excellent design skills. Finally, they must be good at getting a team making decisions, moving on to the next problem, and feeling good about it.

 

                                                --jrh

 

 

 

Wednesday, May 07, 2008 4:40:50 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Process
 Monday, May 05, 2008

I forget what brought it up but some time back Sriram Krishnan forwarded me this article on Mike Burrows and his work through DEC, Microsoft, and Google (The Genius: Mike Burrows' self-effacing journey through Silicon Valley).  I enjoyed the read.  Mike has done a lot over the years but perhaps his best known works of recent years are AltaVista at DEC and Chubby at Google.

 

I first met Mike when he was at Microsoft Research.  He and Ted Wobber (also from Digital) came up to Redmond to visit.  Back then I led the SQL Server relational engine development team, which included the full text search index support.  I was convinced then, and still am today, that relational database engines do a good job of managing structured data but a poor job of the other 90 to 95% of the data in the world that is less structured.  It just seems nuts to me that customers industry-wide are spending well over $10B a year on relational database management systems and yet are only able to effectively use these systems to manage a tiny fraction of their data.  Since an increasing fraction of the structured data in the world is already stored in relational database management systems, industry growth will come from helping customers manage their less structured data. 

 

To be fair, most RDBMSs (including SQL Server) do support full text indexing, but what I’m after is deep support for full text where the index is a standard access method rather than a separate indexing engine on the side and, more importantly, where full statistics are tracked on the full text corpus, allowing the optimizer to make high quality decisions on join orders and techniques that include full text indices.

 

If you haven’t read Mike’s original Chubby paper, do that: http://labs.google.com/papers/chubby.html.  Another paper is at: http://labs.google.com/papers/paxos_made_live.html. Chubby is an interesting combination of name server, lease manager, and mini-distributed file system.  It’s not the combination of functionality that I would have thought to bring together in a single system but it’s heavily used and well regarded at Google.  Unquestionably a success.

 

                                --jrh

 

James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh  | blog:http://perspectives.mvdirona.com

 

 

Monday, May 05, 2008 4:32:43 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.
