Author Archive
Back in early 2008, I noticed an interesting phenomenon: some workloads run more cost effectively on low-cost, low-power processors. The key observation behind this phenomenon is that CPU bandwidth consumption is going up faster than memory bandwidth. Essentially, it’s a two-part observation: 1) many workloads are memory bandwidth bound and will not run faster…
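The economics behind that observation can be sketched with a quick calculation: on a memory-bandwidth-bound workload, a faster CPU mostly stalls waiting on memory, so the slower, cheaper, lower-power part can win on work done per joule. The throughput and power figures below are illustrative assumptions, not measurements from the post.

```python
def work_per_joule(requests_per_sec: float, watts: float) -> float:
    """Cost-efficiency metric: useful work delivered per joule consumed."""
    return requests_per_sec / watts

# Hypothetical servers on a memory-bandwidth-bound workload: the high-end
# part is only 25% faster (memory-bound) but burns 5x the power.
low_power = work_per_joule(requests_per_sec=800.0, watts=60.0)
high_end = work_per_joule(requests_per_sec=1000.0, watts=300.0)

print(low_power > high_end)  # → True: the low-power part does more work per joule
```

With these assumed numbers the low-power server delivers roughly 13.3 requests per joule versus 3.3 for the high-end part, which is the core of the cost-effectiveness argument.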
Two of the highest-leverage datacenter efficiency-improving techniques currently sweeping the industry are: 1) operating at higher ambient temperatures (http://perspectives.mvdirona.com/2011/02/27/ExploringTheLimitsOfDatacenterTemprature.aspx) and 2) air-side economization with evaporative cooling (http://perspectives.mvdirona.com/2010/05/15/ComputerRoomEvaporativeCooling.aspx). The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) currently recommends that servers not be operated at inlet temperatures beyond 81F. It’s super common to…
Chris Page, Director of Climate & Energy Strategy at Yahoo! spoke at the 2010 Data Center Efficiency Summit and presented Yahoo! Compute Coop Design. The primary attributes of the Yahoo! design are: 1) 100% free air cooling (no chillers), 2) slab concrete floor, 3) use of wind power to augment air handling units, and 4)…
Datacenter temperature has been ramping up rapidly over the last 5 years. In fact, leading operators have been pushing temperatures up so quickly that the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommendations have become a trailing indicator of what is being done rather than current guidance. ASHRAE responded in January of 2009 by…
Dileep Bhandarkar presented the keynote at the Server Design Summit last December. I can never find the time to attend trade shows so I often end up reading slides instead. This one had lots of interesting tidbits so I’m posting a pointer to the talk and my rough notes here. Dileep’s talk: Watt Matters in Energy…
Ben Black always has interesting things on the go. He’s now down in San Francisco working on his startup Fastip which he describes as “an incredible platform for operating, exploring, and optimizing data networks.” A couple of days ago Deepak Singh sent me to a recent presentation of Ben’s I found interesting: Challenges and Trade-offs…
Last week, Sudipta Sengupta of Microsoft Research dropped by the Amazon Lake Union campus to give a talk on the flash memory work that he and the team at Microsoft Research have been doing over the past year. It’s super interesting work. You may recall Sudipta as one of the co-authors on the VL2 paper…
NVIDIA has been an ARM licensee for quite some time now. Back in 2008 they announced Tegra, an embedded client processor including an ARM core and NVIDIA graphics aimed at smartphones and mobile handsets. 10 days ago, they announced Project Denver where they are building high-performance ARM-based CPUs, designed to power systems ranging from “personal…
If you have experience in database core engine development, whether professionally, on open source, or at university, send me your resume. When I joined the DB world 20 years ago, the industry was young and the improvements were coming ridiculously fast. In a single release we improved DB2 TPC-A performance by a factor of 10….
Megastore is the data engine supporting Google App Engine. It’s a scalable structured data store providing full ACID semantics within partitions but lower consistency guarantees across partitions. I wrote up some notes on it back in 2008 in Under the Covers of the App Engine Datastore and posted Phil Bernstein’s excellent notes from a 2008…
Years ago I incorrectly believed special-purpose hardware was a bad idea. What was a bad idea is high-markup, special-purpose devices sold at low volume through expensive channels. Hardware implementations are often the best value measured in work done per dollar and work done per joule. The newest breed of commodity networking parts from Broadcom,…
Even working in Amazon Web Services, I’m finding the frequency of new product announcements and updates a bit dizzying. It’s amazing how fast the cloud is taking shape and the feature set is filling out. Utility computing has really been on fire over the last 9 months. I’ve never seen an entire new industry created…
I love high-scale systems and, more than anything, I love data from real systems. I’ve learned over the years that no environment is crueler, less forgiving, or harder to satisfy than real production workloads. Synthetic tests at scale are instructive but nothing catches my attention like data from real, high-scale, production systems. Consequently, I really…
Achieving a PUE of 1.10 is a challenge under any circumstances, but the vast majority of facilities that do approach this mark are using air-side economization: essentially, using outside air to cool the facility. Air-side economization brings some complexities, such as requiring particulate filters and being less effective in climates that are both hot and…
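For context, PUE (Power Usage Effectiveness) is the ratio of total facility power to power delivered to the IT equipment, so a PUE of 1.10 means only 10% overhead for cooling, power distribution, and everything else. A minimal sketch of the calculation, with hypothetical load numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,000 kW of IT load plus 100 kW of cooling and
# power-distribution overhead yields a PUE of 1.10.
print(pue(total_facility_kw=1100.0, it_equipment_kw=1000.0))  # → 1.1
```

A conventionally chilled facility might run at a PUE of 1.5 or higher, which is why hitting 1.10 generally requires eliminating chillers in favor of outside air.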
I’m interested in low-cost, low-power servers and have been watching the emerging market for these systems since 2008 when I wrote CEMS: Low-Cost, Low-Power Servers for Internet Scale Services (paper, talk). ZT Systems just announced the R1081e, a new ARM-based server with the following specs: · STMicroelectronics SPEAr 1310 with dual ARM® Cortex™-A9 cores ·…
Earlier this week Clay Magouyrk sent me a pointer to some very interesting work: A Couple More Nails in the Coffin of the Private Compute Cluster: Benchmarks for the Brand New Cluster GPU Instance on Amazon EC2. This article has detailed benchmark results from runs on the new Cluster GPU Instance type and leads…