In the data center world, few events are taken more seriously than a power failure, and considerable effort is spent making them rare. When a datacenter experiences a power failure, it’s a really big deal for all involved. But a big deal in the infrastructure world still really isn’t a big deal on the world stage. The Super Bowl absolutely is a big deal by any measure. Over the last couple of years, the Super Bowl has averaged 111 million viewers and is the most-watched television show in North America, eclipsing the final episode of M*A*S*H. World-wide, the Super Bowl trails only the European Cup (UEFA Champions League), which draws 178 million viewers.
When the 2013 Super Bowl power event occurred, the Baltimore Ravens had just run back the second-half opening kickoff for a touchdown and were dominating the game with a 28 to 6 lead. The 49ers had already played half the game without scoring a single touchdown. The Ravens had opened the second half by tying the record for the longest kickoff return in NFL history at 108 yards, and the momentum was strongly with Baltimore.
At 13:22 in the third quarter, just 98 seconds into the second half, ½ of the Superdome lost primary power. Fortunately it wasn’t during the runback that started the second half. The power failure led to a 34-minute delay to restore full lighting to the field and, when the game restarted, the 49ers were on fire. The game was fundamentally changed by the outage, with the 49ers rallying back to a narrow 3-point defeat. The game ended 34 to 31 and really did come down to the wire, where either team could have won. There is no question the game was exciting, and some will argue the power failure actually made it more exciting. But NFL championships should be decided on the field, not impacted by the electrical system of the host stadium.
What happened at 13:22 in the third quarter when much of the field lighting failed? Entergy, the utility supplying power to the Superdome, reported that their “distribution and transmission feeders that serve the Superdome were never interrupted” (Before Game Is Decided, Superdome Goes Dark). It was a problem at the facility.
The joint report from SMG, the company that manages the Superdome, and Entergy, the utility power provider, said:
A piece of equipment that is designed to monitor electrical load sensed an abnormality in the system. Once the issue was detected, the sensing equipment operated as designed and opened a breaker, causing power to be partially cut to the Superdome in order to isolate the issue. Backup generators kicked in immediately as designed.
Entergy and SMG subsequently coordinated start-up procedures, ensuring that full power was safely restored to the Superdome. The fault-sensing equipment activated where the Superdome equipment intersects with Entergy’s feed into the facility. There were no additional issues detected. Entergy and SMG will continue to investigate the root cause of the abnormality.
Essentially, the utility circuit breaker detected an “anomaly” and opened the breaker. Modern switchgear have many sensors monitored by firmware running on a programmable logic controller. The advantage of these software systems is that they are incredibly flexible and can be configured uniquely for each installation. The disadvantage is that the wide variety of configurations they support can be complex, so the default configurations get used perhaps more often than they should be. And in a country where legal settlements can be substantial, default configurations tend towards the conservative side. We don’t know if that was a factor in this event, but we do know that no fault was found and the power was stable for the remainder of the game. This was almost certainly a false trigger.
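To make the false-trigger risk concrete, here is a minimal sketch of how a trip rule in breaker firmware might treat a brief current transient. The function, thresholds, and sample values are all hypothetical illustrations, not the Superdome's actual settings:

```python
# Hypothetical sketch of a protective-relay trip rule. The pickup threshold
# and time delay are illustrative defaults, not the Superdome's actual settings.

def should_trip(current_samples_amps, pickup_amps, delay_samples):
    """Trip if current exceeds the pickup threshold for `delay_samples`
    consecutive samples. A conservative (low) pickup or short delay makes
    false trips on harmless transients more likely."""
    consecutive_over = 0
    for sample in current_samples_amps:
        consecutive_over = consecutive_over + 1 if sample > pickup_amps else 0
        if consecutive_over >= delay_samples:
            return True
    return False

# A brief transient spike that causes no lasting fault:
load = [900, 920, 910, 1450, 1480, 1460, 905, 910]  # amps, one sample per cycle

# Conservative default: short delay -> opens the breaker on the transient.
print(should_trip(load, pickup_amps=1200, delay_samples=3))   # True

# Tuned for this installation: longer delay rides through the transient.
print(should_trip(load, pickup_amps=1200, delay_samples=6))   # False
```

The point of the sketch: the same hardware, with different parameter values, either rides through a transient or takes down half a stadium.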
The cause has not yet been reported and, quite often in events like this, the underlying root cause is never found. But it’s worth asking: is it possible to avoid long game outages, and what would it cost? As when looking at any system fault, the tools we have to mitigate the impact are: 1) avoid the fault entirely, 2) protect against the fault with redundancy, 3) minimize the impact of the fault through small fault zones, and 4) minimize the impact through fast recovery.
Fault avoidance: Avoidance starts with using good quality equipment, configuring it properly, maintaining it well, and testing it frequently. Given the Superdome just went through a $336 million renovation, the switchgear may have been relatively new and, even if it wasn’t, it almost certainly was recently maintained and inspected.
Where issues often arise is in configuration. Modern switchgear have an amazingly large number of parameters, many of which interact with each other and, in total, can be difficult to fully understand. And, since switchgear manufacturers know little about the intended end-use application of each unit sold, they ship conservative default settings. Generally, the risk and potential negative impact of a false positive (a breaker opening when it shouldn’t) is far less than that of a breaker failing to open. Consequently, conservative settings are common.
Another common cause of problems is lack of testing. The best way to verify that equipment works is to test at full production load in a full production environment in a non-mission critical setting. Then test it just short of overload to ensure that it can still reliably support the full load even though the production design will never run it that close to the limit, and finally, test it into overload to ensure that the equipment opens up on real faults.
The first, testing in a full production environment in a non-mission-critical setting, is always done prior to a major event. But the latter two tests are much less common: 1) testing at rated load, and 2) testing beyond rated load. Both require synthetic load banks and skilled electricians, so these tests are often not done. You really can’t beat testing in a non-mission-critical setting as a means of ensuring that things work well in a mission-critical setting (game time).
Redundancy: If we can’t avoid a fault entirely, the next best thing is to have redundancy to mask the fault. Faults will happen. The electrical fault at the Monday Night Football game back in December of 2011 was caused by a utility sub-station failure. These faults are unavoidable and will happen occasionally. But is protection against utility failure possible and affordable? Sure, absolutely. Let’s use the Superdome fault yesterday as an example.
The entire Superdome load is only 4.6MW. This load would be easy to support on two 2.5 to 3.0MW utility feeds, each protected by its own generator. Generators in the 2.5 to 3.0MW range are substantial V16 diesel engines the size of a mid-sized bus. They are expensive, running just under $1M each, but they are also available in mobile form and are inexpensive to rent. The rental option is a no-brainer, but let’s ignore that and look at what it would cost to protect the Superdome year-round with a permanent installation. We would need two generators, the switchgear to connect them to the load, and uninterruptible power supplies to hold the load during the first few seconds of a power failure until the generators start up and are able to pick up the load. To be super safe, we’ll buy a third generator just in case one of the two generators doesn’t start. The generators are under $1M each, and the overall cost of the entire redundant power configuration with the extra generator could be had for under $10M. Looking at statistics from the 2012 event, a 30-second commercial costs just over $4M.
For the price of just over 60 seconds of commercials, the facility could be protected against this fault. And, using rental generators, less than 30 seconds of commercials would provide the needed redundancy to avoid impact from any utility failure. Given how common utility failures are and the negative impact of power disruptions at a professional sporting event, this looks like good value to me. Most sports facilities choose to avoid this “unnecessary” expense and I suspect the Superdome doesn’t have full redundancy for all of its field lighting. But even if it did, this failure mode can sometimes cause the generators to be locked out and not pick up the load during some power events. When a utility breaker incorrectly senses a ground fault within the facility, the system is frequently configured not to put the generator at risk by switching it into a potential ground fault. My take is that I would rather run the risk of damaging the generator and avoid the outage, so I’m not a big fan of this “safety” configuration, but it is a common choice.
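The cost argument can be checked with simple arithmetic, using the rough figures quoted above (all of them approximate):

```python
# Back-of-envelope check of the redundancy cost argument, using the rough
# figures quoted in the text (all approximate).

facility_load_mw = 4.6
generator_mw = 2.5              # each feed backed by a 2.5-3.0MW generator
generators_needed = 2           # two feeds carry the full load
spare_generators = 1            # N+1: one extra in case a unit fails to start

total_redundancy_cost = 10.0e6  # generators + switchgear + UPS, all-in estimate
commercial_cost_per_30s = 4.0e6 # 2012 Super Bowl rate, just over $4M

# Two generators can carry the entire facility load:
assert generators_needed * generator_mw >= facility_load_mw

seconds_of_ads = total_redundancy_cost / commercial_cost_per_30s * 30
print(f"Full redundant installation ~= {seconds_of_ads:.0f} seconds of ad time")
# ~75 seconds, i.e. "just over 60 seconds of commercials"
```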
Minimize Fault Zones: The reason why only ½ the power to the Superdome went down was because the system installed at the facility has two fault containment zones. In this design, a single switchgear event can only take down ½ of the facility.
Clearly the first choice is to avoid the fault entirely. And, if that doesn’t work, have redundancy take over and completely mask the fault. But, in the rare cases where none of these mitigations work, the next defense is small fault containment zones. Rather than using 2 zones, spend more on utility breakers and have 4 or 6 and, rather than losing ½ the facility, lose ¼ or 1/6. And, if the lighting power is checkerboarded over the facility lights (lights in a contiguous region are not all powered by the same utility feed but the feeds are distributed over the lights evenly), rather than losing ¼ or 1/6 of the lights in one area of the stadium, we would lose that fraction of the lights evenly over the entire facility. Under these conditions, it might be possible to operate with slightly degraded field lighting and continue the game without waiting for light recovery.
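The checkerboarding idea can be sketched in a few lines. The light and feed counts here are illustrative, not the Superdome's actual numbers:

```python
# Sketch of "checkerboarding" field lights over utility feeds: neighboring
# lights are assigned to different feeds round-robin, so losing one feed dims
# the whole field evenly rather than blacking out one region.
# Light counts and feed counts are illustrative only.

def assign_feeds(num_lights, num_feeds):
    """Return the feed powering each light, distributed round-robin."""
    return [light % num_feeds for light in range(num_lights)]

def fraction_lost_in_region(assignment, failed_feed, region):
    """Fraction of lights lost inside a contiguous region of the field."""
    hits = [assignment[i] == failed_feed for i in region]
    return sum(hits) / len(hits)

feeds = assign_feeds(num_lights=96, num_feeds=4)

# With 4 feeds, any contiguous strip of lights loses only 1/4 of its lights
# when one feed fails, instead of going completely dark:
print(fraction_lost_in_region(feeds, failed_feed=2, region=range(0, 24)))  # 0.25
```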
Fast Recovery: Before we get to this fourth option, fast recovery, we have tried hard to avoid failure, used power redundancy to mask the failure, and used small fault zones to minimize the impact. The next best thing we can do is to recover quickly. Fast recovery depends broadly on two things: 1) where possible, automate recovery so it can happen in seconds rather than at the rate at which humans can act, and 2) if humans are needed, ensure they have access to adequate monitoring and event recording gear so they can quickly see what happened, and that they have trained extensively and are able to act quickly.
In this particular event, the recovery was not automated. Skilled electrical technicians were required. They spent nearly 15 minutes checking system state before deciding it was safe to restore power. Generally, 15 minutes for a human-judgment-driven recovery decision isn’t bad. But the overall outage was 34 minutes. If the power was restored in 15 minutes, what happened during the next 20? The gas discharge lighting still favored at large sporting venues takes roughly 15 minutes to restart after even a momentary outage. A very short power interruption still suffers the same long recovery time. Newer lighting technologies are becoming available that are both more power efficient and don’t suffer from these long warm-up periods.
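A rough reconstruction of the 34-minute outage, using the approximate figures above (the exact split of the last few minutes is my assumption, not from the official account):

```python
# Rough reconstruction of the 34-minute outage timeline. The first two
# figures come from the account above; the remainder is my assumption.

diagnosis_minutes = 15      # technicians verifying it was safe to re-close
lamp_restrike_minutes = 15  # typical warm-up for gas discharge stadium lighting
other_minutes = 4           # remaining restart and coordination time (assumed)

total = diagnosis_minutes + lamp_restrike_minutes + other_minutes
print(total)  # 34

# Even a one-second interruption pays the full lamp warm-up penalty:
momentary_outage_minutes = 1 / 60
print(round(momentary_outage_minutes + lamp_restrike_minutes))  # 15
```

This is why the lamp technology matters as much as the recovery procedure: the warm-up time puts a hard floor under any outage, however brief.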
It doesn’t appear that the final victor of Super Bowl XLVII was changed by the power failure, but there is no question the game was broadly impacted. If the light failure had happened during the kickoff return starting the third quarter, the game may have been changed in a very fundamental way. Better power distribution architectures are cheap by comparison. Given the value of the game and the relatively low cost of power redundancy equipment, I would argue it’s time to start retrofitting major sporting venues with more redundant designs and employing more aggressive pre-game testing.
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com
Since 2008, I’ve been excited by, working on, and writing about Microservers. In these early days, some of the workloads I worked with were I/O bound and didn’t really need or use high single-thread performance. Replacing the server class processors that supported these applications with high-volume, low-cost client system CPUs yielded both better price/performance and power/performance. Fortunately, at that time, there were good client processors available with ECC enabled (see You Really DO Need ECC) and most embedded system processors also supported ECC.
I wrote up some of the advantages of these early microserver deployments and showed performance results from a production deployment in an internet-scale mail processing application in Cooperative, Expendable, Microslice, Servers: Low-Cost, Low-Power Servers for Internet-Scale Services.
Intel recognizes the value of low-power, low-cost processors for less CPU-demanding applications and announced this morning the newest members of the Atom family, the S1200 series. These new processors support 2 cores and 4 threads and are available in variants of up to 2GHz while staying under 8.5 watts. The lowest-power members of the family come in at just over 6W. Intel has demonstrated an S1200 reference board running SPECweb at 7.9W including memory, SATA, networking, BMC, and other on-board components.
Unlike past Atom processors, the S1200 series supports full ECC memory. And all members of the family support hardware virtualization (Intel VT-x2), 64 bit addressing, and up to 8GB of memory. These are real server parts.
Centerton (S1200 series) features:
One of my favorite Original Design Manufacturers, Quanta Computer, has already produced a shared infrastructure rack design that packs 48 Atom S1200 servers into a 3 rack unit form factor (5.25”).
Quanta S900-X31A front and back view:
Quanta S900-X31A server drawer:
Quanta has done a nice job with this shared infrastructure rack. Using this design, they can pack a booming 624 servers into a standard 42 RU rack.
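The density arithmetic works out if we assume (my assumption, not Quanta's published spec) that 13 drawers fill 39U of the rack, leaving 3U for networking and power gear:

```python
# Density check for the Quanta shared-infrastructure design. The 13-drawer
# split is my assumption to reconcile the quoted 624 servers with a 42U rack;
# the remaining units presumably hold switches and power distribution.

servers_per_drawer = 48
drawer_height_ru = 3
rack_height_ru = 42

drawers = 13  # 13 x 3U = 39U of server drawers, 3U left for other gear
assert drawers * drawer_height_ru <= rack_height_ru

print(drawers * servers_per_drawer)  # 624 servers per rack
```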
I’m excited by the S1200 announcement because it’s both a good price/performer and power/performer and shows that Intel is serious about the microserver market. This new Atom gives customers access to microserver pricing without having to change instruction set architectures. The combination of low-cost, low-power, and the familiar Intel ISA with its rich tool chain and broad application availability is a compelling combination. It’s exciting to see the microserver market heating up and I like Intel’s roadmap looking forward.
Related Microserver focused postings:
· Cooperative Expendable Microslice Servers: Low-cost, Low-power Servers for Internet Scale Services
· The Case for Low-Cost, Low-Power Servers
· Low Power Amdahl Blades for Data Intensive Computing
· Microslice Servers
· ARM Cortex-A9 Design Announced
· 2010 the Year of the Microslice Server
· Very Low Power Server Progress
· Nvidia Project Denver: ARM Powered Servers
· ARM V8 Architecture
· AMD Announced Server-Targeted ARM Part
· Quanta S900-X31A
I have been interested in, and writing about, microservers since 2007. Microservers can be built using any instruction set architecture, but I’m particularly interested in ARM processors and their application to server-side workloads. Today Advanced Micro Devices announced they are going to build an ARM CPU targeting the server market. This will be a 4-core, 64-bit, 2+GHz part that is expected to sample in 2013 and ship in volume in early 2014.
AMD is far from new to the microserver market. In fact, much of my past work on microservers has been AMD-powered. What’s different today is that AMD is applying their server processor skills while, at the same time, leveraging the massive ARM processor ecosystem. ARM processors power Apple iPhones, Samsung smartphones, tablets, disk drives, and applications you didn’t even know had computers in them.
The defining characteristic of server processor selection is to focus first and most on raw CPU performance and accept the high cost and high power consumption that follow from that goal. The defining characteristic of microservers is that we leverage the high-volume client and connected-device ecosystem and make a CPU selection on the basis of price/performance and power/performance, with an emphasis on building balanced servers. The case for microservers is anchored upon these 4 observations:
· Volume economics: Rather than draw on the small-volume economics of the server market, with microservers we leverage the massive volume economics of the smart device world driven by cell phones, tablets, and clients. To give some scale to this observation, IDC reports that there were 7.6M server units sold in 2010, while ARM reports that there were 6.1B ARM processors shipped last year. The connected and embedded device market volumes are roughly 1000x larger than those of the server market, and the performance gap is shrinking rapidly. Semiconductor analyst Semicast estimates that by 2015 there will be 2 ARM processors for every person in the world. In 2010, ARM reported that, on average, there were 2.5 ARM-based processors in each smartphone.
Having watched and participated in our industry for nearly 3 decades, one reality seems to dominate all others: high-volume economics drives innovation and just about always wins. As an example, IBM mainframes ran just about every important server-side workload in the mid-80s. But they were largely swept aside by higher-volume RISC servers running UNIX. At the time I loved RISC systems – database systems would just scream on them and they offered customers excellent price/performance. But the same trend played out again: the higher-volume X86 processors from the client world swept the superior raw-performing RISC systems aside.
Invariably, what we see happening about once a decade is a high-volume, lower-priced technology taking over the low end of the market. When this happens, many engineers correctly point out that these systems can’t hold a candle to the previous generation server technology and then incorrectly believe they won’t get replaced. The new generation is almost never better in absolute terms, but they are better price/performers, so they are first adopted for the less performance-critical applications. Once this happens, the die is cast and the outcome is just about assured: the high-volume parts move up market and eventually take over even the most performance-critical workloads of the previous generation.
· Not CPU bound: Most discussion in our industry centers on the more demanding server workloads like databases but, in reality, many workloads are not pushing CPU limits and are instead storage, networking, or memory bound. There are two major classes of workloads that don’t need or can’t fully utilize more CPU:
1. Some workloads simply do not require the highest performing CPUs to achieve their SLAs. You can pay more and buy a higher performing processor, but it will achieve little for these applications.
2. The second class of workloads is characterized by being blocked on networking, storage, or memory. And by memory bound I don’t mean the memory is too small; it isn’t the size of the memory that is the problem, but the bandwidth. The processor looks fully utilized from an operating system perspective, but the bulk of its cycles are spent waiting for memory. Disk-bound systems are easy to detect: the disks run close to 100% utilization while the CPU load is much lower. Memory bound is more challenging to detect, but it’s super common, so it's worth talking about. Most server processors are super-scalar, which is to say they can retire multiple instructions each cycle. On many workloads, less than 1 instruction is retired each cycle (you can see this by monitoring instructions per cycle) because the processor is waiting on memory transfers.
If a workload is bound on network, storage, or memory, spending more on a faster CPU will not deliver results. The same is true for non-demanding workloads. They too are not bound on CPU so a faster part won’t help in this case either.
· Price/performance: Device price/performance is far better than current generation server CPUs. Because there is less competition in server processors, prices are far higher and price/performance is relatively low compared to the device world. Using server parts, performance is excellent but price is not.
Let’s use an example again: a server CPU is hundreds of dollars, sometimes approaching $1,000, whereas the ARM processor in an iPhone comes in at just under $15. My general rule of thumb in comparing ARM processors with server CPUs is that they are capable of ¼ the processing rate at roughly 1/10th the cost. And, super important, the massive shipping volume of the ARM ecosystem feeds innovation and competition, and the performance gap shrinks with each processor generation. Each generational improvement captures more possible server workloads while further improving price/performance.
· Power/performance: Most modern servers run over 200W, and many are well over 500W, while microservers can weigh in at 10 to 20W. Nowhere is power/performance more important than in portable devices, so the pace of power/performance innovation in the ARM world is incredibly strong. In fact, I’ve long used mobile devices as a window into future innovations coming to the server market. The technologies you see in the current generation of cell phones have a very high probability of being used in a future server CPU generation.
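The memory-bound case described above is the hardest of these bottlenecks to spot from standard OS metrics. Here is a minimal sketch of the classification logic; the thresholds are illustrative, and on real systems the IPC figure would come from hardware performance counters (e.g., via perf):

```python
# Sketch of classifying a server's bottleneck from utilization figures and
# instructions-per-cycle (IPC). Thresholds are illustrative only; on real
# systems IPC comes from hardware counters (e.g., `perf stat` on Linux).

def classify_bottleneck(cpu_util, disk_util, net_util, ipc):
    if disk_util > 0.9 and cpu_util < 0.5:
        return "storage-bound"
    if net_util > 0.9 and cpu_util < 0.5:
        return "network-bound"
    if cpu_util > 0.9 and ipc < 1.0:
        # CPU looks busy to the OS, but most cycles are stalls on memory:
        # a superscalar core should retire well over 1 instruction per cycle.
        return "memory-bound"
    if cpu_util > 0.9:
        return "CPU-bound"
    return "not saturated"

print(classify_bottleneck(cpu_util=0.98, disk_util=0.2, net_util=0.1, ipc=0.6))
# memory-bound: a faster CPU would mostly wait on the same memory transfers
```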
This is not the first ARM-based server processor to be announced. And even more announcements are coming over the next year. In fact, that is one of the strengths of the ARM ecosystem: the R&D investments can be leveraged over huge shipping volumes from many producers to bring more competition, lower costs, more choice, and a faster pace of innovation.
This is a good day for customers, a good day for the server ecosystem, and I’m excited to see AMD help drive the next phase in the evolution of the ARM Server market. The pace of innovation continues to accelerate industry-wide and it’s going to be an exciting rest of the decade.
Past notes on Microservers:
When I come across interesting innovations or designs notably different from the norm, I love to dig in and learn the details. More often than not, I post them here. Earlier this week, Google posted a number of pictures taken from their datacenters (Google Data Center Tech). The pictures are beautiful and of interest to just about anyone, somewhat more interesting to those working in technology, and worthy of detailed study for those working in datacenter design. My general rule with Google has always been that anything they show publicly is always at least one generation old and typically more. Nonetheless, the Google team does good work, so the older designs are still worth understanding and I always have a look.
Some examples of older but interesting Google data center technology:
· Efficient Data Center Summit
· Rough Notes: Data Center Efficiency Summit
· Rough notes: Data Center Efficiency Summit (posting #3)
· 2011 European Data Center Summit
The set of pictures posted last week (Google Data Center Tech) is a bit unusual in that they are current pictures of current facilities running their latest work. Only pictures were published, without explanatory detail, but, as the old cliché says, a picture is worth a thousand words. I found the mechanical design to be most notable, so I’ll dig into that area a bit, but let’s start with a conventional datacenter mechanical design as a foil against which to compare the Google approach.
The conventional design has numerous issues, the most obvious being that any design that is 40 years old probably could use some innovation. Notable problems with the conventional design: 1) no hot aisle/cold aisle containment, so there is air leakage and mixing of hot and cold air, 2) air is moved long distances between the Computer Room Air Handlers (CRAHs) and the servers, and air is an expensive fluid to move, and 3) it’s a closed system where hot air is recirculated after cooling rather than released outside with fresh air brought in and cooled if needed.
An example of an excellent design that does a modern job of addressing most of these failings is the Facebook Prineville Oregon facility:
I’m a big fan of the Facebook facility. In this design they eliminate the chilled water system entirely, have no chillers (expensive to buy and power), have full hot aisle isolation, use outside air with evaporative cooling, and treat the entire building as a giant, high-efficiency air duct. More detail on the Facebook design at: Open Compute Mechanical System Design.
Let’s have a look at the Google Council Bluffs, Iowa facility:
You can see they have chosen a very large, single-room approach rather than sub-dividing into pods. As with any good, modern facility, they have hot aisle containment, which just about completely eliminates leakage of air around the servers or over the racks. All chilled air passes through the servers and none of the hot air leaks back prior to passing through the heat exchanger. Air containment is a very important efficiency gain, the single largest after air-side economization. Air-side economization is the use of outside air rather than taking hot server exhaust and cooling it to the desired inlet temperature (see the diagram above showing the Facebook use of full-building ducting with air-side economization).
From the Council Bluffs picture, we see Google has taken a completely different approach. Rather than eliminate the chilled water system entirely and use the entire building as an air duct, they have kept the piped water cooling system and focused on making it as efficient as possible, exploiting some of the advantages of water-based systems. This shot from the Google Hamina, Finland facility shows the multi-coil heat exchanger at the top of the hot aisle containment system.
From inside the hot aisle, in this picture from the Mayes County data center, we can see the water is brought up from below the floor in the hot aisle using steel-braided flexible chilled water hoses. These pipes bring cool water up to the top-of-hot-aisle heat exchangers that cool the server exhaust air before it is released above the racks of servers.
One of the key advantages of water cooling is that water is a cheaper fluid to move than air for a given thermal capacity. In the Google design, they exploit this fact by bringing water all the way to the rack. This isn’t an industry first, but it is nicely executed in the Google design. IBM iDataPlex brought water directly to the back of the rack and many high power density HPC systems have done this as well.
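A quick calculation shows why water wins as a heat-transport fluid: its volumetric heat capacity is vastly higher than air's, so far less volume must be moved to carry the same heat at the same temperature rise. The rack load and temperature rise below are illustrative values, not Google's actual figures:

```python
# Why water is the cheaper fluid to move: compare the volumetric flow needed
# to carry the same heat load at the same temperature rise. The 20kW rack
# load and 10C rise are illustrative values; the fluid properties are
# standard physical constants (approximate, at room conditions).

def volumetric_flow_m3_per_s(load_kw, delta_t_c, rho_kg_m3, cp_kj_per_kg_c):
    """Volume flow needed to absorb `load_kw` with a `delta_t_c` rise."""
    return load_kw / (rho_kg_m3 * cp_kj_per_kg_c * delta_t_c)

load_kw = 20.0  # one high-density rack
delta_t = 10.0  # degrees C rise across the heat exchanger

air = volumetric_flow_m3_per_s(load_kw, delta_t, rho_kg_m3=1.2, cp_kj_per_kg_c=1.005)
water = volumetric_flow_m3_per_s(load_kw, delta_t, rho_kg_m3=1000.0, cp_kj_per_kg_c=4.18)

print(f"air:   {air:.3f} m^3/s")     # roughly 1.66 m^3/s
print(f"water: {water:.6f} m^3/s")   # under 0.0005 m^3/s
print(f"ratio: {air / water:.0f}x")  # several thousand times less water volume
```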
I don’t see the value of the short stacks above the heat exchangers. I would think that any gain in air acceleration through the smoke stack effect would be dwarfed by the losses of having the passive air stacks as restrictions over the heat exchangers.
Bringing water directly to the rack is efficient, but I still somewhat prefer air-side economization systems. Any system that can reject hot air outside and bring in outside air for cooling (if needed) for delivery to the servers is tough to beat (see the diagram at the top for an example approach). However, as server density climbs, we will eventually reach power densities sufficiently high that water is needed either very near the server, as Google has done, or in direct water cooling as used by IBM mainframes in the 80s (thermal conduction module). One very nice contemporary direct water cooling system is the work by Green Revolution Cooling, where they completely immerse otherwise unmodified servers in a bath of chilled oil.
Hats off to Google for publishing a very informative set of data center pictures. The pictures are well done and the engineering is very nice. Good work!
· Here’s a very cool Google Street view based tour of the Google Lenoir NC Datacenter.
· The detailed pictures released last week: Google Data Center Photo Album
Last night, Tom Klienpeter sent me The Official Report of the Fukushima Nuclear Accident Independent Investigation Commission Executive Summary. They must have hardy executives in Japan in that the executive summary runs 86 pages in length. Overall, it’s an interesting document, but I only managed to read into the first page before starting to feel disappointed. What I was hoping for was a deep dive into why the reactors failed, the root causes of the failures, and what can be done to rectify them.
Because of the nature of my job, I’ve spent considerable time investigating hardware and software system failures, and what I find most difficult and really time consuming is getting to the real details. It’s easy to say there was a tsunami, it damaged the reactor complex, and loss of power caused a radiation release. But why did loss of power cause a radiation release? Why didn’t the backup power systems work? Why does the design depend upon the successful operation of backup power systems? Digging to the root cause takes time, requires that all assumptions be challenged, and invariably leads to many issues that need to be addressed. Good post mortems are detailed, get to the root cause, and it’s rare that a detailed investigation of any complex system doesn’t yield a long, detailed list of design and operational changes. The Rogers Commission on the Space Shuttle Challenger failure is perhaps the best example of digging deeply, finding root causes both technical and operational, and making detailed recommendations.
On the second page of the report, the committee members are enumerated. The committee includes a seismologist, two medical doctors, a chemist, a journalist, two lawyers, a social system designer, and a politician – but no nuclear scientists, no reactor designers, and no reactor operators. The earthquake and subsequent tsunami were clearly the seed for the event but, since we can’t prevent these, I would argue they should only play a contextual role in the post mortem. What we need to understand is exactly why both the reactor and the nuclear material storage designs were not stable in the presence of cooling system failure. It's odd that there were no experts in the subject area where the most dangerous technical problems were encountered. Basically, we can’t stop earthquakes and tsunamis, so we need to ensure that systems remain safe in their presence.
Obviously the investigative team is very qualified to deal with the follow-on events: assessing radiation exposure risk, how the evacuation was carried out, and regulatory effectiveness. And it is clear these factors are all important. But still, it feels like the core problem is that cooling system flow was lost and both the reactors and the nuclear material storage ponds overheated. Using materials that, when overheated, release explosive hydrogen gas is a particularly important area of investigation.
Personally, were it my investigation, the largest part of my interest would be focused on achieving designs stable in the presence of failure. Failing that, getting really good at evacuation seems like a good idea, but it is still less important than ensuring these reactors and others in the country fail into a safe state.
The report reads like a political document. It’s heavy on blame, light on root cause and the technical details of the root cause failure, and the recommended solution depends upon more regulatory oversight. The document focuses on more oversight by the Japanese Diet (a political body) and regulatory agencies but doesn't go after the core issues that led to the nuclear release. From my perspective, the first key issue is 1) scramming the reactor has to 100% stop the reaction, and the passive cooling has to be sufficient to ensure the system can cool from full operating load without external power, operational oversight, or other input beyond dropping the rods. Good SCRAM systems automatically deploy and stop the nuclear reaction. This is common. What is uncommon is ensuring the system can successfully cool from a full-load operational state without external input of power, cooling water, or administrative input.
The second key point that this nuclear release drove home for me is: 2) all nuclear material storage areas must be seismically stable, above flood water height, maintain integrity through natural disasters, and be able to stay stable and safe without active input or supervision for long periods of time. They can't depend upon pumped water cooling and have to be 100% passive and stable for long periods without tending.
My third recommendation is arguably less important than my first two but applies to all systems: operators can't figure out what is happening or take appropriate action without detailed visibility into the state of the system. The monitoring system needs to be independent (power, communications, sensors, …), detailed, and able to operate correctly with large parts of the system destroyed or inoperative.
My fourth recommendation is absolutely vital and I would never trust any critical system without this: test failure modes frequently. Shut down all power to the entire facility at full operational load and establish that temperatures fall rather than rise and no containment systems are negatively impacted. Shut off the monitoring system and ensure that the system continues to operate safely. Never trust any system in any mode that hasn’t been tested.
The recommendations from the Official Report of the Fukushima Nuclear Accident Independent Investigation Commission Executive Summary follow:
Monitoring of the nuclear regulatory body by the National Diet
A permanent committee to deal with issues regarding nuclear power must be established in the National Diet in order to supervise the regulators to secure the safety of the public. Its responsibilities should be:
1. To conduct regular investigations and explanatory hearings of regulatory agencies, academics and stakeholders.
2. To establish an advisory body, including independent experts with a global perspective, to keep the committee’s knowledge updated in its dealings with regulators.
3. To continue investigations on other relevant issues.
4. To make regular reports on their activities and the implementation of their recommendations.
Reform the crisis management system
A fundamental reexamination of the crisis management system must be made. The boundaries dividing the responsibilities of the national and local governments and the operators must be made clear. This includes:
1. A reexamination of the crisis management structure of the government. A structure must be established with a consolidated chain of command and the power to deal with emergency situations.
2. National and local governments must bear responsibility for the response to off-site radiation release. They must act with public health and safety as the priority.
3. The operator must assume responsibility for on-site accident response, including the halting of operations, and reactor cooling and containment.
Government responsibility for public health and welfare
Regarding the responsibility to protect public health, the following must be implemented as soon as possible:
1. A system must be established to deal with long-term public health effects, including stress-related illness. Medical diagnosis and treatment should be covered by state funding. Information should be disclosed with public health and safety as the priority, instead of government convenience. This information must be comprehensive, for use by individual residents to make informed decisions.
2. Continued monitoring of hotspots and the spread of radioactive contamination must be undertaken to protect communities and the public. Measures to prevent any potential spread should also be implemented.
3. The government must establish a detailed and transparent program of decontamination and relocation, as well as provide information so that all residents will be knowledgeable about their compensation options.
Monitoring the operators
TEPCO must undergo fundamental corporate changes, including strengthening its governance, working towards building an organizational culture which prioritizes safety, changing its stance on information disclosure, and establishing a system which prioritizes the site. In order to prevent the Federation of Electric Power Companies (FEPC) from being used as a route for negotiating with regulatory agencies, new relationships among the electric power companies must also be established—built on safety issues, mutual supervision and transparency.
1. The government must set rules and disclose information regarding its relationship with the operators.
2. Operators must construct a cross-monitoring system to maintain safety standards at the highest global levels.
3. TEPCO must undergo dramatic corporate reform, including governance and risk management and information disclosure—with safety as the sole priority.
4. All operators must accept an agency appointed by the National Diet as a monitoring authority of all aspects of their operations, including risk management, governance and safety standards, with rights to on-site investigations.
Criteria for the new regulatory body
The new regulatory organization must adhere to the following conditions. It must be:
1. Independent: The chain of command, responsible authority and work processes must be: (i) Independent from organizations promoted by the government (ii) Independent from the operators (iii) Independent from politics.
2. Transparent: (i) The decision-making process should exclude the involvement of electric power operator stakeholders. (ii) Disclosure of the decision-making process to the National Diet is a must. (iii) The committee must keep minutes of all other negotiations and meetings with promotional organizations, operators and other political organizations and disclose them to the public. (iv) The National Diet shall make the final selection of the commissioners after receiving third-party advice.
3. Professional: (i) The personnel must meet global standards. Exchange programs with overseas regulatory bodies must be promoted, and interaction and exchange of human resources must be increased. (ii) An advisory organization including knowledgeable personnel must be established. (iii) The no-return rule should be applied without exception.
4. Consolidated: The functions of the organizations, especially emergency communications, decision-making and control, should be consolidated.
5. Proactive: The organizations should keep up with the latest knowledge and technology, and undergo continuous reform activities under the supervision of the Diet.
Reforming laws related to nuclear energy
Laws concerning nuclear issues must be thoroughly reformed.
1. Existing laws should be consolidated and rewritten in order to meet global standards of safety, public health and welfare.
2. The roles for operators and all government agencies involved in emergency response activities must be clearly defined.
3. Regular monitoring and updates must be implemented, in order to maintain the highest standards and the highest technological levels of the international nuclear community.
4. New rules must be created that oversee the backfit operations of old reactors, and set criteria to determine whether reactors should be decommissioned.
Develop a system of independent investigation commissions
A system for appointing independent investigation committees, including experts largely from the private sector, must be developed to deal with unresolved issues, including, but not limited to, the decommissioning process of reactors, dealing with spent fuel issues, limiting accident effects and decontamination.
Many of the report recommendations are useful but they fall short of addressing the root cause. Here’s what I would like to see:
1. Scramming the reactor has to 100% stop the reaction and the passive cooling has to be sufficient to ensure the system can cool from full operating load without external power, operational oversight, or other input beyond dropping the rods.
2. All nuclear material storage areas must be seismically stable, above flood water height, maintain integrity through natural disasters, and must be able to stay stable and safe without active input or supervision for long periods of time.
3. The monitoring system needs to be independent, detailed, and able to operate correctly with large parts of the system destroyed or inoperative.
4. Test all failure modes frequently. Assume that all systems that haven’t been tested will not work. Surprisingly frequently, they don’t.
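The fourth recommendation lends itself to automation. Below is a minimal, purely illustrative sketch of such a drill; the `SystemUnderTest` model, its cooling behavior, and all names here are invented for illustration and stand in for real facility controls and telemetry:

```python
# Hypothetical failure-mode drill: cut all power at full load and verify
# that temperatures fall rather than rise. The toy model below is an
# assumption standing in for real instrumentation.

class SystemUnderTest:
    """Toy model of a passively cooled system: once the heat source is
    shut down, temperature decays toward ambient with no external input."""

    def __init__(self, temp_c: float, ambient_c: float = 25.0):
        self.temp_c = temp_c
        self.ambient_c = ambient_c
        self.powered = True

    def cut_all_power(self) -> None:
        self.powered = False

    def step(self) -> None:
        # A stable design sheds heat passively after a total power loss.
        if not self.powered:
            self.temp_c -= 0.1 * (self.temp_c - self.ambient_c)

def run_power_loss_drill(system: SystemUnderTest, steps: int = 10) -> bool:
    """Cut all power at full operational load; pass only if temperature
    falls over the observation window."""
    start = system.temp_c
    system.cut_all_power()
    for _ in range(steps):
        system.step()
    return system.temp_c < start

drill_passed = run_power_loss_drill(SystemUnderTest(temp_c=300.0))
```

The point of the sketch is the shape of the test, not the physics: the drill asserts on observed behavior after the failure is injected, which is the only evidence worth trusting.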
The Official Report of the Fukushima Nuclear Accident Independent Investigation Commission Executive Summary can be found at: http://naiic.go.jp/wp-content/uploads/2012/07/NAIIC_report_lo_res2.pdf.
Since our focus here is primarily on building reliable hardware and software systems, this best practices document may be of interest: Designing & Deploying Internet-Scale Services: http://mvdirona.com/jrh/talksAndPapers/JamesRH_Lisa.pdf
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com
Cooling is the largest single non-IT (overhead) load in a modern datacenter. There are many innovative solutions addressing the power losses in cooling systems. Many of these mechanical system innovations work well and others have great potential, but none is as powerful as simply increasing server inlet temperatures. Obviously less cooling is cheaper than more. And, the higher the target inlet temperature, the higher the percentage of time a facility can spend running on outside air (air-side economization) without process-based cooling.
The downsides of higher temperatures are: 1) higher semiconductor leakage losses, 2) higher server fan speeds, which increase the losses to air moving, and 3) higher server mortality rates. I've measured the first and, although these losses are inarguably present, they have a very small impact at even quite high server inlet temperatures. The negative impact of fan speed increases is real but can be mitigated via different server target temperatures and more efficient server cooling designs. If the servers are designed for higher inlet temperatures, the fans will be configured for these higher expected temperatures and won't run faster. This is simply a server design decision, and good mechanical designs work well at higher server temperatures without increased power consumption. It's the third issue that remains the scary one: increased server mortality rates.
The net of these factors is that fear of higher server mortality rates is the prime factor slowing an even more rapid increase in datacenter temperatures. An often-quoted study reports that the failure rate of electronics doubles with every 10C increase in temperature (MIL-HDBK-217F). This data point is incredibly widely used by the military, the NASA space flight program, and in commercial electronic equipment design. I'm sure the work is excellent, but it is a very old study, it wasn't focused on a large datacenter environment, and the rule of thumb that has emerged from it is an exponential model of failure rate as a function of temperature.
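The doubling-per-10C rule of thumb can be written as a relative failure-rate multiplier of 2^(ΔT/10). A quick sketch contrasting that exponential rule with a simple linear alternative (the linear slope here is an arbitrary placeholder for illustration, not a measured value):

```python
def mil_hdbk_relative_failure_rate(delta_t_c: float) -> float:
    """MIL-HDBK-217F rule of thumb: failure rate doubles every 10C rise."""
    return 2.0 ** (delta_t_c / 10.0)

def linear_relative_failure_rate(delta_t_c: float, slope: float = 0.1) -> float:
    # Illustrative linear alternative; the slope is an assumed value,
    # not one taken from any study.
    return 1.0 + slope * delta_t_c

# Raising inlet temperature by 20C: the exponential rule predicts 4x the
# failures, while a linear model with this slope predicts only 3x.
exp_factor = mil_hdbk_relative_failure_rate(20.0)
lin_factor = linear_relative_failure_rate(20.0)
```

The gap between the two curves widens quickly with temperature, which is exactly why the shape of the failure model (exponential vs. linear) matters so much to the economics of running hotter.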
A recent paper does an excellent job of methodically digging through the possible issues of high datacenter temperature and investigating each concern. I like Temperature Management in Data Centers: Why Some (Might) Like it Hot for two reasons: 1) it unemotionally works through the key issues and concerns, and 2) it draws from a sample of 7 production data centers at Google, so the results are credible and from a substantial sample.
From the introduction:
Interestingly, one key aspect in the thermal management of a data center is still not very well understood: controlling the setpoint temperature at which to run a data center’s cooling system. Data centers typically operate in a temperature range between 20C and 22C, some are as cold as 13C degrees [8, 29]. Due to lack of scientific data, these values are often chosen based on equipment manufacturers’ (conservative) suggestions. Some estimate that increasing the setpoint temperature by just one degree can reduce energy consumption by 2 to 5 percent [8, 9]. Microsoft reports that raising the temperature by two to four degrees in one of its Silicon Valley data centers saved $250,000 in annual energy costs. Google and Facebook have also been considering increasing the temperature in their data centers.
The authors go on to observe that “the details of how increased data center temperatures will affect hardware reliability are not well understood and existing evidence is contradictory.” The remainder of the paper presents the data as measured in the 7 production datacenters under study and concludes each section with an observation. I encourage you to read the paper and I’ll cover just the observations here:
Observation 1: For the temperature range that our data covers with statistical significance (< 50C), the prevalence of latent sector errors increases much more slowly with temperature, than reliability models suggest. Half of our model/data center pairs show no evidence of an increase, while for the others the increase is linear rather than exponential.
Observation 2: The variability in temperature tends to have a more pronounced and consistent effect on Latent Sector Error rates than mere average temperature.
Observation 3: Higher temperatures do not increase the expected number of Latent Sector Errors (LSEs) once a drive develops LSEs, possibly indicating that the mechanisms that cause LSEs are the same under high or low temperatures.
Observation 4: Within a range of 0-36 months, older drives are not more likely to develop Latent Sector Errors under temperature than younger drives.
Observation 5: High utilization does not increase Latent Sector Error rates under temperatures.
Observation 6: For temperatures below 50C, disk failure rates grow more slowly with temperature than common models predict. The increase tends to be linear rather than exponential, and the expected increase in failure rates for each degree increase in temperature is small compared to the magnitude of existing failure rates.
Observation 7: Neither utilization nor the age of a drive significantly affect drive failure rates as a function of temperature.
Observation 8: We do not observe evidence for increasing rates of uncorrectable DRAM errors, DRAM DIMM replacements or node outages caused by DRAM problems as a function of temperature (within the range of temperature our data comprises).
Observation 9: We observe no evidence that hotter nodes have a higher rate of node outages, node downtime or hardware replacements than colder nodes.
Observation 10: We find that high variability in temperature seems to have a stronger effect on node reliability than average temperature.
Observation 11: As ambient temperature increases, the resulting increase in power is significant and can be mostly attributed to fan power. In comparison, leakage power is negligible.
Observation 12: Smart control of server fan speeds is imperative to run data centers hotter. A significant fraction of the observed increase in power dissipation in our experiments could likely be avoided by more sophisticated algorithms controlling the fan speeds.
Observation 13: The degree of temperature variation across the nodes in a data center is surprisingly similar for all data centers in our study. The hottest 5% nodes tend to be more than 5C hotter than the typical node, while the hottest 1% nodes tend to be more than 8–10C hotter.
The paper under discussion: http://www.cs.toronto.edu/~nosayba/temperature_cam.pdf.
Other notes on increased data center temperatures:
· Exploring the Limits of Datacenter Temperature
· Chillerless Data Center at 95F
· Computer Room Evaporative Cooling
· Next Point of Server Differentiation: Efficiency at Very High Temperature
· Open Compute Mechanical System Design
· Example of Efficient Mechanical Design
· Innovative Datacenter Design: Ishikari Datacenter
I love solar power but, reflecting carefully on a couple of high profile datacenter deployments of solar power, I'm developing serious reservations that this is the path to reducing datacenter environmental impact. I just can't make the math work, and I find myself wondering if these large solar farms are really somewhere between a bad idea and pure marketing, where the environmental impact is purely optical.
The first of my two examples is the high profile installation of a large solar array at the Facebook Prineville, Oregon facility. The installation of 100 kilowatts of solar power was the culmination of the Unfriend Coal campaign run by Greenpeace. Many in the industry believe the campaign worked. In the purest sense, I suppose it did. But let's look at the data more closely and make sure this really is environmental progress. What was installed in Prineville was a 100 kilowatt solar array at a more than 25 megawatt facility (Facebook Installs Solar Panels at New Data Center). Even though this is actually a fairly large solar array, it's only providing 0.4% of the overall facility power.
Unfortunately, the actual numbers are further negatively impacted by weather and high latitude. Solar arrays produce far less than their rated capacity due to night duration, cloud cover, and other weather effects. I really don't want to screw up my Seattle recruiting pitch too much, but let's just say that occasionally there are clouds in the Pacific Northwest :-). Clearly there are fewer clouds at 2,868’ elevation in the Oregon desert but, even at that altitude, the sun spends the bulk of the time poorly positioned for power generation.
Using this solar panel output estimator, we can see that panels at this location and altitude yield an effective output of 13.75% of rated capacity. That means that, on average, this array will only put out 13.75 kilowatts. That would have this array contributing 0.055% of the facility power or, worded differently, it might run the lights in the datacenter but has almost no measurable impact on the overall energy consumed. Although this is pointed to as an environmentally conscious decision, it has close to no influence on the overall environmental impact of this facility. As a point of comparison, this entire solar farm produces approximately as much output as one high-density rack of servers consumes. Just one rack of servers is not success; it doesn't measurably change the coal consumption, and it almost certainly isn't good price/performance.
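The Prineville arithmetic is easy to reproduce (the 13.75% capacity factor is the output of the solar estimator referenced above; the 25 MW facility figure is the reported one):

```python
# Back-of-envelope check of the Prineville solar numbers.
array_rated_kw = 100.0       # installed array capacity
facility_kw = 25_000.0       # "more than 25 megawatt facility"
capacity_factor = 0.1375     # 13.75% effective output at this site

avg_output_kw = array_rated_kw * capacity_factor   # ~13.75 kW on average
share_of_facility = avg_output_kw / facility_kw    # ~0.00055, i.e. 0.055%
```

With the facility likely drawing more than 25 MW, 0.055% is an upper bound on the array's contribution.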
Having said that the Facebook solar array is very close to pure marketing expense, I hasten to add that Facebook is one of the most power-efficient and environmentally-focused large datacenter operators. They are in fact very good environmental stewards, but the solar array isn't really a material contributor to what they are achieving.
Apple iDataCenter, Maiden, North Carolina
The second example I want to look at is Apple's facility at Maiden, North Carolina, often referred to as iDataCenter. In the Facebook example discussed above, the solar array was so small as to have nearly no impact on the composition or amount of power consumed by the facility. In this example, however, the solar farm deployed at the Apple Maiden facility is absolutely massive. In fact, this photovoltaic deployment is reported to be the largest commercial deployment in the US at 20 megawatts. Given the scale of this deployment, it has a far better chance of working economically.
The Apple Maiden facility is reported to have cost $1B for the 500,000 sq ft datacenter. Apple wisely chose not to publicly announce their power consumption numbers, but estimates have been as high as 100 megawatts. If you conservatively assume that only 60% of the square footage is raised floor and they are averaging a fairly low 200W/sq ft, the critical load would still be 60MW (the same as the 700,000 sq ft Microsoft Chicago datacenter). At a moderate Power Usage Effectiveness (PUE) of 1.3, Apple Maiden would be at 78MW of total power. Even with these fairly conservative numbers for a modern datacenter build, that is huge, and the actual number is likely somewhat higher.
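Worked through explicitly, the estimate looks like this (all inputs are the conservative assumptions stated above, not disclosed Apple figures):

```python
# Estimating Apple Maiden total power from the post's assumptions.
total_sq_ft = 500_000          # reported facility size
raised_floor_fraction = 0.60   # conservative assumption
watts_per_sq_ft = 200          # fairly low power density assumption
pue = 1.3                      # moderate Power Usage Effectiveness

critical_load_mw = total_sq_ft * raised_floor_fraction * watts_per_sq_ft / 1e6
total_power_mw = critical_load_mw * pue   # 60 MW critical -> ~78 MW total
```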
Apple elected to put in a 20MW solar array at this facility. Again, using the location and elevation data from Wikipedia and the solar array output model referenced above, we see that the Apple location is more solar friendly than Oregon. Using this model, the 20MW photovoltaic deployment has an average output of 15.8% of rated capacity, which yields 3.2MW.
The solar array requires 171 acres of land, which is 7.4 million sq ft. What if we were to build a solar array large enough to power the entire facility using these solar and land consumption numbers? If the solar farm were to supply all the power of the facility, it would need to be 24.4 times larger: a 488 megawatt capacity array requiring 4,172 acres, which is 181 million sq ft. That means a 500,000 sq ft facility would require 181 million sq ft of power generation or, converted to a ratio, each datacenter sq ft would require 362 sq ft of land.
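Scaling the array to cover the whole estimated load works out as follows (the 78 MW facility load, 20 MW rated array, 15.8% capacity factor, and 171 acres are the estimates discussed above; none are disclosed Apple figures, and small rounding differences from the figures in the text are expected):

```python
# Sizing a solar farm to carry the full estimated Apple Maiden load.
facility_mw = 78.0          # estimated total facility power
array_rated_mw = 20.0       # installed array capacity
capacity_factor = 0.158     # 15.8% effective output at this site
array_acres = 171.0         # land the 20 MW array occupies
SQ_FT_PER_ACRE = 43_560

avg_output_mw = round(array_rated_mw * capacity_factor, 1)  # ~3.2 MW
scale = facility_mw / avg_output_mw                         # ~24.4x larger
needed_rated_mw = array_rated_mw * scale                    # ~488 MW
needed_acres = array_acres * scale                          # ~4,170 acres
needed_sq_ft = needed_acres * SQ_FT_PER_ACRE                # ~182M sq ft
land_per_dc_sq_ft = needed_sq_ft / 500_000                  # ~363 sq ft each
```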
Do we really want to give up that much space at each data center? Most data centers are in highly populated areas, where a ratio of 1 sq ft of datacenter floor space requiring 362 sq ft of power generation space is ridiculous on its own and made close to impossible by the power generation space needing to be un-shadowed. There isn’t enough roof top space across all of NY to take this approach. It is simply not possible in that venue.
Let’s focus instead on large datacenters in rural areas where the space can be found. Apple is reported to have cleared trees off of 171 acres of land in order to provide photovoltaic power for 4% of their overall estimated datacenter consumption. Is that gain worth clearing and consuming 171 acres? In Apple Planning Solar Array Near iDataCenter, Rich Miller of Data Center Knowledge quotes local North Carolina media reporting that “local residents are complaining about smoke in the area from fires to burn off cleared trees and debris on the Apple property.”
I’m personally not crazy about clearing 171 acres in order to supply only 4% of the power at this facility. There are many ways to radically reduce aggregate datacenter environmental impact without as much land consumption. Personally, I look first to increasing the efficiency of power distribution, cooling, storage, networking, and servers, and to increasing overall utilization, as the best routes to lowering industry environmental impact.
Looking more deeply at the solar array at Apple Maiden, the panels are built by SunPower. SunPower is reportedly carrying $820m in debt and has received a $1.2B federal government loan guarantee. The panels are built on taxpayer guarantees and installed using taxpayer-funded tax incentives. It might possibly be a win for the overall economy but, as I work through the numbers, it seems less clear. And, after the spectacular failure of solar cell producer Solyndra, which went bankrupt with a $535 million federal loan guarantee, it's obvious there are large costs being carried by taxpayers in these deployments. Generally, as much as I like datacenters, I'm not convinced that taxpayers should be paying to power them.
As I work through the numbers from two of the most widely reported datacenter solar array deployments, they just don't balance out positively without tax incentives. I'm not convinced that having the tax base fund datacenter deployments is a scalable solution. And, even if it could be shown that this will eventually become tax neutral, I'm not convinced we want to see datacenter deployments consuming hundreds of acres of land for power generation. And, when trees are taken down to allow the solar deployment, it's even harder to feel good about it. From what I have seen so far, this is not heading in the right direction. If we had $x dollars to invest in lowering datacenter environmental impact and the marketing department was not involved in the decision, I'm not convinced the right next step would be solar.
In the past, I’ve written about the cost of latency and how reducing latency can drive more customer engagement and increase revenue. Two examples of this are: 1) The Cost of Latency and 2) Economic Incentives Applied to Web Latency. Nowhere is latency reduction more valuable than in high frequency trading applications. Because these trades can be incredibly valuable, the cost of the infrastructure on which they run is more or less an afterthought. Good people at the major trading firms work hard to minimize costs but, if the cost of infrastructure were to double tomorrow, high frequency trading would continue unabated.
High frequency trading is very sensitive to latency and nearly insensitive to costs. That makes it an interesting application area and it's one I watch reasonably closely. It's a great domain in which to test ideas that might not yet make economic sense more broadly. Some of these ideas will never see more general use, but many get proved out in high frequency trading and can be applied to more cost-sensitive application areas once the techniques have been refined or there is more volume.
One suggestion that comes up in jest on nearly every team upon which I have worked is the need to move bits faster than the speed of light. Faster than the speed of light communications would help cloud hosted applications and cloud computing in general but physics blocks progress in this area resolutely.
What if it really were possible to transmit data roughly 50% faster than the speed of light in an optical fiber? It turns out this is actually possible and may even make economic sense in high frequency trading. Before you cancel your RSS subscription to this blog, let's look more deeply at what is being sped up, by how much, and why it really is possible to substantially beat today's optical communication links.
When you get into the details, every “law” is actually more complex than the simple statement that gets repeated over and over. This is one of the reasons I tell anyone who joins Amazon that the only engineering law around here is there are no unchallengeable laws. It’s all about understanding the details and applying good engineering judgment.
For example, the speed of light is 186,000 miles per second, right? Absolutely. But the fine print is that this is the speed of light in a vacuum. The actual speed of light depends upon the medium in which the light is propagating. In an optical fiber, light travels roughly 33% slower than in a vacuum. More specifically, the index of refraction of most common optical fibers is 1.52, which means the speed of light in a fiber is actually just over 122,000 miles/second.
The index of refraction of air is very close to 1, which is to say that the speed of light in air is just about the same as in a vacuum. This means that free space optics -- the use of light for data communications without a fiber wave guide -- is roughly 50% faster than light in a fiber. Unfortunately, the speedup only matters over long distances while the technology is only practical over short distances. There have been test deployments over metro-area distances -- we actually have one where I work -- but, generally, it's a niche technology that hasn't proven practical and widely applicable. I'm not particularly excited about this approach.
Continuing this search for low refraction-index data communications, we find that microwaves transmitted through air again have a refraction index near 1, which is to say that microwave is around 50% faster than light in a fiber. As before, this is only of interest over longer distances but, unlike free space optics, microwave is very practical over longer distances. On longer runs, the signal needs to be received and retransmitted periodically, but this is practical, cost effective, and fairly heavily used in the telecom industry. What hasn't been exploited in the past is that microwave is actually faster than the speed of light in a fiber.
The 50% speed-up of microwave over fiber optics seems exploitable, and an enterprising set of entrepreneurs is doing exactly that. The plan was outlined in yesterday's Gigaom article titled Wall Street gains edge by trading over microwave.
In this approach, McKay Brothers is planning to link New York City with Chicago using microwave transmission. This is a 790 mile distance, but fiber seldom takes the most direct route. Let's assume a fiber path distance of 850 miles, which yields 6.9 msec of propagation delay if there are no routers or other networking gear in the way. Given that both optical and microwave require repeaters, I'm not including their impact in this analysis. Covering the 790 miles using microwave will require 4.2 msec. Using these data, the microwave link is a full 2.7 msec faster. That's a very substantial time difference and, in the world of high frequency trading, 2.7 msec is very monetizable. In fact, I've seen HFT customers extremely excited about very small fractions of a msec. Getting 2.7 msec back is potentially a very big deal.
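The arithmetic behind these latency numbers is simple to check (the 1.52 fiber index and the 850 mile / 790 mile path lengths are the figures discussed above; the air index of ~1.0003 is an approximation):

```python
# One-way propagation delay, fiber vs. microwave, Chicago to New York.
C_VACUUM_MI_S = 186_000.0    # speed of light in vacuum, miles/second
FIBER_INDEX = 1.52           # index of refraction of common optical fiber
AIR_INDEX = 1.0003           # microwave in air travels at nearly vacuum speed

fiber_speed = C_VACUUM_MI_S / FIBER_INDEX       # ~122,000 miles/second
microwave_speed = C_VACUUM_MI_S / AIR_INDEX

fiber_delay_ms = 850 / fiber_speed * 1000           # ~6.9 ms fiber path
microwave_delay_ms = 790 / microwave_speed * 1000   # ~4.2 ms line-of-sight
advantage_ms = fiber_delay_ms - microwave_delay_ms  # ~2.7 ms saved
```

Repeater and router delays are excluded on both sides, matching the assumption in the text.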
From the McKay Brothers web site:
Profitability in High Frequency Trading (“HFT”) is about being the first to respond to market events. Events which occur in Chicago markets impact New York markets. The first to learn about this information in New York can take appropriate positions and benefit. There is nothing new in this principle. Paul Reuters, founder of the Reuters news agency, used carrier pigeons to fill a gap in the telegraph lines and bring financial news from Berlin to Paris. The groundbreaking idea of the time was to use an old technology – the carrier pigeon – to fill a gap. What Paul Reuters did 160 years ago is being done again.
Today, we are revisiting an old technology, microwave transmission, to connect Chicago and New York at speeds faster than fiber optic transmission will ever be able to deliver.
This technology is emerging just two years after Spread Networks is reported to have spent 300 million dollars developing a low latency fiber optic connection between Chicago and New York. Spread’s fiber connection will soon be much slower than routes available by microwave.
The Gigaom article is at: http://gigaom.com/broadband/wall-street-gains-an-edge-by-trading-over-microwaves. The McKay Brothers web site is at: http://www.mckay-brothers.com/. Thanks to Amazon's Alan Judge for pointing me to this one.
Occasionally I come across a noteworthy datacenter design that is worth covering. Late last year a very interesting Japanese facility was brought to my attention by Mikio Uzawa, an IT consultant who authors the Agile Cat blog. I know Mikio because he occasionally translates Perspectives articles for publication in Japan.
Mikio pointed me to the Ishikari Datacenter in Ishikari City, Hokkaido Japan. Phase I of this facility was just completed in November 2011. This facility is interesting for a variety of reasons but the design features I found most interesting are: 1) High voltage direct current power distribution, 2) whole building ductless cooling, and 3) aggressive free air cooling.
High Voltage Direct Current Power Distribution
I first came across the use of direct current when Annabel Pratt took me through the joint work Intel was doing with Lawrence Berkeley National Lab on datacenter HVDC distribution (Evaluation of Direct Current Distribution in Data Centers to Improve Energy Efficiency). In this approach they distribute 400V direct current rather than the more conventional 208V to 240V alternating current used in most facilities today.
High voltage direct current work in datacenters has been around for about a decade and is in extensive test at many facilities world-wide. Many companies are 100% focused on HVDC design consulting, with Validus being one of the better known.
The savings potential of HVDC is often shown to be very exciting, with numbers beyond 30% frequently quoted. But the marketing material I’ve gone through in detail compares excellent HVDC designs with very poor AC designs. Predictably, the savings are around 30%. Unfortunately, the difference between good AC and bad AC designs is also around 30% :-).
When I look closely at HVDC distribution, I see slight efficiency improvements of around 3 to 5%, somewhat higher equipment costs since the gear is less broadly used, reduced equipment availability with longer delivery times, and somewhat more complex jurisdictional issues, with permitting and other approvals taking longer in some regions. Nonetheless, the picture continues to improve, the industry as a whole continues to learn, and I think there is a good chance that high voltage DC distribution will end up becoming a more common choice in modern datacenters.
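The reason the quoted savings vary so wildly is that stage losses multiply through the power distribution chain. The sketch below shows the effect; every stage efficiency is an assumed round number for illustration, not measured data from any particular facility.

```python
# Illustrative end-to-end power distribution efficiency.
# Each per-stage efficiency is an assumption chosen to show the shape of the
# argument: good AC vs. good HVDC differ by a few percent, while good vs.
# poor AC designs differ far more.

from math import prod

poor_ac   = [0.88, 0.95, 0.85]  # aging double-conversion UPS, PDU transformer, low-end PSU
good_ac   = [0.97, 0.99, 0.94]  # modern high-efficiency UPS, transformer, PSU
good_hvdc = [0.97, 0.96]        # AC-to-400V rectifier, DC-input PSU (one conversion removed)

# Chain efficiency is the product of the stage efficiencies.
for name, stages in [("poor AC", poor_ac), ("good AC", good_ac), ("good HVDC", good_hvdc)]:
    print(f"{name:10s}: {prod(stages):.1%} end-to-end")
```

With these assumed numbers, good HVDC beats good AC by only about three points, while good AC beats poor AC by roughly twenty. That is the gap marketing comparisons tend to exploit.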
The Ishikari facility is a high voltage DC distribution design. I’m looking forward to learning more about this aspect of the facility and watching how the system performs.
Whole Building Ductless Cooling
Air handling ducts cost money and restrict flow, so why not recognize that the entire purpose of a datacenter shell is to keep the equipment dry and secure and to transport heat? Instead of installing extensive duct work, just treat the entire building as a very large air duct.
Perhaps the nicest mechanical design I’ve come across based upon ductless cooling is the Facebook Prineville facility. In this design, they use the entire second floor of the building for air handling and the lower floor for the server rooms.
The Ishikari design shares many design aspects with the Intel Jones Farms facility where the IT equipment is on the second floor and the electrical equipment is on the first.
Aggressive Free-Air Cooling
Looking at the air flow diagram above, you can see that the Ishikari Datacenter is making good use of the datacenter friendly climate of Japan and aggressively using free-air cooling. Free-air cooling, often called air side economization, is one of the most effective ways of driving down datacenter costs and substantially increasing overall efficiency. It’s good to see this design point spreading rapidly.
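The efficiency gain from free-air cooling shows up directly in PUE, since mechanical chillers are one of the largest non-IT loads. The back-of-envelope model below illustrates the effect; all overhead figures and the free-air hours fraction are assumed values for illustration, not Ishikari data.

```python
# Back-of-envelope PUE impact of air-side economization.
# All overhead ratios below are assumptions chosen for illustration.

CHILLER_OVERHEAD = 0.40     # cooling kW per IT kW on mechanical cooling
ECONOMIZER_OVERHEAD = 0.05  # fan-only overhead when outside air carries the heat
OTHER_OVERHEAD = 0.10       # power distribution losses, lighting, etc.

def pue(free_air_fraction: float) -> float:
    """PUE given the fraction of the year spent on free-air cooling."""
    cooling = (free_air_fraction * ECONOMIZER_OVERHEAD
               + (1 - free_air_fraction) * CHILLER_OVERHEAD)
    return 1 + cooling + OTHER_OVERHEAD

print(f"no economization:   PUE {pue(0.0):.2f}")
print(f"90% free-air hours: PUE {pue(0.9):.2f}")
```

A cool climate like Hokkaido's pushes the free-air fraction toward the top of the range, which is exactly why site selection and economization go hand in hand.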
More information is available at: http://ishikari.sakura.ad.jp/index_eng.html
Some datacenter designs I’ve covered in the past:
· Facebook Prineville Mechanical Design
· Facebook Prineville UPS & Power Supply
· Example of Efficient Mechanical Design
· 46MW with Water Cooling at a PUE of 1.10
· Yahoo! Compute Coop Design
· Microsoft Gen 4 Modular Data Centers