A decade later, Windows XP still in business
HP puts down Oracle, which puts up Solaris

You've got the power, but you can use less

Last week we tried to discover the power needs of an A-Class versus a comparable Series 9x9 HP 3000. We were prompted by a roadshow talk that HP delivered today here in Austin. Part of the content touted HP's cooler hardware designs, and we don't mean "more hip" when we say cooler.

We mean power and cooling energy efficiency, a measurement that ranges from the wattage a CPU draws to the needs of a blade server or blade storage, up to the electricity required to keep a full server enclosure running. Austin was a good place to have the talk, since we've posted 82 days of 100 degrees or more this summer, blasting our own record from 1925.

HP's solutions "span across IT and facilities to optimize and adapt energy use, reclaim capacity, and to reduce energy costs." Sullivan's Steakhouse lunchtime diners heard about the latest advances in power protection, distribution, and cooling. HP showed the numbers on calculating operational costs "to help extend the life of the datacenter."

Datacenters are migrating in the 3000 market. We're polling the community about their career and company changes over the 10 years since HP pulled out of the market. Some switched off 3000s because of high power needs. A new case history by MB Foster about decommissioning a Series 969 3000 at San Mateo Health Care cites cutting power as a spark to get off a 3000.

Comparing power needs requires ready access to hardware manuals, which are genuinely useful in PDF format. Before it pulled away from your market, HP crowed about slashing the power needs of a 3000 with the PCI-based systems introduced in 2001. The power efficiency of Integrity blade systems running in, for example, a C3000 enclosure (the smallest) is even more pronounced compared with those 9x9 servers.

Brian Edminster has been our go-to fellow for a host of news stories recently, including his own Applied Technologies open source repository for software that can help HP 3000 owners. He says that power and heat specs for 9x9 vs. A- or N-Class 3000s can be found in an HP datasheet "that spells out 'basic' specs for A-Class systems and the same for N-Class systems. A site prep guide for 9x9 servers is online at ManualShark, at http://www.manualshark.org/manualshark/files/28/pdf_26712.pdf"

Edminster warned us that "the power use ratios are very rough-cut, based on 'system chassis' only (simply CPU, memory, and internal disk/tapes). They are basically the smallest system that can boot MPE/iX."

"The data I've provided the weblinks to is only a rough guideline, but enough to show what HP was talking about regarding power consumption: significantly lower with PCI-based systems. The A- and N-Class guides are just for the system proper (no external expansions). You can dig the same data out of that 9x9 manual, but it's harder to get at."

"In short, HP was right, at least about power requirements. The ratio of power usage (based on the A-Class being very roughly equal to 1) is A=1, N=3, 989=6. You really don't even want to know about the 99x systems. Another point to consider is that peripherals (especially external disk arrays) can be real power hogs too -- and the newer arrays (with fewer but larger drive mechanisms) had similar power usage ratios."

A fully built-out multi-rack Series 989 system would likely draw more than 10 times the power of a roughly equivalent N-Class, which would take only about half a rack of space. So if space, power, or HVAC capacity are at issue in your datacenter, newer is better.
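To put those ratios in concrete terms, here's a minimal back-of-the-envelope sketch. The baseline wattage and electricity rate below are hypothetical placeholders for illustration only -- the A=1, N=3, 989=6 ratios are the only figures taken from Edminster's estimate.

```python
# Rough annual energy-cost comparison using Edminster's power ratios.
# BASELINE_WATTS and RATE_PER_KWH are assumed placeholder values, not
# measured figures; only the ratios come from the article.
RATIOS = {"A-Class": 1, "N-Class": 3, "Series 989": 6}
BASELINE_WATTS = 500       # assumed draw of an A-Class system chassis
RATE_PER_KWH = 0.12        # assumed electricity cost, dollars per kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(model):
    """Estimated yearly electricity cost to run one system chassis."""
    watts = RATIOS[model] * BASELINE_WATTS
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_PER_KWH

for model in RATIOS:
    print(f"{model}: ${annual_cost(model):,.2f}/year")
```

Whatever baseline you plug in, the spread stays proportional: a Series 989 chassis costs six times as much to power as an A-Class, before counting peripherals.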

Edminster added that he's curious how the new Stromasys HPA/3000 CHARON systems will fit in, "given that they'll be based on pretty much cutting-edge hardware." The emulator was running on a gaming PC with SSD storage at the HP3000 Reunion in September. You couldn't even hear a fan.