Making Your Legacy Foundation Open

By Scott Hirsh

Having been immersed in networked storage, I’ve had a lot of time to think about infrastructure architecture. The first lesson is that storage (along with networking) is the foundation of an IT architecture. So it stands to reason that an infrastructure built to last will begin with external storage, ideally from a company dedicated to storage, with as much heterogeneous operating system support as possible.

What you get from an external storage platform that supports multiple operating systems is the ability to change vendors, hosts, and operating systems with a minimum of fuss. Yes, a migration of any kind is not without its pain, but it’s a lot more painful when all of your hardware and software are tied to one vendor. That’s a lesson everyone reading this should have learned by now.

Furthermore, these independent storage players have extensive expertise in supporting multiple platforms, including migrating customers from one to another. And frankly, unless you’re contemplating a non-mainstream operating system, networked storage is an excellent investment, because storage vendors make it a practice to provide ongoing support for any operating system with critical mass.

For example, any HP 3000 users running on Symmetrix will have no problem using that same storage on HP-UX, Solaris, Linux, Wintel, and many others. If you’re running on internal disk, you’re stuck with HP-UX, best case — not that there’s anything wrong with that.

The Best Offense Is a Good Defense

Here are some quick guidelines that recap the concept of minimizing transfer costs:

Start with a networked storage platform that supports as many operating systems as possible. This is the foundation layer for your IT infrastructure.

The best Total Cost of Ownership in IT is based on consolidation. However, that doesn’t necessarily imply homogeneity. It’s a matter of degree. It’s a matter of physical location as well.

Software drives hardware. Choose DBMS, applications, and tools based on their support for multiple operating systems and hardware. Be cautious about any decision that locks you into one vendor. For example, SQL Server-based solutions, which run only on Wintel, will have higher transfer costs than Oracle-based solutions.

Keep your vendors honest, but at the same time don’t underestimate the value of a true partnership. One company I consulted for dropped HP after learning that HP felt they “owned” them. Any time one side thinks they have the other over a barrel, there’s bound to be trouble. We’re all in this together.

The Glue That Holds It All Together – You

In the new, defensive, minimum-transfer-cost environment, IT departments take on the role of systems integrator. That’s the catch to designing maximum flexibility into your environment: the IT staff must make everything work together, and be prepared to shift gears at a moment’s notice. To me, that’s the silver lining to this otherwise dreary story of no loyalty and diminishing options. More than ever, it’s the people who make the difference.

Back in the day, hardware was expensive and people were not. Today, if you're still running your own hardware on premises, the hardware is cheap and the people are expensive.

Perhaps the greatest legacy of the HP 3000, and what will ensure our continued leadership in IT, is the hard-earned knowledge of what’s a best practice and what is not.


Worst Practices: Staying on HP's 3000s?

By Scott Hirsh

In the years since HP’s end-of-life decision for the HP 3000, a question has lingered: was it a “worst practice” to be an HP 3000 user over the platform’s final vendor years? What could we have done differently, and how do we avoid painting ourselves into a technology corner in the future?

The key is a concept that vendors understand intimately – transfer cost. Transfer cost is the cost of changing from one vendor or platform to another. For example, switching phone carriers involves low transfer costs. You hardly know it happens. But changing from one operating system to another – say, HP 3000 to Solaris – means high transfer costs. Vendors try to make transfer costs high, without being obvious about it, to discourage customers from switching to competitors.
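To make the concept concrete, here’s a minimal sketch of tallying transfer costs before a platform decision. Every line item and dollar figure below is a hypothetical placeholder for illustration, not a quote from any real migration.

```c
#include <stdio.h>

/* Hypothetical transfer-cost line items for a platform switch.
   All names and figures are illustrative placeholders only. */
struct line_item {
    const char *name;
    double cost;
};

int main(void)
{
    struct line_item items[] = {
        { "Data conversion to target DBMS", 150000.0 },
        { "Application port or rewrite",    400000.0 },
        { "Staff retraining",                75000.0 },
        { "Parallel running and cutover",   100000.0 },
    };
    size_t n = sizeof items / sizeof items[0];
    double total = 0.0;

    /* Sum the exit costs the vendor would rather you never itemize. */
    for (size_t i = 0; i < n; i++) {
        printf("%-35s $%10.0f\n", items[i].name, items[i].cost);
        total += items[i].cost;
    }
    printf("%-35s $%10.0f\n", "Total transfer cost", total);
    return 0;
}
```

The arithmetic is trivial; the discipline is not. Until the line items are written down, a platform’s exit costs stay invisible, which is exactly how the vendor prefers it.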

It is your job to identify potential transfer costs in your technology decisions, and to keep them as low as possible. One way to lessen that risk is to spread it over multiple platforms and vendors. Trust no one, and always have a Plan B.


The days of the “HP shop” are long over. Even if you've decided to standardize on HP, Sun, or IBM, you should do so knowing that one day you may need to switch gears abruptly. In other words, these companies are noted for their legacy hardware, which you must be prepared to dump for another brand with as little pain as possible.

Start with a networked storage platform that supports as many operating systems as possible. This is the foundation layer for your IT infrastructure.

The best Total Cost of Ownership in IT is based on consolidation. However, that doesn’t necessarily imply homogeneity. It’s a matter of degree. It’s a matter of physical location as well.

Software drives hardware. Choose DBMS, applications, and tools based on their support for multiple operating systems and hardware. Be cautious about any decision that locks you into one vendor. For example, SQL Server-based solutions, which run only on Wintel, will have higher transfer costs than Oracle or Sybase solutions.

Keep your vendors honest, but at the same time don’t underestimate the value of a true partnership. One company I consulted for dropped HP after learning that HP felt they “owned” them. Any time one side thinks they have the other over a barrel, there’s bound to be trouble. We’re all in this together.

No Loyalty

We now know more than ever that there is no loyalty on either side of the bargaining table. The IT culture of planned obsolescence has accelerated, and any hope that a technology vendor will watch out for the customer is laughable at best. Ironically, in the computing arena, it’s IBM that seems to protect its customers best. The shop I managed for 12 years was an IBM System/3 to HP 3000 Series III conversion. Who could have imagined?

Mix and Match

For the longest time I enjoyed being an HP 3000 user and an HP customer. Rather than seeing the HP 3000 as a “proprietary” platform I was locked into, I looked at it as an integrated platform where everything was guaranteed (more or less) to work together — unlike the emerging PC world, where getting all the various components to work together was a nightmare.

But around the time that HP decided commercial Unix was the next big thing, the concept of heterogeneous computing was reaching critical mass. As discussed in my last column, the glory days of the HP 3000 were just too easy. IT decision makers seemed to have a complexity death wish, and we live with this legacy. Consequently, the way to lessen risk today is to spread the risk over multiple platforms and vendors. Trust no one, and always have a Plan B.

This means making the assumption that everything that has been happening for the past few years – vendor consolidation, commoditization of hardware and the subordination of operating system to DBMS and application – will continue unabated.

Separation of OS and Hardware

When the concept of hardware independence first manifested itself in the form of POSIX, I was intrigued. Was this too good to be true, the user community finally having the upper hand over its technology destiny? Perhaps not the holy grail of binary compatibility among Unix flavors, but a quick recompile and hello, new hardware. Well, it was too good to be true, and nobody’s talked about POSIX lately, not that I’ve heard anyway.
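As a reminder of what that promise looked like, here’s a minimal sketch confined to interfaces that POSIX actually standardizes; moving it between Unix flavors should, in principle, be nothing more than a recompile.

```c
/* A trivial program written strictly to POSIX interfaces.
   In principle, moving it from HP-UX to Solaris to Linux is a
   recompile, not a rewrite -- no vendor extensions involved. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char host[256];
    long page = sysconf(_SC_PAGESIZE);   /* memory page size */
    long fds  = sysconf(_SC_OPEN_MAX);   /* per-process open-file limit */

    if (gethostname(host, sizeof host) != 0)
        return 1;

    printf("host %s: page size %ld bytes, open-file limit %ld\n",
           host, page, fds);
    return 0;
}
```

The moment a program reaches past the standard for a vendor extension, of course, the recompile promise is off.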

Likewise for Java. Write once, run everywhere — slowly. Yes, there are lots of handy applets and specialized tools that are Java-based, but many of these Java applications use “extensions,” the scourge of openness.

Two main operating systems facilitate hardware independence: Linux and Windows. Each has its issues from the standpoint of transfer costs. Linux, of course, comes in several flavors, all based on the same kernel but tweaked just enough to derail binary compatibility. (Can’t we all just get along?) And Windows is from Microsoft, which knows something about locking people in and then shaking them down. But while these two options are not without their problems, they represent at least the short-term future of computing.

Of the two hardware-independent operating system solutions, Linux seems to me the better story. Clearly the flavor is a major decision, with Red Hat having the most support from major hardware vendors. But I have seen other distributions – notably SUSE – adopted in large organizations, so don’t assume there is only one choice. The idea is not to turn this into a “Linux everywhere” discussion, but to illustrate the concept of Linux as a means of avoiding being painted into a corner.

But will everything run on Linux? No. You almost certainly will need some kind of Windows presence, although I do business with some companies who absolutely, positively want nothing to do with Windows (and Microsoft). But that’s not typical. Most of us in IT resign ourselves to doing at least a little business with Microsoft.

Microsoft, however, has shown itself to be the boa constrictor of software companies. They never stop squeezing, especially when they know they have your critical applications. The hardware independence story is good, but Microsoft substitutes software dependence for it. Proceed with caution.

But the principle here is that even if you choose an operating system that runs on only one vendor’s hardware, you can at least mitigate the risk by choosing a DBMS and applications that can be transferred to other hardware and operating systems if necessary.
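To close with a concrete illustration of that principle, here is a minimal sketch, under assumed names, of how application code can be insulated from the DBMS vendor: callers see only a small interface you own, and each vendor’s client library is confined to one replaceable implementation. The `db_ops` table and the stub functions below are hypothetical, not any vendor’s real API.

```c
#include <stdio.h>

/* A small vendor-neutral interface of our own design (hypothetical).
   Application code calls only these entry points; each vendor's
   client library is wrapped in a single implementation file. */
struct db_ops {
    const char *vendor;
    int  (*connect)(const char *dsn);
    int  (*execute)(const char *sql);
    void (*disconnect)(void);
};

/* Stub "Oracle" implementation; a real one would call the vendor's
   client library (e.g. OCI) inside these three functions. */
static int  ora_connect(const char *dsn) { printf("oracle: connect %s\n", dsn); return 0; }
static int  ora_execute(const char *sql) { printf("oracle: exec %s\n", sql); return 0; }
static void ora_disconnect(void)         { printf("oracle: disconnect\n"); }

static const struct db_ops oracle_ops = {
    "oracle", ora_connect, ora_execute, ora_disconnect
};

/* Application code knows nothing about the vendor behind the table.
   Porting to another DBMS means supplying another db_ops, not
   editing any of the callers. Sticking to ANSI SQL keeps the
   statements themselves portable, too. */
static int run_report(const struct db_ops *db)
{
    if (db->connect("payroll") != 0)
        return 1;
    db->execute("SELECT emp_id, emp_name FROM employees");
    db->disconnect();
    return 0;
}

int main(void)
{
    return run_report(&oracle_ops);
}
```

Swapping one DBMS for another then means writing one new ops table against the new client library; run_report and its siblings never change. That, in miniature, is what a low transfer cost looks like.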