
November 12, 2018

What are we doing talking 3000s in 2018?

This club side of UT's stadium only rose with the 3000

We came together at the UT Club last month. Over lunch at the University of Texas alumni club, deep in the heart of Darrell K Royal Stadium, I talked with Chad Lester about something older than the football palace's official name: the way MPE has been sold to the world.

"Here we are in 2018 sitting at the UT Club, still talking about MPE and how we can go infiltrate those accounts" he said. Some of the reason third parties still find 3000 budget this year is that HP didn't position its business strategy around back-end revenues for the server. HP wanted its money up front. The up-front money meant that by the late 1990s the 3000 division at HP was sending a SWAT team of presales experts to talk at user group meetings or with IT managers who had trouble getting an order approved for a newer 3000.

HP 3000 SWAT team members like Vince Clapps were a proud addition to the sales effort. Now it looks like that push to place new hardware and earn the revenue up front for a system replacement was a concept growing fatigued. SWAT members locked down new customers doing ecommerce, but many times they'd speak at spots like a RUG conference to save a customer from migration.

Third-party application vendors roadblocked market growth as well, because they needed their revenue up front, too. Vendors like Cognos learned to create pricing that discouraged system upgrades. Every boost of power threatened to ripple into tens of thousands of dollars of software upgrades, because the vendors were allowed to clamp on like pilot fish to the leviathan of buying a bigger 3000.

"They were reversed on how they handled licensing," Lester said over lunch. "In the channel today, these vendors make all of their money off the back-end rebates from Microsoft and the security companies out there. That became the new norm while HP was still on the front side of the sale."

Lester's employer Thomas Tech wants to educate the 3000 community that another generation of storage can be integrated with the MPE that runs on HP's systems. HP-built computers are still the predominant hardware platform for the MPE computing that will head toward 2028.

The back side of this newer revenue stream is what keeps vendors providing newer components. It's not about the computer gear, as it was in those SWAT days. By 2018 the value lies in support, and in the opportunity to access the datacenter's non-MPE systems. To win the battle to keep 3000 resources on the market, new strategies are in play.

UT called the stadium War Memorial Stadium when it opened in 1924. The UT student body dedicated it in honor of the 198,520 Texans – 5,280 of whom lost their lives – who fought in the Great War; the armistice that ended it marks its centennial this week. DKR, as the Texas football stadium is known informally, has a legacy that goes as far back in football as the HP 3000 goes in minicomputing. The concrete version of the stadium replaced wooden bleachers in 1923. Mainframes were the wooden bleachers of 1972, when the 3000 arrived.

Forty-six years later, the 3000's heartbeat, MPE/iX, is still ticking away. The owners of those 3000s protect their jobs by hiring the right vendors. In 2018 those vendors supply support. A good support provider is the top asset an owner can call upon. With expertise on the wane for MPE/iX, it's crucial to stay in touch with people who can still talk about the 3000 in 2018.

05:45 PM in Homesteading | Permalink | Comments (0)

November 09, 2018

Fine-Tune: Test for disasters in any season

NewsWire Classic

Editor's Note: In October of 2001 the world was working in the aftermath of the 9/11 attacks. Our Worst Practices columnist Scott Hirsh wrote this advice about the need to test for disasters. Another crisis was going to rise up for 3000 owners just a few weeks after this article appeared, this one triggered by HP. Regardless of where your datacenter is focused, it's always a good practice to test.

This Is Not a Test

By Scott Hirsh

For those of us in the United States entrusted with a company’s information resources, the events of September 11 changed everything. Before, our business continuity and disaster recovery plans were primarily concerned with so-called “acts of God.” But we must now plan for the most improbable human acts imaginable. Who among us, prior to September 11, had a plan that took into account multiple high-rise office buildings being destroyed within minutes of each other? As you read this, the insurance industry is revising its assumptions. Likewise, we must now reconsider our approach to managing and protecting the assets for which we are responsible. Never before has the probability of actually needing to execute our recovery plans been so great.

As of this writing there have already been numerous business continuity and disaster recovery articles in the computer press. By now we understand the distinction between keeping the business going – not just IT, but also the whole business – and recovering after some (hopefully minor) interruption. And we’ve covered the issue of risk, where all the trade-offs and costs are negotiated. This whole topic was explored anew in the last few months, but it is still worthwhile to emphasize some early lessons of the attacks, from which we are still recovering.

It Had Better Work

Worst Practice 1: Trying to Fake It — I was visiting a friend’s datacenter recently, where I was told about an audit. This friend’s company spent the whole time trying to fake all the audit criteria: disaster recovery preparedness, security, audit trails, etc. At the risk of sounding like your parents, whom does this behavior really hurt? An audit is an ideal opportunity to validate all the necessary hard work required to run a professional datacenter. And should you ever be subjected to attack, electronic or otherwise, you’ll know that your datacenter will survive.

If you didn’t get it before, you’d better get it now: Faking it is unacceptable. Chances are, at some point you will be required to do a real, honest-to-goodness recovery. And if you think you’re safe just because there may not be very many hijacked planes running into buildings such as yours, think again. The threats to your datacenter are diverse and numerous. And, by the way, violent weather, earthquakes and other natural disasters are still there too.

Worst Practice 2: Not Testing — Once you’re serious about continuity and recovery, not only will you plan, but you’ll test that plan often. There are lots of reasons to test your recovery capability often. Among them are: the ability to react quickly in a crisis; catching changes in your environment since your last test; accommodating changes to staff since your last test. A real recovery is a terrible time to do discovery.

Worst Practice 3: Not Documenting — One of the biggest problems with disasters is that they come without warning. That’s why so many tests are a waste of time: anyone can recover when they know exactly when and how. The truly prepared can recover when caught by surprise. Since you won’t get any warning – except, perhaps, with some natural disasters – you’ll want to have current, updated procedures. Since you’ll probably be on vacation (or wish you were) when disaster strikes, make sure the recovery procedures are off-site and available. If you’re the only one who knows what to do, even if you never take a day off there still won’t be enough of you to go around at crunch time.

Increasing the Odds of Recovery

Worst Practice 4: Taking Too Long — At this point in technology, there are two main ways to deal with a disaster: fail-over and reconstruction. With fail-over, you are replicating data between your main site and a recovery site. These sites can be relatively near each other – across town or perhaps in an adjoining state – or far away. This kind of remote clustering, if you will, is what the largest and most critical institutions use, and the cost is considerable. However, the cost of not doing it is considerably more.

Reconstruction is more about recovery than continuity. I am guessing that the vast majority of e3000 shops base their recovery plans on recalling tapes from a vault (e.g., Iron Mountain) to a recovery site, then restoring their data either to a bare machine or one on which only MPE has been installed. This was certainly true for my own operation, as my management always deemed this less expensive method “adequate.”

But that was then. Today, the amount of data that must be reloaded is so massive that the time to recover renders this method all but worthless. True, your plan can call for a critical subset of data to be restored (not the entire data warehouse). But even current data can stretch into the terabytes, once you include the applications, utilities, etc.

So the point here is to make sure your recovery methodology is practical from a business standpoint, as well as a technical standpoint. You don’t want to be in the position of estimating “just three more days” before you’re up and running.

Worst Practice 5: Not Recovering a Complete Environment — As the state of the art advances, some technology is left behind. We’ll keep it succinct here: If you need to keep an old technology alive, you may need to provide some or all of the solution yourself. Don’t expect the recovery site to stock or maintain every peripheral ever made just because you have one esoteric requirement. And don’t forget to keep backup copies of any obsolete software packages as well.

Another aspect to this issue, recently discovered at a customer site, is the fact that diverse platforms are now highly integrated. It’s not enough just to recover the e3000. The non-e3000 systems that share data feeds must also be recovered. And don’t forget any outside data sources either. Again, if you’re faking it, you can declare victory when you’ve reconstructed an e3000 at the recovery site. In reality, that only counts if the e3000 system can support the business on its own without any external feeds.

Worst Practice 6: Ignoring the Human Factor — Even the best plans don’t execute themselves. Keep in mind who will be doing what, and how things will get done if key individuals are unable to perform their tasks. As we know, families come first, and that is proper; we mustn’t lose sight of our humanity in times of crisis. Any recovery is hard work. That counts double when there are casualties.

Reassess Your Assumptions

Worst Practice 7: A Defeatist Attitude — If you’ve been subjected to the “fake it” mentality, you’re probably demoralized. After all, who among us just wants to go through the motions? Well, it’s now a whole new world, and you have a really good shot at doing things right. But you need to forcefully make your case to those who didn’t take contingency planning seriously in the past. By the time you read this there may be stories about companies that unfortunately couldn’t recover from the September 11 attacks. We can emerge from this atrocity stronger if we do some honest introspection. Every rational businessperson should now be willing to do proper planning. If you can get over the bad practices of the past, you can position yourself and your business to be survivors.

Worst Practice 8: Datacenter Placement — As much as I enjoyed the view from my 29th floor datacenter, it’s pretty obvious now that datacenters don’t belong in certain places – high-rise buildings among them. Besides the obvious prohibitive cost of floor space, there are safety and security issues not obvious until recent events.

I have visited many co-location facilities in the past year, and they all had several things in common:

1. They were in the low-rent district.

2. They were very difficult to find, as they were essentially unmarked.

3. They were very secure (at least relative to downtown datacenters), both physically and electronically.

4. They were redundant up the wazoo.

If this does not describe your datacenter, then perhaps it’s time to consider relocation. Let’s face it, even if there are good reasons why your datacenter needs to be right downtown, I’ll bet your recovery site is in the middle of nowhere. That should tell you something.

Hope for the Best

We’re currently in reactive mode. We’ve now seen one type of unimaginable act, using airliners as missiles. For those unlucky enough to be on the front lines of that atrocity, there was no way to plan for that series of events. And it’s likely that the next event will also be difficult to imagine, and hence plan for. So even the best plans require a great deal of luck, as even the best plan is useless if there is widespread devastation beyond your control. We should be honest about those aspects of business continuity and recovery that are within our control. We must be truly prepared. But we can still hope that we never need to actually use those plans. Not like we did after September 11. At least that’s the hope.

06:14 PM in Hidden Value, Homesteading | Permalink | Comments (0)

November 07, 2018

Wayback: A month to download 3000 Jazz

Ten years ago this month HP was advising its customers to get free software while it was still online. HP said that its Jazz web server was going dark because its 3000 labs would end operations on Dec. 31. Maintained by HP's lab staff, Jazz was being unplugged after 12 years. The software played an essential role in getting the 3000 into the Internet age. Eventually HP learned to market the server as the e3000.

Bootstrapping development fundamentals such as the GNU Tools, the open source gcc compiler, and utilities ported by independent developer Mark Klein had a home on Jazz for a decade. More than 80 other programs were hosted on the server, some with HP support and others ported and created by HP but unsupported by the vendor.

The software is still online 10 years later. Fresche Solutions, which began as Speedware, continues to host Jazz programs and papers at hpmigrations.com/HPe3000_resources. HP was clear in 2008 that customers had better grab what they needed before Jazz went unplugged. HP wasn't going to move the downloadable programs onto the IT Resource Center servers or to doc.hp.com.

"Anything that people will need they should download before Dec. 31, 2008," said business manager Jennie Hou. "That's our recommendation."

The list of programs online is long, and worth a visit for a 3000 manager looking for help keeping MPE/iX well connected to a datacenter. HP created more than a dozen open source programs, which it was still supporting as of 2008. The list is significant.

• Apache
• BIND
• Many command files
• dnscheck
• Porting Scanner
• Porting Wrappers
• Samba
• The System Inventory Utility
• Syslog
• WebWise

Open source software produced or ported as unsupported freeware by HP includes:

• JServ
• NTP
• OpenSSL
• Perl
• Sendmail

Open source software produced/ported by individuals:

• Analog
• autoconf
• bash
• gdbm
• Glimpse
• ht://Dig
• mmencode/sendmime
• MPE::CIvar
• MPE::IMAGE
• NetPBM
• OpenLDAP
• Ploticus
• Python
• SAURCS
• SLS
• texinfo
• Tidy
• TIFF library
• wget

Binary-only software produced/ported and "supported" by HP:

• CRYPT
• DBUTIL
• Firmware
• Java
• LDAP
• LineJet Utilities
• Patch/iX
• VERSION
• VT3K

06:03 PM in History, Homesteading, News Outta HP | Permalink | Comments (0)

November 05, 2018

A Pro's World After 3000 Retirement

Over the past few months we've talked about the 3000 veteran John Clogg. His name is written all over the 3000 online community, as well as in the histories of companies that continue to use MPE/iX for manufacturing. He's been helpful to us in telling the story of the end of his career, one that reaches back to 1974.

He was a part of the NewsWire blog from the very first week we pushed it online. In June of 2005, with HP's exit-the-3000 decision less than three years old, Clogg wrote this about the future of access to MPE/iX source code.

HP has had three and a half years since its 3000 EOL announcement — and who knows how long before — to consider the source code issue. It is no longer a credible claim that they have not made a decision. Instead, they are simply keeping their decision secret for whatever reason.

To me that says one thing: the answer isn't the one we want. Either HP is hoping to kill off interest in non-HP support for MPE by delaying an announcement to the point that no one can afford to wait any longer, or they want to put off further alienating the HP 3000 installed base until those customers are no longer serious prospects for other HP servers. In either case, homesteaders had better not base any of their plans on being able to obtain future enhancements to MPE. The handwriting is on the wall -- in fluorescent paint! I just wish HP would admit it.

Postscript: HP never did the right thing and released the OS source to the community. Seven support companies and developers (including Pivital Solutions) got read-only access. But on a brighter note, like a lot of 3000 pros, Clogg is about to see his personal life get richer after all he's given to his employers and the community. We asked what his retirement by the end of this year is going to bring.

For the last 44 years I have been on call virtually 24/7/365. I haven't had a New Year's holiday in a few years, and for the first time in 25 years I have a job with only two weeks of vacation. Mostly I just look forward to having time: time to play, time to explore, time to develop new interests that remain unnamed at this point. I have a good job with a good company, but I am simply burned out.

In the longer term, I know I will need something to keep me busy and engaged. I have been asked by my employer whether I would be available for part-time work, so I expect there will be some of that.  I might offer my services to friends and others who need help with PC issues.

My wife and I are going on a cruise shortly after my retirement date as a sort of celebration. As an interesting window into how retirement changes things, when we were looking into airline schedules for getting to and from the embarkation point, we realized we have as much time as we want.  We can drive there and enjoy sights along the way, and on the way back. It was a revelation.

He adds, "Volunteer work of some kind is something I will investigate, and I may take up a hobby, such as woodworking. The possibilities are many and I have made no decisions about them. In the near term, I am just looking forward to having time with my family and being able to travel."

Clogg, and other experts of 40-plus years, carry stories and legends that can serve communities in the years to come. Practices of today arrived on the backs of experience built by 24/7/365 people in development and production. We've begun to work on a set of oral histories with these folks who gave 40-plus years of service. Not biographies, but stories about how this 3000 thing got started. Get in touch with me if you want to sit for a portrait.

03:17 PM in Homesteading, User Reports | Permalink | Comments (0)

November 02, 2018

Fine-Tune: Ensure Logical Data Consistency

NewsWire Classic

The MPE/iX Transaction Manager for IMAGE does not guarantee logical consistency of your data. How do you ensure logical consistency? Use DBXBEGIN and DBXEND calls around all the DBPUT, DBUPDATE and DBDELETE calls that you make for your logical transaction. Yes, the definition of a logical transaction is up to the programmer.

There can be a lot of confusion about logical consistency, mostly because IMAGE kept adding logging and recovery features over its years of development. Gavin Scott gives a clear explanation of the state of affairs.

It’s amazing how much superstition exists surrounding this kind of stuff, and how many unnecessary rituals and sacrifices are performed daily to appease the mythical pantheon of data integrity gods. Real broken chains are supposed to be impossible to achieve with IMAGE on MPE/iX, no matter what application programs do, or how they are aborted, or how many times the system crashes!

The Transaction Manager provides absolute protection against internal database inconsistencies, as long as there are no bugs in the system and as long as the hardware is not corrupting data. No action or configuration is required on the part of the user.

Logical inconsistencies (order detail without an associated order header record, for example) can easily be created by aborting an application that’s in the middle of performing a database update that spans multiple records. Of course, IMAGE doesn’t care whether your data is logically correct or not, that’s the job of application programmers.

Using DBBEGIN/DBEND will have no effect whatsoever on logical integrity, unless you actually run DBRECOV to roll forward or roll back the database to a consistent point every time you abort a program or suffer any other failure.

By using DBXBEGIN/DBXEND XM style transactions, you can extend IMAGE’s guarantee of physical integrity to the logical integrity of your database. The system will ensure that no matter what happens, either all changes inside a DBX transaction will be applied, or none of them will be. Of course, it’s still possible to use this feature incorrectly (locking strategies are non-trivial as you need to lock the data that you read as well as that which you intend to write in many cases).
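
To make the pattern concrete, here is a minimal sketch in C of the order header-plus-detail case above, wrapped in a DBX transaction. The dataset names, item lists, and buffers are illustrative, the lock is a coarse full-base lock for brevity, and the intrinsic parameter lists follow our reading of the TurboIMAGE reference manual; treat it as a sketch of the technique, not drop-in code.

#pragma intrinsic DBLOCK
#pragma intrinsic DBUNLOCK
#pragma intrinsic DBXBEGIN
#pragma intrinsic DBXEND
#pragma intrinsic DBXUNDO
#pragma intrinsic DBPUT

short status[10];      /* the 10-word TurboIMAGE status area        */
short mode1 = 1;       /* mode 1 throughout: unconditional/basic    */
short textlen = 0;     /* no user text logged with the transaction  */

/* base is the base-name array already opened with DBOPEN; hdr_buf
   and dtl_buf hold the header and detail records to be added.     */
void add_order(char *base, char *hdr_buf, char *dtl_buf)
{
    /* Lock first: locks must be held before DBXBEGIN and must cover
       everything the transaction reads or writes. Mode 1 locks the
       whole base unconditionally (the qualifier is ignored); the
       finer-grained mode 5/6 lock descriptors are where locking
       strategies become non-trivial.                               */
    DBLOCK(base, " ", mode1, status);

    DBXBEGIN(base, " ", mode1, status, textlen);

    DBPUT(base, "ORDER-HEADER;", mode1, status, "@;", hdr_buf);
    if (status[0] == 0)
        DBPUT(base, "ORDER-DETAIL;", mode1, status, "@;", dtl_buf);

    if (status[0] == 0)
        DBXEND(base, " ", mode1, status, textlen);  /* both or none */
    else
        DBXUNDO(base, " ", mode1, status, textlen); /* roll back    */

    DBUNLOCK(base, " ", mode1, status);
}

If either DBPUT fails mid-stream, DBXUNDO backs out the partial work, which is exactly the detail-without-header hazard described above.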

HP introduced a feature, far back in the MPE V days, called Intrinsic-Level Recovery (ILR). ILR can still be enabled for a database. This was sort of a mini-XM that forced updates to disk each time an intrinsic call completed, in order to ensure structural integrity of the database in the face of system failures.

I believe that on MPE/iX, enabling ILR for a database does something really nasty like forcing an XM post after every update intrinsic call, which is a serious performance problem. ILR is no longer required on MPE/iX as XM will ensure integrity without it. With ILR you might be guaranteed that every committed transaction will survive a system abort, whereas without it XM might end up having to roll back the last fraction of a second’s worth of transactions. For almost any application this difference is negligible. Do not turn ILR on!

There are more complexities if your application performs transactions that affect multiple databases or databases and non-database files. It’s possible to do multi-database IMAGE transactions, but only if the databases reside on the same volume set, I believe.

01:44 PM in Hidden Value, Homesteading | Permalink | Comments (0)