
December 31, 2018

Date upgrade deadline: now in single digits

When MPE/iX systems, both virtual and physical, see their clocks tick over tonight at midnight, it will be a significant date. The end of Dec. 31 puts MPE/iX, as crafted by its creators, into single digits for years remaining. Nine is tomorrow's number.

Whether that's nine years until end of life depends on your IT plans. If, like more than a few managers, you're retiring clean — with configurations in place to survive into 2028 — the nine years will show you're prepared. You've made your changes to work around the loss of accurate MPE/iX date keeping. At least one vendor is taking orders for this service.

Others, meanwhile, are doing the work and leaving the credit behind. Stromasys has a lot at stake in making 2028 a year of smooth pavement for the 3000 market. We've gotten word they're ready with a software solution to carry MPE/iX beyond HP's wildest visions.

For the IT manager who's retiring without a 2028 plan — and leaving Dec. 31, 2027 as a shutdown date — tomorrow is the start of the final nine years for that HP 3000. It goes without saying these managers have no current interest in the Charon virtualizer for HP's MPE/iX iron.

Everything ends sometime. 2018 wraps up this evening. Lao Tzu wrote in another century, "New beginnings are often disguised as painful endings." May your year to come be a new beginning without such pain. We'll see you in a future where options are still emerging for a surprising decade-plus to come. Some 3000 managers will be joining the march toward a Double-Digit Future for MPE/iX.

08:09 AM in Homesteading, Migration | Permalink | Comments (0)

December 28, 2018

Fine Tune: Optimized Disaster Recovery

By Gilles Schipper

While working with a customer on the design and implementation of a disaster recovery (DR) plan for a large HP 3000 system, it became apparent the implementation had room for improvement.

In this specific example, the customer had a production N-Class HP 3000 and a backup HP 3000 Series 969 system in a location several hundred miles from the primary.

The process of implementing the DR was completed entirely from a remote location — thanks to VPNs and an HP Secure Web Console on the 969. One of the most labor-intensive aspects of the DR exercise was to rebuild the IO configuration of the DR machine (the 969) from the full backup tape of the production N-Class machine, which included an integrated system load tape (SLT) as part of the backup.

The ability to integrate the SLT on the same tape as the full backup is very convenient. It results in a simplified recovery procedure as well as the assurance that the SLT to be used will be as current as possible.

When rebuilding a system from scratch from a SLT/Backup tape, if the target system differs in architecture from the source system, it is usually necessary to modify all the device paths and device configuration specifications with SYSGEN and then reboot the system in order to be able to utilize the tape drive of the target system to restore any files at all.

(This would be apart from the files restored during the INSTALL process — which does not require proper configuration of any IO component at all).

Some would argue that this system re-configuration needs to be completed only once, since any future system rebuilds would require only a “data refresh” rather than a complete system re-INSTALL.

I say that this would be true only in very stable system environments where IO configurations — including network printer configurations — are static and where TurboIMAGE transaction logging is not utilized. Otherwise there could be unpleasant results and complications from using stale configurations in a real disaster recovery situation. In any case, there really is no reason to take any chances.

The step of creating a proper DR target system configuration environment is achievable without the labor-intensive part — or at least without repeating the manual chore of re-configuring the target system each time the DR is exercised.

Unless both the production system and the DR system are architecturally similar (i.e., they belong to the same HP 3000 family), the configuration of the target system (the DR machine) cloned from the source system (the production machine) will be non-trivial.

At a minimum, before data restore can begin on the DR machine, the path hierarchy of the tape drive associated with the backup tape must be re-created. Further, if the subsequent restore requires more than just the system disk, all the path components for all the disk drives must also be created.

In a real DR situation, this task can be daunting at best — particularly since it may be difficult to access the appropriate documentation that describes the pertinent SYSGEN configuration. How much preferable it would be to complete this configuration well in advance of the hoped-to-never-happen event.

In fact, it is entirely possible to create an appropriate DR configuration environment that is (almost) completely integrated into one’s production environment.

SYSGEN IO requirements

In order to provision a potential DR HP 3000 system’s IO configuration requirements into an existing production HP 3000 SLT, it is only necessary to configure all of the DR path components into the existing production system’s IO configuration.

The fact that these paths do not exist on the production (source) system is immaterial — as long as you can withstand the menacing, although perfectly innocuous console error messages that accompany a reboot of a system so configured.
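What does that look like in practice? Here's a rough sketch of adding a DR machine's tape and disk paths to the production configuration in SYSGEN. The paths, ldev numbers, and device IDs below are invented for illustration, the annotations in << >> are notes for the reader, and the exact command syntax can vary by MPE/iX release, so treat it as the shape of the work rather than a paste-ready script.

    :sysgen
    sysgen> io
    io> apath path=10/4/20 id=HP28696A        << adapter for the DR tape path (values invented) >>
    io> adev ldev=107 path=10/4/20.0 id=C1537A class=tape
    io> adev ldev=102 path=10/4/4.2.0 id=ST34573WC class=disc
    io> hold
    io> exit
    sysgen> keep confdr                       << save to a configuration group of your choosing >>
    sysgen> exit

The high ldev numbers (107, 102) anticipate the renumbering scheme described below.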

There is also the matter of actual device numbers — and that is why I included the “almost” when mentioning “completely integrated” earlier.

Clearly, it is not possible to have duplicate device numbers when configuring both production and DR devices into the production SYSGEN IO configuration. So, in order to distinguish between the two systems (one the real production, the other virtual DR), I simply add 100 (you can choose any number) to the device numbers associated with the virtual machine. Then when actually testing or invoking the DR process, it is a simple matter to change the device numbers in a batch job designed for that purpose.

Another batch job could be pre-built that would add the appropriate disk drives and volume sets to the system’s disk pool, using VOLUTIL. These batch jobs would be included in the full backup tape and could be restored almost immediately following the INSTALL by referencing :file tape;dev=107 (to use my example of adding 100 to the corresponding virtual device).
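A hedged sketch of such a pre-built VOLUTIL job appears below. The job name, ldev numbers, and member names are placeholders, and NEWVOL's parameters (set:member, ldev, permanent and transient space percentages) are given from memory; verify them against your release before relying on this.

    !JOB DRVOLS,MANAGER.SYS
    !COMMENT Hypothetical post-INSTALL job: add the DR disks (ldevs 102-103,
    !COMMENT following the add-100 scheme) to the system volume set.
    !RUN VOLUTIL.PUB.SYS
    NEWVOL MPEXL_SYSTEM_VOLUME_SET:MEMBER2 102 100 100
    NEWVOL MPEXL_SYSTEM_VOLUME_SET:MEMBER3 103 100 100
    EXIT
    !EOJ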

The command :restore *tape;{fileset};directory;olddate;keep;create;show does that restore (where {fileset} corresponds to the fileset that includes the appropriate device-number-change and VOLUTIL batch jobs). One could take this technique one step further in the case where the DR target machine is unknown.

In such a situation, you could create a SYSGEN IO configuration that includes path constructs for any possible virtual machine that you could think of and include them in the host configuration – adding 100 for devices associated with virtual machine 1, 200 for virtual machine 2, and so on.

08:03 PM in Hidden Value, Homesteading, Newswire Classics | Permalink | Comments (0)

December 26, 2018

3000 security status: obscure and secure

Earlier this year Jeff Kubler of Kubler Consulting was trying to label the status of MPE/iX security. The distinction between hardware and software is noteworthy: whatever security the physical 3000s had carries over to the virtualized 3000s running under the Charon emulator from Stromasys.

Kubler built a list of the known conditions and advantages:

  • Unknown operating system
  • Password protected
  • Must know how to address it with HELLO
  • Must know or guess the user
  • Could have additional security, like VEsoft's, strengthening the login string
  • Security on the account, user and group level could keep those who even know a login from getting anything important 
  • No visiting websites while using an HP 3000 application

When Alan Yeo of ScreenJet said the 3000's security is weak ("if you have locked the doors, then it will stop someone who just tries the door handle"), Pro 3K's Mark Ranft wanted to disagree.

The correct description is Security through Obscurity. If your HP 3000 has VESOFT's Security/3000 installed, and it is properly configured with two-factor authentication, I don't know of anyone, without physical access to the machine or access to unencrypted backup media, who could break in.

Where the HP 3000 falls short is in encryption of data that is in transit between the user and the system.  For this, I recommend you turn to MiniSoft Secure 92 for terminal access.

And unfortunately, if you host a website on the HP 3000, I have to admit the HP WebWise MPE/iX Secure Web Server is not TLS 1.2 capable. This would be a showstopper for PCI certification. But this is only a big deal if you accept credit card or other protected information via the website.

Finally, depending on your location or customer base, you may also need to worry about GDPR.

That two-factor feature might not be fully available under MPE/iX, depending on your definition of 2FA.

John Clogg said that asking for two passwords or a secret question is not two factors.

One weakness of MPE is that unless you have a password insertion utility, such as STREAMX, passwords for jobs must either be typed in when streaming, which precludes many job scheduling methods, or they must be hard-coded in the jobs. If you can prevent command-line access, some of these weaknesses can be overcome. I would say that the 3000's security is pretty weak without Security/3000 or a similar product.

With MPE or any other OS, security is effective only if those administering the machine take it seriously and don't make dumb mistakes. Years ago an employee of a company I worked for was being visited by her sister who was an HP SE in another city.  I caught the sister trying to log on to our system using the default passwords for TELESUP and other standard accounts. Fortunately, I had changed them all, but I'm sure this approach works in many cases.

I often see systems where jobs with hard-coded passwords have read access granted to "ANY", lots of users with excessive privileges, and so forth. Unfortunately, these problems persist because most IT auditors don't know an HP 3000 from a hole in the ground.
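To illustrate the weakness Clogg describes (this sketch is ours, not his), a job with hard-coded passwords carries them in plain text right on the JOB card. All names and passwords here are hypothetical:

    !JOB NIGHTLY,MANAGER/SECRET.PAYROLL/ACCTPW,DATA
    !COMMENT The user and account passwords above are readable by anyone
    !COMMENT who can read this job file.
    !RUN POST.PUB.PAYROLL
    !EOJ

A password-insertion utility such as STREAMX supplies the passwords at stream time instead, so the stored job file never has to contain them.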

Ranft got in the last word on the matter, which seems to suggest that VEsoft's Security/3000 is essential.

If you set up Security/3000 to ask you for a series of questions, like your dog's birthday, instead of just a second password, I am pretty certain that qualifies as two-factor authentication. Wikipedia defines it as: Two-factor authentication (also known as 2FA) is a type (subset) of multi-factor authentication. It is a method of confirming a user's claimed identity by utilizing a combination of two different factors: 1) something they know, 2) something they have, or 3) something they are.

And you are correct. Most un-enhanced HP 3000 systems had poor security. Vladimir Volokh of VEsoft made a living visiting companies and selling them Security/3000 and the rest of the VEsoft suite by breaking in while they sat beside him at the console. I would always enjoy my visits with Vlad. After a few visits, I learned enough that he was no longer able to break into my systems. But in those days there were some backdoor ways to get PM capability.

05:40 PM in Homesteading, User Reports | Permalink | Comments (0)

December 24, 2018

Gifts given, 11 years after a Christmas

Eleven years ago we wished for nine things that would help 3000 users in the years to come. At the close of 2007 there was no virtual HP 3000 product like Charon. We didn't even allow ourselves to wish for such a thing.

But here on the last office day before Christmas, it's fun to review our holiday wish list. Let's see what we got and what HP withheld until it was too late for the vendor to supply what the community requested.

We've heard these desires from HP 3000 customers, consultants and vendors. Some of the wishes might be like the Red Ryder BB-Gun that's at the center of the holiday epic A Christmas Story. As in, "You don't want that, you'll put your eye out." If you're unfamiliar with the movie, the line means "I don't want you to have that, because I worry what you will hurt once you get it."

1. Unleashing the full horsepower of A-Class and N-Class 3000 hardware
2. Just unleashing the power of the A-Class 3000s (since every one of the models operates at a quarter of its possible speed)
3. Well, then at least unleash the N-Class systems' full clock speeds
4. HP's requirements to license a company for MPE/iX source code use
5. A way to use more than 16GB of memory on a 3000
6. A 3000 network link just one-tenth as fast as the new 10Gbit Ethernet
7. A water-cooled HP 3000 cluster, just like IBM used to make
8. A guaranteed ending date of HP's 3000 support for MPE/iX
9. Freedom to re-license your own copy of MPE/iX during a sale of a 3000

HP finally supplied Numbers 4 and 8. The first created the Source Code Seven, vendors who hold licenses that let them create workarounds and custom patches for MPE/iX issues. Number 8 arrived during the following year. It can be argued HP didn't end all of its MPE/iX support until several years beyond that official Dec. 31, 2010 date.

Some of the more inventive indie support companies have devised ways to use 32 GB of memory for 3000s, too. Ask yours about Number 5.

The last two items seem like real BB-Guns. But they have a chance of helping the community see the 3000 future more clearly, instead of putting its eye out.

A guaranteed ending date for HP's 3000 support is something both homesteaders and migration experts desire. By moving the finish line twice already, HP has kept customers from finishing migrations, or even starting them, according to migration partners.

What's more, the "we're not sure when support is really done" message keeps the 3000's service and support aftermarket in limbo. Customers tell us that they will be using their HP 3000 systems until their business demands they migrate away. HP plans to change its business practices someday for the HP 3000. But nobody knows for certain what day that will be.

That brings us to No. 9, the freedom to re-license your own MPE/iX. HP development on this software ends in one year. That's the end of changes to the operating environment, a genuine Freeze Line for MPE/iX. HP should be able to compete on a level field with the rest of the community. HP Services seems to need those special 3000 licenses.

Number 10? A wish for a long life and continued interest in MPE/iX from the HP 3000 gurus of the community. Someone can bring some of these gifts after there's no one inside HP who cares about the 3000 community.

07:40 AM in History, Homesteading, Newswire Classics | Permalink | Comments (0)

December 21, 2018

Fine Tune: Rebooting a 3000 Remotely

I want to provide an option for rebooting an HP 3000 remotely using LDEV 21. How do I do it? Can I use a modem and landline, or an IP address, to get to LDEV 21?

Gary Stephens replies

Yes, you can use a modem on the remote support port set to auto answer. This will definitely work. It was more about controlling access to the console remotely. I recall one site that had a modem with a remote call-back to a known inbound number that was effective. Upon answer you were prompted for a U and P that had a dial-back associated with it. Ultimately it's all down to risk and your appetite for change.

Billy Brewer adds

You could hook up a cheap laptop connected to the console port (with a USB to serial converter). Then use TeamViewer or any shared desktop utility you prefer.

Tracy Johnson reports

An older PC with one NIC (you can remote into) and a serial port will also do. These days we just have a PC as the console via a terminal emulator (Minisoft, Reflection, or QCTerm). We use two NICs and just remote into the PC and open a window from there to do our remote reboots.

Mark Ranft notes

All the newer A- and N-Class systems have an IP-configurable remote console. I assume yours is an older system.

The PC options are excellent. If you have a DTC, you can set up back-to-back DTC switching. You configure a host (outbound) port on the DTC and you choose it by IP and TCP port ID.

07:27 AM | Permalink | Comments (0)

December 19, 2018

Even DTCs can spark memories for 3000s

[Diagram: DTC to 3000 N-Class config]
The Distributed Terminal Controller was a networking device with intelligence that stood between an HP 3000 and a peripheral. We use the past tense because that's how DTC usage now reads at many of the homesteading 3000 sites. In some places, DTCs continue to let 3000s shake hands with other devices.

At TE Connectivity in Hampton Roads, Va., the box works between an N-Class 3000 (the ultimate generation) and an impact printer (of considerably older vintage). Al Nizzardini makes the pair work for the company that employs 3000s across the globe, from North America to China.

"Our DTC 48 with 3-pin ports died on us," Nizzardini said. "We have an impact printer connected to the 48, the only thing that is hanging off that DTC." At first the solution to the blocked connection was to use an even older controller, the DTC16 with modem ports. That would've involved shorting out pins on the DTC 16.

Nizzardini asked and a few veterans answered. Francois Desrochers said Nizzardini would need pins 2, 3 and 7 (send, receive, ground). "You may have to short out 5 and 20," he added. Another combination from Gary Robillard suggested connecting 4 and 5 together and 6, 8, and 20 together. "We always had 2 and 3 crossed—2 to 3 and 3 to 2," he said.

It's been 20 years since HP last released a DTC, something that's still useful for older peripherals. The intel to keep one connected to the latest 3000s is still available in the 3000 community. Old doesn't mean dead when someone remembers the essentials. Nizzardini solved his problem without shorting out pins, just by locating another working DTC 48. MANMAN drives the workflow at TE Connectivity, but the real driver is pros like Nizzardini, helping one another remember.

 

05:35 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 17, 2018

What to say back to "Your system sucks."

There are still moments out there waiting for the homesteading 3000 manager. The ones where someone in IT who's pretty sure they know better about systems says something like "MPE sucks." Or anything equally glib, dressed up a little to hide the ignorance.
 
"MPE sucks" is something like "the Stones were hacks." It’s a matter of taste and what you know. There’s too much legacy software out there doing production work to dismiss anything out of hand. I find that the more technical the IT administrator, the more they seem to like the clean choices, those with shorter pedigrees and clearer parentage. MPE/iX, being in its late 40's of existence, feels like it's just too out of date.
 
If that were true, then a company like Stromasys would have failed at selling an emulator into the MPE/iX marketplace. Charon is working and moving data where it needs to go.
 
I’ve talked for thousands of hours to people who cut code and build application suites. The dance between developer, administrator-CIO, and end user is interesting and frustrating. Using something older is not an ignorant move. What sucks, if anything, is a tunnel vision about the best tool to preserve a company's investment.
 
I've read the following in the last 24 hours, shared by a vendor who really needs you to see that cloud IT is your next best future.
The person in charge of the software isn’t generally involved in the day to day. The only thing they know is that the job is getting done, and “If it ain’t broke, don’t ax it.” They’re too removed to realize that it is broken, and there’s no one questioning them about whether something could be done 20 percent faster or 10 times easier.

Neither of these stakeholders is in a position where they can see the problems. What they need is a different perspective.

When a different perspective can respect the investment in MPE/iX, and acknowledge how much less fast or easy an alternative becomes once you factor in the cost of change — then it might be time to talk futures and alternatives.
 
People like the tools that they like. I don’t try to win the PC vs Mac debates anymore. It does annoy me to see a tech expert dismiss something. I have a friend who loves Android and slams iOS, who uses Linux and hoots at Windows. For him, the ability to flip a million software switches and manage his own filesystem is the smartest way to go. The 3000 marketplace started to see this when SAP crept in to try to replace MPE/iX. That's why Kenandy has been able to stand in at a few 3000 sites. Its switches are already set in positions that let work get done.
 
Advocates of the more complex choices usually don’t understand how smart they are in relation to everybody else. I encountered this in our editorial business just a few days ago.
A colleague who's a website developer belted out that "WordPress sucks" stink bomb. To veer into the weeds on editorial and author websites, I'm one of the many who use WordPress to manage content — blog entries, pages for books or services, and more. WordPress is classic and yet evolving, and it's everywhere. Some might think of it like a legacy choice. The alternatives for content management are harder to customize. Joomla is that colleague's favorite. For me, it was the bad, beautiful girlfriend who never felt safe.
 
I tried to make Joomla work for more than three years, but it was impenetrable. Some of that was probably my build-out of its theme, some my unfamiliarity with Joomla (I know WordPress dashboards and plug-ins much better.) I didn’t cut code. I made content and thought up business practices.
 
Keeping a website in clean enough shape to remain useful is not automatic, not in 2018. 
 
It didn’t get better once the Joomla site for the Writer's Workshop got injected with malware scripts. Twice. I changed hosting and got an intermediate firewall company (SiteLock, $60 monthly) to make the security problems go away. 
 
Like an MPE/iX customer who's finding problems with hardware, I had to get my resource chain in order. My new host auto-updates my WordPress (and believe me, I know people are trying to hack into WordPress. It’s everywhere, like Windows) and my new webmaster wants to secure my site as much as I do.
 
Insisting on control and customization — like the MPE/iX customers must, because they're supporting decades of data — makes things stickier. My challenge with website user experience is I learned publishing in the paper era. We controlled every user experience because the medium was the same everywhere. Losing control, and giving myself over to the dynamic nature of the web, still annoys me. Three different sizes of smartphones, and two mobile operating systems, plus the vagaries of browsers, alter everyone's experience.
 
Don't let anybody tell you your legacy choices suck. IT can sing from more than one set of chords.
 

01:58 PM in Homesteading, Migration | Permalink | Comments (0)

December 14, 2018

Routers and switches and hubs, oh my!

Editor's Note: Initial HP 3000 hardware networking can be like a trip down a Yellow Brick Road. Here's a primer for the administrator who's wondering if that HP 3000 can link to a network.

By Curtis Larsen

Auntie MAU! Auntie MAU! A Twisted Pair! A Twisted Pair!

Once upon a time networks were as flat as the Kansas prairie, and computers on them were a lot like early prairie farmsteads: few and far between, pretty much speaking to each other only when they had to. (“Business looks good again this year.” “Yep.”) Most systems still used dumb terminals, and when speaking to anything outside the LAN, system-to-system modem connections were the way to do it.

A tornado named the Internet suddenly appeared in this landscape. It uprooted established standards and practices, swept aside protocols and speed limitations, and took us into a Technicolor networking landscape very different than what was there before.

Toto, I get the feeling our packets aren’t in Kansas anymore

Smaller companies were tossed before the tornado to eventually land and quickly begin growing again in the new environment. Large companies like IBM, HP, Digital, and Microsoft, who were rooted and established in their own proprietary standards (it sounds like an oxymoron, but it's true), survived by generally ignoring the howling winds. Eventually, munchkin-like, they all came out to see what the general fuss was about, and found that a house-sized chunk of change (pun intended) had landed.

Networking, and the TCP/IP protocol, had truly arrived in style, bringing strange new applications and markets. Serial connections and proprietary networking ("What do you mean we don't need SNA to connect to the Wichita office anymore?") gave way to a new kid on the block. And her little dog, too.

Follow the Yellow-Colored-Cable-and-Labeled-at-Both-Ends Road!

So then the HP 3000 managers found themselves sitting in a strange new networking land of strange new networking things. And for some of us, trying to understand the whole of it all — especially in relation to a "legacy" system like the HP e3000 — was a little daunting. What are all these networking black boxes we plug the system into, and what do they all do? How can they make life better? (How can they make life worse?) If you're not sure (or just plain curious) read on.

We’re off to see the wizard — this wonderful wizard of ours!

The networking wizard of your HP 3000 system is a program named "NMMGR." It allows you to define your networking hardware and how the system creates connections with it. But what things can you define? Before we talk about connecting to things, we should probably take a crash-course in the things you're connecting to.
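As a taste of the CI side of that wizardry, the commands below show roughly how a configured network gets brought to life. The interface name LAN1 is an assumption (yours is whatever was defined in NMMGR), and the << >> annotations are notes for the reader:

    :run nmmgr.pub.sys             << define links, NIs and gateways in NMCONFIG.PUB.SYS >>
    :netcontrol start;net=LAN1     << bring the network interface up >>
    :nscontrol start               << start network services (VT and friends) >>
    :run nettool.net.sys           << ping and other diagnostics live here >>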

Which path do I take? Well, you could take this one, that one, or both...

The basic networking boxes you’ll connect to are hubs, routers, switches, bridges, and gateways. Oh My. Let’s take them one at a time.

Since life is like an analogy, I’ll stretch one for the hub to go like this: If your network traffic is like water through a hose, then a hub is like a splitter, allowing multiple exits. Generally speaking, a hub simply splits the traffic from the “incoming” line into each connected port “out.” This is cheap and simple to set up if you don’t have a lot of connections, but like too many divisions on any hose, too many hubs will make the end connections anemic. The fewer connections the better, so most hubs have no more than 24 ports total.

Obviously, to make things better for all connections in larger networks, more “water pressure” was needed — and the switch was born.

Pay no attention to that man behind the curtain!

No, I’m not talking about the System Administrator. A switch looks very similar to a hub, but the appearance ends there. Again, if your network is like a stream of water in a hose, then your garden-variety switch is like a water tank, adding pressure to the line. Huge water tanks are placed at the heart of a city’s water system, while small tanks are placed on buildings. At the heart of most networks — tended by a cooing Network Administrator — is a core switch (the main tank).

Additional “work group” switches (building-sized tanks) can be used in wiring closets for special-need areas of the network. So, although a hub and a switch both offer multiple connections, the resulting “streams” have vastly different origin and force. Now that we’re one big speedy networking family, no one minds if it all fails, right? No? Well love can build a bridge, and so can electronics.

My Network’s crashing… What a world! What a world…

Having all your network connections on one physical segment isn’t too grand — especially when it fails. By segregating physical networks and then “bridging” them together, you ensure that in the face of adversity, some people can still laugh at the ones who can’t work. Aquariously speaking, a simple bridge is like a valved pipe between two water systems, passing water in both directions, and shutting one side’s valve if that system “loses pressure” (goes down). You say you want to route water based on content? Well then.

This here’s the ‘Packet of a Different Header’ you’ve heard about

Simply put, a basic router is an intelligent (logically) one-way bridge, examining network data information and very quickly sending data packets down one line or another. In our epic analogy, a router could be a thermal valve, forcing only cold water to flow this way, and hot water that way, preserving us from the heartbreak of tepidity. Since the router has to work quickly, it usually works at a lower level than other equipment does, caring less about content and more about destination. You say you’d like to exchange hot water with someone else? You’d like the gate to swing both ways?

There’s no place like the home network! No place like the home network…

A router is excellent at sending packets from Here to There (and not necessarily Back Again), but nothing beats the gateway for two-way communication. A gateway takes data from one network and sends it to another, even re-creating the data packet on the other side, if need be.

To stretch our analogy to its limits, we could say that two different water systems exist, having the same characteristics, including temperature. One system is chlorinated, while the other is not, and so simply allowing the water to pass unmolested would be an issue — one system would become diluted, and the other exposed. What we need is a filtration pump that allows the water to be pumped in either direction, adding chlorine one way, and taking it out in the other direction.

Connecting to the Internet requires a gateway, since your home network doesn't "know" how to reach something out there. What it does know is how to hand off a data packet destined for "Not Here" to a gateway for processing. The gateway in turn checks the packet's address and sends it to the best possible network closer to the packet's Ultimate Destination, re-labeling the packet as it does so, and putting its own address in the packet's "return address." If the packet's Ultimate Destination isn't on the new network either, then the gateway there does the same thing until the packet finally hits the Emerald City.

On its way back home, because of all the “return addresses” it picked up, the packet passes back through each gateway that it came from until, clicking its little ruby slippers, the packet realizes it is in no place but home.

Because of its intensive examination work, a gateway is almost always dedicated to its task, especially on larger networks. It was the gateway’s filtering abilities that led to using them as a firewall to protect networks by purposefully filtering and/or denying different types of connections and data. But the firewall is a topic all its own — just make sure you use one!

And you were there, and you, and you.
Oh Auntie Carly, there’s no place like MPE!

So there you have it — Networking Devices 101. Now that you know what you can connect your e3000 to, you can come up with some ideas on how to use them, and answer questions about what to connect to. Should a 3000 be connected to a hub, or to a switch? (Switch!) Does a printer need to be connected to a hub or a switch? (A hub will usually be fine.) Should I use my 3000 as a gateway? (I think not.) Should the physical part of my network the e3000 is on be bridged? (Yes.) Can I configure a gateway and connect my e3000 to the Internet? (Certainly. But make sure you have a firewall first!)

Curtis Larsen has been working with HP 3000s for over 25 years, and believes that, given enough time, any application can be written using the CI.

07:02 PM in Hidden Value, Newswire Classics | Permalink | Comments (0)

December 12, 2018

Source code for MPE/iX: Security, by now

Ten years ago this week the 3000 community was in a state of anticipation about MPE/iX. HP had an offer it was preparing that would give select vendors the right to use the operating system code. The vendors would have a reference-use-only license agreement for MPE/iX. No one knew whether the source would have any value, said Adager CEO Rene Woc.

Adager, the company whose 3000 products are so omnipresent they held a spot on the Hewlett-Packard corporate price list, believed there was potential for independent support and development vendors. What was far less certain was how far HP would let source go to solve problems for the 3000 community.

"Source code is important whenever these kinds of [vendors] have support from HP, which most of them do," he said in that month of 2008. But HP engineers can look at source, just as third parties will do, "and the answers won't come instantaneously. In the meantime, you have to get your business back on track, and I think that's what the customer is eventually interested in. It will be nice to have that additional [source code] resource — especially in the sense that it will not be lost to the community."

There was a chance that HP's source licensing terms would be too restrictive, "to the point where you say that you are better off not knowing, because then we're free to use all the methods we've worked with while we didn't have source." After getting a license to source, Woc added, "you might have to prove that you got your knowledge through a different source than HP's source code. We will see."

That sort of proof has never been required. Not in a public display, at least. Source code, held by vendors such as Pivital Solutions and others, has been a useful component in workarounds and fixes. HP never gave the community the right to modify MPE/iX. This turned out to be a good thing, as it kept the 3000s stable and made support a manageable business for application vendors.

There was also the wisdom that the resource of HP's code would have to prove itself. At least it held a chance for rescue and repair.

The source code "is probably a security blanket," Woc said in 2008. "In that respect, it's good that it will be available, that they're starting to offer some things. We'll have to see what kind of conditions HP will offer in their license agreements." 

Having source access through a license did not automatically make license holders better providers of products and services, he added. "You cannot assume, even with good source code readers, that the solutions will pop up," he said. "A lot of the problems we see these days are due to interactions between products. So the benefit for the customer would be based more on the troubleshooting skills that an organization can provide."

"The basic resources [of source] won't make things better by themselves," Woc said. "It's a matter of troubleshooting." 

02:18 PM in History, News Outta HP | Permalink | Comments (0)

December 10, 2018

HP's 3000 boxes step closer to solid storage

[Photo: SCSI2SD V6 Rev F board]
Almost two years ago, an expert in HP's 3000 systems was working to use solid state disks (SSD) with the computer. John Zoltak was trying to link the server to microSD cards late in 2016. He checked in with us this week to report success on the project.

SSD on 3000 hardware from HP has been a dream for several decades. Imperial Computer had a solid state unit early in the 1990s that held a promise of faster IO transfer on MPE/iX. The cost was astounding compared to moving media and the capacity was a fraction of spinning disk drives'. Much later, SSD has become something of a desktop standard and is an active choice in enterprise servers, too.

The MPE/iX hardware from HP — to us, something called an HP 3000 — wanted to play from SSD, too. In his prior report, Zoltak was trying to copy one 917LX disk to a new disk on the server's SCSI bus. A 4GB drive is standard on a 917, so just about any microSD card would match that storage. Now there's a V6 edition of SCSI2SD, a combination of hardware and software that delivers SD storage to HP's 3000 iron.

The combination now works beautifully, said Zoltak, who's working at Fives North American Combustion in Cleveland, Ohio. "You want the V6 boards," he said. "The V5's are much slower. The V6 takes a full size SD card and up to 128GB has been tested." Michael McMaster, the inventor based in Australia, has engineered the latest version of his product "as a complete redesign for the V6 boards, which use a completely different microcontroller." The device is for sale online at Inertial Computing. Today's price is $105 including 16GB of microSD.

The product employs a SCSI-2 Narrow 8-bit 50-pin connector. It does SCSI FAST10 synchronous transfers at 10MB/second. Zoltak is reaching way back into the HP 3000 hardware closet to test. He's attached the SCSI2SD to a Series 917.

"I have the board sitting on top of the system with a cable around to the back on the same SCSI as the 917's DAT and DLT drives. I did a reconfigure and a restore to the SSD. Seems to be fairly quick. While restore was running I used HP Glance and saw that the disk was doing about 65-70 IO's per second. This is not as fast as the Nike array it came off of, but then it was on a differential wide SCSI."

The bigger benefit is that the HP MPE/iX iron can rely on SSD instead of moving media. Disks are among the leading culprits in HP 3000 failures in 2018. Tape is a close second. Storing and moving bits gets complicated while using hardware that HP certified for storage with 3000s more than a decade ago.

Newer storage reduces the risk of homesteading. This is one of the benefits of using a virtualized 3000, too.

Zoltak has been working directly with McMaster. "After many go-arounds he sent me a new revision of the board. It wasn't until now that I finally got around to trying again, and it works beautifully. It really is amazing to see an HP 3000 system like this, which used to run on [disks the size of] washing machines, now running off of a 1-inch-square and cardboard-thick media."

SD-based storage can be a staple of the MPE/iX experience using Charon from Stromasys, too. IOs are faster in a native configuration where SSD on an Intel box links directly to a PCIe bus. Using PC-based disks, of course, is one of the serious advantages to using a Stromasys Charon emulator for 3000 work. The 9x7s are so old they don't have a Charon equivalent, but the strategy is the same.

01:44 PM in Homesteading | Permalink | Comments (0)

December 07, 2018

Memory and Disk Rules for Performance

NewsWire Classic

By Jeff Kubler

You need to get management support for your efforts to keep your systems performing at their best. Memory and disk are two components of your performance picture under MPE/iX. Main Memory is the scratch pad for all the work that the CPU performs. Every item of data that the CPU needs to perform calculations on, or update, must be brought into Main Memory.

CPU used to manage Main Memory: The CPU must manage memory. It must cycle through the memory pages, marking some as Overlay Candidates (this means that new data from disk may be placed here), noting that some are in continued use, and swapping others out to virtual or what is called transient storage. Swapping to disk occurs when data is in continued use but a higher priority process needs room for its data. To accommodate this higher priority process and its need for memory space, the Memory Manager will swap the memory for the lower priority process out to disk. The more activity the Memory Manager performs, the more CPU it takes to do this. Therefore it is the percentage of CPU used to manage memory that we use as a measurement.

Page Faults per Second: A Page Fault occurs each time a memory object is not found in memory. The threshold for the number of Page Faults per second that can be incurred before a memory problem is indicated varies with the size and the power of the CPU. Larger machines can handle more Page Faults per second while a smaller box will encounter problems with far fewer.

An exceptional number of Page Faults should never be used as the sole indicator of memory problems but when observed should be tested with the memory manager percentage. If both agree, you have a memory shortage. There are some strange things that I have observed with Page Faults, so it does not stand alone as an indicator of memory shortage.

The number of Page Faults per second and the amount of CPU needed to manage Memory are always evaluated in conjunction with each other. That is to say the high Page Fault Rate will not be considered a problem if the Memory Manager Percentage is not above 4 percent.

The Disk Environment is usually referred to as Secondary Storage. This is where all the data needed for system use is stored. Since Main Memory is not large enough to store all of the data that will be needed by all the processes, there must be a location for this larger pool of data. In the MPE/iX environment a great attempt was made to limit the impact of the Disk Environment so that it could not be the bottleneck that it once was in the Classic environment. Even though the Disk Environment does not have the significance it once had, this area can still be a bottleneck. As the CPU speeds increase, bottlenecks will become more significant.

Several different factors can affect the Disk Environment. One of these is data locality, which comes in two types: data locality within IMAGE datasets, and data locality across the disk itself.

Data locality across Disk: This refers to the location of separate pieces of files (called extents). When files are placed on the disk, they can be placed in contiguous sectors, or they can be placed in non-contiguous locations, even on many different disks. When files are not in contiguous locations they are said to be fragmented. The advantage of contiguous location is that greater efficiencies are allowed in retrieving data. When files need to be read, the head movement of the disk drive is minimal if files are in contiguous locations. The head moves to the location and the retrieval begins.

As the disk fills up, the system cannot find one contiguous location to build a new file. Therefore, the system breaks the file up into extents and places the file wherever it can. A system reload will put files back into contiguous location (usually back on the location of the file's file label), or products such as Lund Performance Solutions' De-Frag/X can be used to put the files back into contiguous location.

Operating systems allocate disk space in chunks as they create and expand files and transient disk space (swap areas, etc.). When files are purged, these chunks are released for reuse. Over time the disk space may end up fragmented into many small pieces, which can slow the performance and the reliability of the system.

To observe and correct fragmentation on MPE, you can use the De-Frag/X product from Lund Performance Solutions or the CONTIGVOL command of VOLUTIL. The latter is stable and reliable, but requires multiple passes to get the best results.
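A hedged sketch of the VOLUTIL route follows. The set and member names are examples, and CONTIGVOL's exact parameters should be checked with HELP inside VOLUTIL on your release:

    :run volutil.pub.sys
    volutil: contigvol MPEXL_SYSTEM_VOLUME_SET:MEMBER1   << one pass over one member >>
    volutil: exit

Because CONTIGVOL needs multiple passes for best results, the practical pattern is to repeat it, perhaps from a scheduled job, until the gains level off.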

Data locality within IMAGE datasets is the other area of major concern. There are two different types of datasets to be concerned with: detail datasets and master sets.

The Detail Datasets: this type of set holds the day-to-day data input. Detail sets begin with nothing in them. When a record is added, 1 is added to something called the high-water mark, a number that tells how many records have been in the set, and the record is placed in the set.

The problem is that IMAGE automatically reuses space that is given up when a record is deleted. This space is often called the delete chain. New records are placed in the most recent location available on the "delete chain." This means that new records are not in the same physical locality as the rest of the records and may be far removed from the other records.

The ideal state for a detail dataset is one where the detail entries are sorted by the key field. This allows the data to be retrieved in the smallest number of IOs, making efficient use of the MPE system's pre-fetching of data. When this is not the case, we can measure the dataset's lack of efficiency with something called the Elongation factor. This is simply a measure of how many more IOs the user must perform to retrieve desired data.

The Master Datasets have unique identifiers (key fields). There are two types of master sets: a manual master and an automatic master. Manual masters have user-entered master entries, while automatic masters have entries placed in them automatically to accommodate access to detail records. The issue of importance to performance here is something called the hashing algorithm. This is the method the database uses to calculate the location of the next record placed in the set. The intent is to make the master set as evenly distributed as possible.

The hashing algorithm uses the size (capacity) of the set in its calculation. A poor size, or a size that is not large enough, will result in an unevenly distributed dataset. A poor size is most easily described as one that is not a prime number. When the hashing algorithm calculates a location in such a set, there is a higher potential that a record will already exist in that location: a capacity of 1,000, for example, lets keys that share factors with 1,000 pile onto a fraction of the locations, while a nearby prime such as 1,009 breaks up those patterns. When a collision happens, a secondary position must be calculated. When secondaries are placed in another block within the dataset, another IO must occur to retrieve needed data. Since IO to disk is the slowest type of access, we want to avoid this at all costs.

02:10 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 05, 2018

One Alternative to $1 Million of 3000 Costs

[Slide: Charon Portfolio]
In a webinar this week, Stromasys made its case for how shutting down HP's 3000 hardware can reduce an IT budget. Using data from Gartner analysts and other sources, the company estimates that downtime costs companies $1 million per year on average. Any alternative to 15- to 25-year-old servers is a good shot at making the future more stable.

There still hasn't been a computer system built that will never fail. Hot-swap backups with automatic server failovers were never a big part of the 3000 datacenter experience. If you had to handicap which server was a likely failure candidate, HP's MPE/iX hardware would give you short odds of failure. In this case, short odds are not good news.

One million per year in losses is a big enough number to get the attention of a corporation's C-level. It's the same number, coincidentally, that Stromasys used this week to describe the costs of migrating MPE/iX apps. The text circled in the slide above "implies investments of $1 million+" for migrations.

These millions, lost through downtime or surrendered in datacenter budget, are averages. Smaller 3000 customers may not approach the $1 million in yearly lost revenues. Migration costs track closer to that number, but they're a one-time hit. The alternative is Charon, of course. During the webinar we learned that an additional HP market is coming online to use Charon. HP's Unix PA-RISC servers will be the latest Stromasys virtualization segment, according to Dave Campbell.

There's no specific release date for Charon's HPA-9000 version yet. Such a product has been hinted at for a long time and rumored to be in development more recently. Stromasys needs the cooperation of the system manufacturers — HP and Oracle, to be exact — to bring out a virtualization of the old vendor hardware.

The news of a new virtualization marketplace tells us that replacing aging hardware with virtualized systems running on modern iron is a growing business. It may not even be growing as fast as it could. One customer on the webinar asked about IBM AS400 (Series i) virtualization. Campbell noted that the vendor's participation is essential to making another edition of Charon.

IBM may not be ready to help AS400 users for some time. Their POWER-based iron is still being built and sold. The only hardware still being built to support MPE/iX today is Intel x86-based, the platform for the Linux+Charon solutions. There can be millions of reasons why newer hardware plus the cost of Charon software would leave a customer miles away from failures. Charon isn't inexpensive, but when compared to $1 million for an MPE/iX user, it may feel like a better choice for a legacy budget.

12:13 PM in Homesteading, Migration | Permalink | Comments (0)

December 03, 2018

HP 3000 dream tracks close to virtualization

An HP 9000 HP-UX virtualization product is in development. In that kind of design, a single Intel server with enough computing power (concurrent threads) could host both HP 3000 and HP 9000 virtualizations. HP had the same objective almost 20 years ago for its largest enterprise platforms.

Early in 1999 HP's Harry Sterling spoke at an all-day user meeting in the UK hosted by Riva Systems. Sterling, who'd retire before the end of that year, said a multi-OS server was within HP's vision for the 3000 and 9000 customers.

Sterling mentioned the possibility of running MPE, NT and Unix concurrently on the HP 3000 "sometime in the future." There was even the possibility of a "hot-swap" version of MPE alongside the production system. John Dunlop reported for us at the time.

The passing mention indicated that separate processors in one box would be able to run different operating systems. Sterling did suggest that a hot-swap version of MPE might be a valid use, so that there would be some redundancy with the live operating system.

This seemed to lead to the subject of more uptime. From these comments, it’s possible that HP is looking at allowing online changes to a hot-swap system and then just switching it over to achieve the so-called “magic weekend.” This is a system upgrade that occurs seamlessly and transparently to both the users and management.

That would be a dream not realized. Hot-swap didn't make it any further into the customer base than architect discussions. Sterling noted that in 1997 customers expressed concern about the future of the 3000. To counter that feeling and give the customers more confidence, he outlined in 1999 a five-year roadmap for the 3000.

Marketing was on board as well in that year that led to Y2K. It would take another 13 years before a multiple OS host for MPE/iX would emerge.

Marketing Manager Christine Martino told that crowd about HP's commitment to the 3000 by hiring more MPE engineers and setting up new centers of expertise. There was also to be a higher investment in training and education. Finally, she stated that HP was running over 650 HP 3000s to power core Hewlett-Packard corporate business applications.

By now it's been nearly 20 years on the road to the promised land of one host for Unix and MPE. A big enough Intel server — and it might well be one that carries enterprise-iron costs — could do what Sterling proposed before he retired. All it took was the cooperation of HP's engineers in releasing internals so PA-RISC systems could be emulated for Unix and MPE/iX. Those are legacy systems by today's standards, but at least there's an independent software vendor to make the twin-OS dream a possibility.

01:28 PM in History, Migration, News Outta HP | Permalink | Comments (0)