Graduate to more HP 3000 performance

A commencement address to those ready to matriculate to faster HP 3000 systems

By Mike Hornsby

Speed costs money. How fast do you want to go? You want to get all of the speed out of your current HP 3000 configuration. There are ways of trimming existing costs and applying the savings to improved performance. This article provides insights and methodologies for saving on ongoing costs and improving interactive response times and batch throughput.

Speed traps

If you’re using DTCs, are the terminals running at 19.2 kbaud? Do you have more than four disks per HP-IB or single-ended SCSI interface? Do you have a single LANIC for all network traffic? Have you switched to Jetdirect or LPD-based print sharing?

Hardware options

Maximize your memory configuration/interleave. Switch to Web or socket interfaces to reduce ‘hpusercount,’ allowing a faster server with a smaller ‘hpuserlimit’. Add disc drives in user volumes to spread transaction management. Add or upgrade tape devices to speed up backup operations. Split your fourth-generation language and reporting licenses into developer and runtime licenses, and put the developers on a separate, limited-user system.

Database management

Make sure that datasets are blocked efficiently to save disk space and maximize serial prefetching. If you’re performing chained reads, repack on a periodic basis. Place your most heavily accessed files on Fast/Wide SCSI disks. Use HWMPUT or a serial repack to avoid a deleted-entry chain. Split automatic masters for small details from those associated with large details. Watch for master capacities that cause clustering. Avoid master dynamic expansion except for emergency overflows.
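One quick way to watch master capacities and entry counts is QUERY’s FORM command. A minimal sketch, assuming a hypothetical database ORDDB with a read password of READER:

:RUN QUERY.PUB.SYS
>B=ORDDB
PASSWORD = >>READER
MODE = >>5
>FORM SETS
>EXIT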

Process management

Tune your CQ=152,200,200,2000 if you’re using VPlus. Otherwise, tune CQ=152,200,100,2000. Watch for socket applications such as ODBC always running at 152. Use NSCONTROL SERVER=MIN,MAX to preallocate server processes.
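In command form, the settings above might look like the sketch below. The CQ parameter order and the NSCONTROL values are assumptions (the numbers simply follow the SERVER=MIN,MAX form), so verify both against the TUNE and NSCONTROL documentation for your release:

:TUNE ;CQ=152,200,200,2000
:NSCONTROL SERVER=4,8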

Logging

If you’re using mirroring or RAID 5, reevaluate the requirement for logging. If you are logging, make sure that proper procedures are followed after system aborts.

OCTCOMP/Allocate

Object Code Translate and allocate Compatibility Mode programs such as FCOPY and any other user CM programs. Install the Native Mode versions of QUERY and QUAD if you use them.
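A sketch of both steps for FCOPY follows. Passing the program file to OCTCOMP in an INFO= string is an assumption; check the OCTCOMP dialogue on your release before relying on it:

:RUN OCTCOMP.PUB.SYS;INFO="FCOPY.PUB.SYS"
:ALLOCATE FCOPY.PUB.SYS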

Lot-o-spoolfiles

If you’re using spooling utilities, minimize the number of spoolfiles on the system. Use SET STDLIST=DELETE in trivial batch jobs. If a program produces a large stdlist, send it to a circular file. At reboot, the system must recover any spoolfiles that were open at the time of a crash. This used to be done prior to SYSSTART, and now runs as a background process — but it can still hog the system for a while. Also, doing a PRINT ;START=-20 on a large spoolfile causes all records to be read, because spoolfiles have variable-length records.
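A sketch of both techniques in one trivial job (the program name CHATTY.PUB and the log file BIGLOG are hypothetical, and the ;STDLIST= redirection on RUN should be verified on your release):

!JOB JTRIV,USER.ACCT;OUTCLASS=LP,1
!COMMENT Build the circular file once beforehand:
!COMMENT   BUILD BIGLOG;REC=-80,,F,ASCII;DISC=100000;CIR
!SET STDLIST=DELETE
!RUN CHATTY.PUB;STDLIST=BIGLOG
!EOJ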

Backups

The backup is usually the single longest, most intensive job. Many times, slow response during the day can be attributed to an online restore. Use the store listing or BIGFILES to identify the largest files. It is a shame to waste time backing up memory dumps or month-end work files in each full backup. It is a crime to wait for those files to roll in during a re-install!
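STORE’s exclusion syntax helps keep such files out of the full backup. A sketch with hypothetical filesets (substitute the dump and work-file groups that BIGFILES or the store listing turns up):

:FILE T;DEV=TAPE
:STORE @.@.@ - @.DUMP.SYS - WORK@.@.FINANCE; *T; SHOW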

Free tools

Some tools are available for free. At www.beechglen.com: BIGFILES, a utility that lists the files on the system in descending order by size, with a cutoff; and QPLUS, a command file script that approximates GLANCE or SOS.

At www.allegro.com: FILERPT, a utility that summarizes file usage from system log files. SYSLOG, a utility that allows dynamic switching of system log file events. RAMUSAGE, a utility that describes memory content better than GLANCE. DBLOADNG, a utility to report on database efficiencies.

Application tips

Don’t use PIC 9 fields in COBOL for arithmetic operations; the overhead of ASCII-to-binary conversion is substantial. Do use VPlus forms caching. It is easy to implement in the COM area, and most terminals and emulators support it.

Use DBUPDATE in place of delete/put; this CIUPDATE feature can dramatically reduce the overhead of modifying a search or sort item. Do use mode 6 DBGET with a date cutoff. Many programs read down the chain selecting for date >, until the end of the chain. It is usually much faster to read up the chain, stopping at date <.

IMAGE does not prefetch for either mode 5 or 6 DBGETS.

Do use Robelle’s Suprtool to extract/sort work files. Many systems have one or two ‘killer’ batch reports. These usually sequentially read a master and chain to a detail producing a work file that is then sorted and passed to a report or output section. Suprtool can usually produce the same work file in one-tenth of the time.
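A sketch of such an extract in Suprtool (the database, dataset, and item names here are hypothetical):

:RUN SUPRTOOL.PUB.ROBELLE
>BASE ORDDB,5,READER
>GET D-ORDERS
>IF ORDER-DATE >= 20000101
>EXTRACT CUST-NO,ORDER-TOTAL
>SORT CUST-NO
>OUTPUT ORDWORK,LINK
>XEQ
>EXIT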

Do use IMAGE b-trees to replace KSAM look up files. IMAGE now has the capability to incorporate b-tree lookups. It is very easy to implement and very transparent to the application code.

Other cost-saving options

Have leased-line and ISP agreements re-quoted annually. Just by asking, you can usually save 10-15 percent. Review hardware and software maintenance agreements annually. Look for items that you already have spares for — terminals, DTCs, and tape drives — or those that can be replaced inexpensively.

Eliminate OpenView. Many sites have this product to allow DTC switching, but that capability was bundled into MPE/iX 5.5 as host-based DTC control, along with host-based Telnet. Look to grow operations and user staff into programmers. Which would you rather have: a great programmer who doesn’t know the business, or a fair programmer who already knows the business and the people well?

Mike Hornsby’s company Beechglen Consulting (513.922.0509) provides contract administration, contract programming, and customized system and application software support, specializing in HP 3000s and promoting an “Anyone can call about anything” philosophy. A former HP senior systems engineer, he co-founded Beechglen in 1988.


Understanding how to use Mirrored Disk on HP 3000s

HP’s add-on product provides vital disaster recovery, but you’ll need advice on set-up, disk errors and split-volumes

By Andreas Schmidt

Mirrored Disk/iX is an optional subsystem for HP 3000 mission critical systems, and it’s vital to guaranteeing high availability of your company’s data. This article explains the fundamentals of HP’s Mirrored Disk/iX: how to set up volume sets, how to deal with disk errors and how to establish a split-volume backup.

The complete resource on this subject is, of course, the HP manual for Mirrored Disk/iX (User’s Guide, HP Part No. 30349-90003). Some of what follows is based on it. But we all know that a summary sometimes helps more than reading through a complete HP manual — especially if you are under pressure in a delicate situation.

What are mirrored disks?

Mirrored Disk/iX is a subsystem for HP 3000s which needs to be ordered separately. The installation follows the normal subsystem Installation Process as documented in the Installation Manual. Mirrored Disk/iX is designed to work only with non-system volumes. To make it very clear: Mirrored Disk/iX does not support mirroring the HP 3000’s system volumes.

It supports disk drives that use HP-FL cards or NIO SCSI cards. But mirrored partners must be the same model of fiber-link or NIO SCSI drive, and mirrored partners must be connected to different HP-FL or NIO SCSI cards. Otherwise a single point of failure would still exist.

Mirrored disks are designed to provide high data availability by automatically maintaining identical information on two partner disks. When an application writes to a disk, disk mirroring causes the information to be written to both drive partners. When an application reads from a disk, there are two places to access the requested data. This may give performance benefits on large systems which do a lot of reads for queries but only a few writes to the same data. Applications running on the system are unaware that disk mirroring is present.

Once disk mirrors have been established using the VOLUTIL utility, a mirrored disk acts just like any other disk connected to the system, until a disk failure occurs. If either disk of any pair fails, normal system operation continues. When the partner is ready to resume operation, the system copies data from the good disk, bringing the pair to a consistent state, and normal mirroring resumes.

Once mirrored disks have been installed, you can use them like any other disks connected to the system. Additionally, you can perform split-volume backup of mirrored disk data while still accessing the data.

So, Mirrored Disk/iX supports the following features:

High data availability: System automatically maintains identical information on two partner disks. Users continue to access data if either disk of any pair is disabled or under repair.

Reduced downtime: Users continue to access data while system performs file backup.

Disk failure recovery: System detects failed drive, continues to run application, and discontinues mirroring until drive is repaired.

Resume mirroring: System allows for the removal of the failed drive from the pair and the mounting of another drive in its place while the system is running, then copies data to the new drive and resumes disk mirroring.

Data consistency: System writes data to both partners of a mirrored pair, so data is always consistent, even during the repair process.

The installation of Mirrored Disk/iX is easy:

• Use the SYSGEN utility to configure the disks into the system (see the sketch after this list).

• Install the disk hardware.

• Boot the system with the new configuration.

• Use the AUTOINST utility to install the mirrored disk software.

• Use the VOLUTIL utility to create a mirrored volume set.

• Move files, if necessary.

• Set up accounts and groups.

The subsystem installation of Mirrored Disk/iX enhances the HP 3000’s VOLUTIL commands. HP provides both the VOLUTIL and MIRVUTIL commands to make life easier for the System Manager; once Mirrored Disk/iX has been installed, their functionality is the same.

Please be careful: the Create Volumes (CV) capability is required to use VOLUTIL to initialize mirrored volumes. You also need it to issue system commands from the system console to perform split-volume backups.

How to set up volume sets

Assuming that the subsystem has been installed and the hardware has been plugged in and configured, the new volumes will be in state SCRATCH or UNKNOWN. Verify this via:

:DSTAT ALL

LDEV-TYPE          STATUS         VOLUME (VOLUME SET - GEN)
30-079370          SCRATCH
31-079370          SCRATCH
32-079370          SCRATCH
33-079370          SCRATCH


Now invoke :VOLUTIL or :MIRVUTIL and create the new set’s master disk as a mirrored pair:

volutil: NEWMIRRSET PROD_SET:MEMBER1 (30,31)

and verify via DSTAT:


volutil: :DSTAT ALL

LDEV-TYPE          STATUS         VOLUME (VOLUME SET - GEN)
30-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
31-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
32-079370          SCRATCH
33-079370          SCRATCH

Now add volumes to this mirrored set. These volumes must be in state SCRATCH or UNKNOWN.

volutil: NEWMIRRVOL PROD_SET:MEMBER2 (32,33)

and check via DSTAT again:

volutil: :DSTAT ALL

LDEV-TYPE          STATUS         VOLUME (VOLUME SET - GEN)
30-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
31-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
32-079370          MEMBER-MD      MEMBER2 (PROD_SET-0)
33-079370          MEMBER-MD      MEMBER2 (PROD_SET-0)

In VOLUTIL, the command SHOWSET with the MIRROR option (this option has been installed via the added subsystem) will show the state of the mirrored set:

volutil: SHOWSET PROD_SET MIRROR

Volume Name   Vol Status     Mirr Status          Ldev      Mirr Ldev
MEMBER1       MASTER        NORMAL                  30       31
MEMBER1       MASTER        NORMAL                  31       30
MEMBER2       MEMBER        NORMAL                  32       33
MEMBER2       MEMBER        NORMAL                  33       32

How to deal with Disk Errors

There are two types of disk errors: disk errors after mount (during normal operation), and disks that cannot be mounted (already defective when the box boots).

Disk Error after Mount: If a disk has a problem after the mount, the system will continue to work automatically with only one disk of the affected pair.

Possibilities for Disk Errors are:

Disk reports ERRORS — The disk immediately becomes DISABLED, and the system continues to work without mirroring for the affected volume set, without any interruption.

Disk does not answer any longer — The system waits about two minutes for an answer from the disk. During this period, all I/O processes are suspended. If the disk answers within this interval, the system continues to work with mirroring for this pair. If it does not, the disk becomes DISABLED and the system continues without mirroring for the affected volume.

Here’s an example: During normal operation LDEV 32 fails. The following message will appear on the Console:

?09:09/12/MIRRORED VOLUME DISABLED ON LDEV#32
?09:09/22/ACKNOWLEDGE MIRRORED VOLUME DISABLED ON LDEV#32 (Y/N)?

:REPLY 22,Y

The system will continue to run. The problem is that this message and the reply may be overlooked on big systems, so it’s recommended to have a monitor in place. HP’s OpenView Operations Center (a.k.a. IT/O OpC) is one option.

A check using DSTAT and VOLUTIL will show the following:

:DSTAT ALL

LDEV-TYPE    STATUS              VOLUME (VOLUME SET - GEN)
30-079370     MASTER-MD          MEMBER1 (PROD_SET-0)
31-079370     MASTER-MD          MEMBER1 (PROD_SET-0)
32-079370     *DISABLED-MD       MEMBER2 (PROD_SET-0)
33-079370     MEMBER-MD          MEMBER2 (PROD_SET-0)

volutil: SHOWSET PROD_SET MIRROR

Volume Name     Vol Status           Mirr Status       Ldev      Mirr Ldev
MEMBER1            MASTER             NORMAL           30           31
MEMBER1            MASTER             NORMAL           31           30
MEMBER2            MEMBER             DISABLED         32           33
MEMBER2            MEMBER             NON-MIRROR       33           32

Having repaired or exchanged the disk, re-establish the mirroring by issuing:

volutil: REPLACEMIRRVOL PROD_SET:MEMBER2 32

Check it via

volutil: SHOWSET PROD_SET MIRROR

Volume Name Vol Status    Mirr Status              Ldev  Mirr Ldev
MEMBER1         MASTER     NORMAL                  30      31
MEMBER1         MASTER     NORMAL                  31      30
MEMBER2         MEMBER     REPAIR-DEST             32      33
MEMBER2         MEMBER     REPAIR-SRCE             33      32

The repair will happen automatically. No further intervention will be needed. This process is fully transparent for applications and users of the data on the affected volume set.

Disk Error before Mount: Here’s another example: LDEV 33 cannot be mounted during the system’s startup. The following message will appear on the console:

?09:09/12/MIRRORED PARTNER MISSING FOR LDEV# 32

The system sets the “good” mirrored disk LDEV 32 to PENDING and waits for SUSPENDMIRRVOL to allow LDEV 32 to work without its mirrored partner:

?09:09/22/ACKNOWLEDGE MIRRORED PARTNER MISSING FOR LDEV# 32 (Y/N)?


:REPLY 22,Y

This volume (here: MEMBER2) will stay unavailable until either the mirrored partner (here: LDEV 33) becomes available again, or the mirror is suspended via the SUSPENDMIRRVOL command. DSTAT will show the following:

:DSTAT ALL

LDEV-TYPE          STATUS         VOLUME (VOLUME SET - GEN)
30-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
31-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
32-079370          *PENDING-MD    MEMBER2 (PROD_SET-0)

By using SUSPENDMIRRVOL, the mirror becomes suspended and the disk works without a mirror. This disk should not become faulty now — it’s a single point of failure! SUSPENDMIRRVOL works only for disks in state PENDING.

volutil: SUSPENDMIRRVOL PROD_SET:MEMBER2 32

MEMBER2 is now available again, but without a mirror:

volutil: SHOWSET PROD_SET MIRROR

Volume Name   Vol Status     Mirr Status          Ldev      Mirr Ldev
MEMBER1       MASTER         NORMAL                 30         31
MEMBER1       MASTER         NORMAL                 31         30
MEMBER2       MEMBER         SUSPEND-MIRR           32         33

Having repaired the disk (LDEV 33), the mirror must be made active again:
:DSTAT ALL

LDEV-TYPE          STATUS         VOLUME (VOLUME SET - GEN)
30-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
31-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
32-079370          *PENDING-MD    MEMBER2 (PROD_SET-0)
33-079370          SCRATCH

By using REPLACEMIRRVOL, the repaired LDEV 33 will be initialized as the mirror for LDEV 32:

volutil: REPLACEMIRRVOL PROD_SET:MEMBER2 33

You can verify this with SHOWSET:

volutil: SHOWSET PROD_SET MIRROR

Volume Name   Vol Status     Mirr Status          Ldev      Mirr Ldev
MEMBER1       MASTER         NORMAL                 30         31
MEMBER1       MASTER         NORMAL                 31         30
MEMBER2       MEMBER         REPAIR-SRCE            32         33
MEMBER2       MEMBER         REPAIR-DEST            33         32

Split-Volume Backup

Another feature of Mirrored Disk/iX is the way it can make backups. It is only necessary to back up (physically speaking) one disk’s content out of each mirrored pair, which is possible via the split-volume backup. To split a volume set, no user may remain logged on to that volume set; warn them with the command :TELL @ LOGOFF FOR BACKUP. The volume set now becomes unavailable and is split into USER and BACKUP halves:

:VSCLOSE PROD_SET; SPLIT
:DSTAT ALL

LDEV-TYPE          STATUS         VOLUME (VOLUME SET - GEN)
30-079370          LONER-SU       MEMBER1 (PROD_SET-0)
31-079370          LONER-SB       MEMBER1 (PROD_SET-0)
32-079370          LONER-SU       MEMBER2 (PROD_SET-0)
33-079370          LONER-SB       MEMBER2 (PROD_SET-0)

Now make both halves (USER and BACKUP) available via VSOPEN:

:VSOPEN PROD_SET

PROD_SET SPLITT USER VOLUME MOUNTED ON LDEV 32 (AVR23)
PROD_SET SPLITT BACKUP VOLUME MOUNTED ON LDEV 33 (AVR24)

Now the PROD_SET is available for production, but without mirroring:

:TELL @ SYSTEM IS AVAILABLE NOW
:DSTAT ALL

LDEV-TYPE          STATUS         VOLUME (VOLUME SET - GEN)
30-079370          MASTER-SU      MEMBER1 (PROD_SET-0)
31-079370          MASTER-SB      MEMBER1 (PROD_SET-0)
32-079370          MEMBER-SU      MEMBER2 (PROD_SET-0)
33-079370          MEMBER-SB      MEMBER2 (PROD_SET-0)

Start the backup with the commands:

:FILE T;DEV=TAPE
:STORE /; *T; SPLITVS=PROD_SET; SHOW

This backup is compatible with a “normal” STORE.

Having finished the backup, the split can be canceled:

volutil: JOINMIRRSET PROD_SET SOURCE=USER

SOURCE=USER tells the system to synchronize the BACKUP half from the USER half during the JOIN and the following REPAIR, so online users are allowed to continue working. The status in VOLUTIL during the synchronization will look like:

volutil: SHOWSET PROD_SET MIRROR

Volume Name   Vol Status     Mirr Status          Ldev      Mirr Ldev
MEMBER1       MASTER         REPAIR-SRCE            30         31
MEMBER1       MASTER         REPAIR-DEST            31         30
MEMBER2       MEMBER         REPAIR-SRCE            32         33
MEMBER2       MEMBER         REPAIR-DEST            33         32

For performance reasons, at most six mirrored pairs are synchronized at the same time; as soon as one of the six pairs finishes, the next waiting pair is processed.

This is an optional way to make a backup — as I said earlier, Mirrored Disk/iX is fully transparent to all applications. We’re still using TurboStore/iX online to back up our mirrored volume sets, and didn’t encounter any problem because of Mirrored Disk/iX.

Summary

Mirrored Disk/iX is a MUST for mission-critical systems to guarantee high availability of the data. In case of problems with the physical disks, the data stays available. The automatic repair processes in Mirrored Disk/iX are transparent to the users. The procedures are quite easy — but you must know them! This article should help HP 3000 System Managers keep them handy.


3000 Network Hardware: Routers and Switches and Hubs, Oh My!

HP 3000 hardware networking can be like a trip down a Yellow Brick Road

By Curtis Larsen

Auntie MAU! Auntie MAU! A Twisted Pair! A Twisted Pair!

Once upon a time, networks were as flat as the Kansas prairie, and computers on them were a lot like early prairie farmsteads: few and far between, pretty much speaking to each other only when they had to. (“Business looks good again this year.” “Yep.”) Most systems still used dumb terminals, and when speaking to anything outside the LAN, system-to-system modem connections were the way to do it.

A tornado named the Internet appeared in this landscape. It uprooted established standards and practices, swept aside protocols and speed limitations, and took us into a Technicolor networking landscape very different than what was there before.

Toto, I get the feeling our packets aren’t in Kansas anymore

Smaller companies were tossed before the tornado to eventually land and quickly begin growing again in the new environment. Large companies like IBM, HP, Digital, and Microsoft, who were rooted and established in their own proprietary standards (it sounds like an oxymoron, but it’s true) survived by generally ignoring the howling winds. Eventually, munchkin-like, they all came out to see what the general fuss was about, and found that a house-sized chunk of change (pun intended) had landed.

Networking, and the TCP/IP protocol in particular, had truly arrived in style, bringing strange new applications and markets. Serial connections and proprietary networking (“What do you mean we don’t need SNA to connect to the Wichita office anymore?”) gave way to a new kid on the block. And her little dog, too.

Follow the Yellow-Colored-Cable-and-Labeled-at-Both-Ends Road!

So here we are, sitting in a strange new networking land of strange new networking things. And for some of us, trying to understand the whole of it all — especially in relation to “legacy” systems like the HP e3000 — is a little daunting. What are all these networking black boxes we plug the system into, and what do they all do? How can they make life better? (How can they make life worse?) If you’re not sure (or just plain curious) read on.

We’re off to see the wizard — this wonderful wizard of ours!

The networking wizard of your HP e3000 system is a program named “NMMGR.” It allows you to define your networking hardware and how the system creates connections with it. But what things can you define? Before we talk about connecting to things, we should probably take a crash course in the things you’re connecting to.

Which path do I take? Well, you could take this one, that one, or both...

The basic networking boxes you’ll connect to are hubs, routers, switches, bridges, and gateways. Oh My. Let’s take them one at a time.

Since life is like an analogy, I’ll stretch one for the hub to go like this: If your network traffic is like water through a hose, then a hub is like a splitter, allowing multiple exits. Generally speaking, a hub simply splits the traffic from the “incoming” line into each connected port “out.” This is cheap and simple to set up if you don’t have a lot of connections, but like too many divisions on any hose, too many hubs will make the end connections anemic. The fewer connections the better, so most hubs have no more than 24 ports total.

Obviously, to make things better for all connections in larger networks, more “water pressure” was needed — and the switch was born.

Pay no attention to that man behind the curtain!

No, I’m not talking about the System Administrator. A switch looks very similar to a hub, but the similarity ends with appearance. Again, if your network is like a stream of water in a hose, then your garden-variety switch is like a water tank, adding pressure to the line. Huge water tanks are placed at the heart of a city’s water system, while small tanks are placed on buildings. At the heart of most networks — tended by a cooing Network Administrator — is a core switch (the main tank).

Additional “work group” switches (building-sized tanks) can be used in wiring closets for special-need areas of the network. So, although a hub and a switch both offer multiple connections, the resulting “streams” have vastly different origin and force. Now that we’re one big speedy networking family, no one minds if it all fails, right? No? Well love can build a bridge, and so can electronics.

My Network’s crashing… What a world! What a world…

Having all your network connections on one physical segment isn’t too grand — especially when it fails. By segregating physical networks and then “bridging” them together, you ensure that in the face of adversity, some people can still laugh at the ones who can’t work. Aquariously speaking, a simple bridge is like a valved pipe between two water systems, passing water in both directions, and shutting one side’s valve if that system “loses pressure” (goes down). You say you want to route water based on content? Well then.

This here’s the ‘Packet of a Different Header’ you’ve heard about

Simply put, a basic router is an intelligent (logically) one-way bridge, examining network data information and very quickly sending data packets down one line or another. In our epic analogy, a router could be a thermal valve, forcing only cold water to flow this way, and hot water that way, preserving us from the heartbreak of tepidity. Since the router has to work quickly, it usually works at a lower level than other equipment does, caring less about content and more about destination. You say you’d like to exchange hot water with someone else? You’d like the gate to swing both ways?

There’s no place like the home network! No place like the home network…

A router is excellent at sending packets from Here to There (and not necessarily Back Again), but nothing beats the gateway for two-way communication. A gateway takes data from one network and sends it to another, even re-creating the data packet on the other side, if need be. To stretch our analogy to its limits, we could say that two different water systems exist, having the same characteristics, including temperature. One system is chlorinated, while the other is not, and so simply allowing the water to pass unmolested would be an issue — one system would become diluted, and the other exposed. What we need is a filtration pump that allows the water to be pumped in either direction, adding chlorine one way, and taking it out in the other direction.

Connecting to the Internet requires a gateway, since your home network doesn’t “know” how to reach something out there. What it does know is how to hand off a data packet destined for “Not Here” to a gateway for processing. The gateway in turn checks the packet’s address and sends it to the best possible network closer to the packet’s Ultimate Destination, re-labeling the packet as it does so, and putting its own address in the packet’s “return address.” If the packet’s Ultimate Destination isn’t on the new network either, then the gateway there does the same thing until the packet finally hits the Emerald City.

On its way back home, because of all the “return addresses” it picked up, the packet passes back through each gateway that it came from until, clicking its little ruby slippers, the packet realizes it is in no place but home.

Because of its intensive examination work, a gateway is almost always dedicated to its task, especially on larger networks. It was the gateway’s filtering abilities that led to its use as a firewall, protecting networks by purposefully filtering and/or denying different types of connections and data. But the firewall is a topic all its own — just make sure you use one!

And you were there, and you, and you.
Oh Auntie Carly, there’s no place like HP!

So there you have it — Networking Devices 101. Now that you know what you can connect your e3000 to, you can come up with some ideas on how to use them, and answer questions about what to connect to. Should an e3000 be connected to a hub, or to a switch? (Switch!) Does a printer need to be connected to a hub or a switch? (A hub will usually be fine.) Should I use my e3000 as a gateway? (I think not.) Should the physical part of my network the e3000 is on be bridged? (Yes.) Can I configure a gateway and connect my e3000 to the Internet? (Certainly. But make sure you have a firewall first!) Can I use other protocols or connections besides TCP/IP and Ethernet? (Absolutely! X.25, SNA, FDDI, and a number of other connections are available, but they change a lot, so check with your favorite sales rep first.)

So long as HP continues to expand and extend the capabilities of their workhorse system, the e3000 will continue to be the perfect business computer choice. As everyone who uses and loves it knows – stick with it and your business just keeps flowing along, leaving your competitors all wet.


Now, HP's Unix transitions to legacy

Hewlett-Packard Enterprise has issued dates to terminate support for two releases of its HP-UX Unix environment: next year will mark the end of HPE’s support for HP-UX 11.11 and 11.12. The final version of HP-UX, 11.31, is already in the Mature Product Support (MPS) category, in which only crucial bugs are repaired. HPE adds that this support level is “without sustaining engineering.”

MPS is a milestone that the MPE/iX operating system reached in 2007. In this state, the operating system is frozen for features. Legacy managers in the HP 3000 market found a silver lining in frozen status: fewer elements of the MPE/iX environment were likely to break, since changes no longer found their way into the base software. MPE/iX was already a reliable OS; taking it into Mature support made it even more stable.

HP-UX is another matter for a legacy manager. Unix, touted as the replacement for HP 3000 datacenters, carries a riskier reputation. Security flaws are a persistent concern in an OS that powers so much of IT. The more Unix running in the world, the less secure it becomes.

The year 2022 ends HP’s active support for HP-UX, but the shift away from the vendor’s teams isn’t stopping legacy use. This legacy milestone usually arrives while independent support companies take on the vendor accounts relying on the OS. Change is inevitable, but changes to legacy IT are fewer. Losing vendor support may not even mean different experts will take on the work. At VMS Software Inc., some support team members shifted from HPE jobs to work at VSI.

Indies to the rescue

In the HP 3000 marketplace, Beechglen Development, among other companies, took up HP 3000 support. Just about the time HP announced in 2011 that it was migrating the best HP-UX features to Linux, MPE/iX support from HP ended. Beechglen remains a support resource for legacy IT in both the HP 3000 and HP 9000 communities. The company uses NICKEL, a program to assess the state of software on an HP-UX server.

This Network Information Collector, Keeper, and Elaborator is “a shell collection script from Hewlett Packard,” Beechglen explains. It’s been maintained and modified through the decades by Beechglen. A NICKEL script runs on HP-UX systems 10.20, 11.0, 11.11, 11.23, and 11.31.

NICKEL runs serve as a review and reference for general system health. “The script also provides aid after system events for troubleshooting,” Beechglen adds, “and getting a system back up and running in as little time as necessary.”

Beechglen, Allegro Consultants, and other companies keep supporting legacy environments after the vendor leaves the market. These companies tout expertise from a “team who eats, breathes, and sleeps HP-UX and MPE for 33-plus years, in the most demanding environments anywhere in the world.”

HP’s Unix is entering the era that MPE/iX visited 14 years earlier. Like MPE/iX, HP-UX has gained an extra year of vendor support. System vendors will continue to collect support dollars until the latest possible date. There’s plenty of value in legacy IT through all the years after the vendor stops selling it.


Making Emulation Serve Migration

Call-compatible subroutines, utilities help Unix behave like the MPE/iX environment

By Charles Finley

Let's look at methods to migrate applications from HP 3000s. One approach is making subroutines and routines call-compatible, to let existing business logic work on other platforms.

Various utilities and subroutines can help a migrated application run on the target system. Examples include the automated data structure mapping of KSAM files to an indexed file system of the target computer and the export and import of KSAM data. This can be accomplished, for example, using either the Informix C-ISAM or bytedesign’s D-ISAM file system.

An MPE-compatible print queue manager that also operates on Unix platforms can enhance the limited printer and print job control on Unix systems. This manager can provide an emulation of the MPE spooler in addition to functionality such as:

• Printer management by forms and paper stock

• Operator control and intervention

• Multiple and partial file printing

• Post-submission modification of print job characteristics

• Print file review and display

• Physical and virtual printer support

• Dynamic modification of printer characteristics

• Application Program Interface for direct printer control

• Automatic and transparent network operation

In addition, a batch job manager can provide functionality that is otherwise very limited on a Unix system. The batch job manager presents a centralized and more powerful point of control over the batch environment and gives capabilities very similar to those on MPE.

Porting of non-COBOL applications

This series has emphasized COBOL migration, since that is the primary development language on the HP e3000. It is also possible to port Fortran, Pascal, C, SPL, RPG, and BASIC third-generation languages, as well as fourth-generation languages Powerhouse and Speedware.

Fortran 77 and Pascal porting are dependent on the capabilities of the compilers and their runtime libraries on the target platform. Both Fortran and Pascal have some hidden dependencies on the MPE file system that must be addressed prior to porting. C/iX is perhaps the most portable language on the HP e3000.

There are two versions of BASIC on MPE: Business Basic and Basic/3000. They are both somewhat of a challenge to port because they are unique dialects and also have MPE file system and intrinsic dependencies built in. SPL is surprisingly portable thanks to the SPLASH compiler from Allegro Consultants (www.allegro.com). It is capable of changing SPL to C code. RPG should be translated to either C or COBOL in order to move it to another platform. Finally, products from the fourth-generation language vendors Cognos and Speedware are perhaps among the easiest applications to port. They run on a number of different platforms and their code is quite portable.

It is sometimes desirable to move away from some of the older languages to one that can be more easily enhanced or supported. Translators either exist or can be built to translate many of the third-generation languages to C, C++, Java, or even Visual Basic. Moreover, translators could potentially be built to translate some of the more obscure discontinued languages such as Business Report Writer to C or Java. There has also been some interest expressed in translating Powerhouse or Speedware code to Java or C++.

Optimizing terminal operation and screen management

The standard migration procedure for VPlus form files is to convert them with the same character-based look and feel. Migration of VPlus forms files provides a significant opportunity for application enhancement. The block-mode look and feel of VPlus is an area in which modernization to a GUI and its capabilities can give positive benefits. Several packages are available to accomplish this on the HP e3000. Migration to Unix, however, involves:

• Automated translation of the VPlus forms files to a format readable on the target platform.

• Preservation of the editing specifications for these forms.

• A management utility to maintain and enhance these migrated forms on the target platform.

• A call-compatible library of VPlus intrinsics.

Since the goal is to achieve as close to a 100-percent automated migration as possible, all possible editing specifications and editing constructs must be supported. The call-compatible library of intrinsics must support all VPlus functionality to avoid any manual changes to the COBOL code.

Terminal support on Unix is handled by the termcap and terminfo utilities. The termcap scheme was developed to support vi, the screen-oriented visual display editor (or very interesting editor) for Unix. The termcap file contains descriptions of the features supported by the terminal (how many lines and rows, whether the terminal supports backspace, etc.) and ways to make the terminal perform certain operations (clear the screen, move the cursor to a given location, etc.).

As more and more terminal types appeared, terminfo and its associated curses library were developed. Terminal descriptions in terminfo are essentially compiled versions of a textual description and can be located faster at run time. Again, terminfo performs typical operations (clear the screen, move the cursor) on a wide variety of terminals. The curses library provides functions that give added ability, such as setting raw mode and setting echo on and off.

The limitation of curses is that it was designed for character-based terminals, while today the trend is toward pixel-based graphics terminals supporting graphical user interfaces. Curses screen performance also has some limitations in terms of unnecessary screen clearing and cursor movement.

Screen management technology has been enhanced for character-based terminals to provide high levels of performance. This involves low-level routines to use escape sequences to control cursor positioning, highlighting, graphics and line drawing, and display of data. These routines are designed to maintain logical buffering of before and after images of the screen, as well as optimization of screen attributes, screen positioning, and data display and capture.

The implementation of a GUI solution for VPlus screens can involve the use of advanced terminal emulators running in a Windows environment as a front end to the character-based IO and screen control routines. With these emulators, it is possible to incorporate capabilities such as hypertext help and copy and paste. Modernization involves the use of window objects such as message boxes, dialogue boxes, text boxes, list boxes, scroll bars, tool bars, and image displays.

The Unix COBOL vendors have integrated GUI screen developers and managers into their compiler products as front ends. This can involve a fair amount of reengineering and time to produce a solution if there are a large number of VPlus forms files to migrate and no automated tool.

Finally, solutions now exist to migrate VPlus forms files automatically to advanced GUI management systems. These are graphical PC-based client-server application development systems. Migrated forms will run as true window clients on PCs under Microsoft Windows or on X terminals and workstations under OSF/Motif. These tools clearly will give the most advanced modernization capabilities and functionality available. With their development interfaces, text/input fields can be replaced with typical graphical user elements. The front end will automatically map the appropriate GUI objects to VPlus screen elements.

Conclusions

Tools are available to migrate existing MPE COBOL II applications to Unix or Windows, as well as more modern COBOL development environments. The use of these tools presents the best opportunity for a successful migration. The suggested strategy is to migrate using automated tools and utilities. This approach minimizes risk and development and deployment time and costs.

If there is a need to enhance the application, it is best to first port the entire application as it is used today. This provides an immediate test environment for the newly changed system and allows for parallelism in testing and future development.


Putting Job Queues to Good Use

By Neil Armstrong

Our development environment at Robelle is quite unusual, in that we have a single job stream which launches all of the necessary compiling and testing steps for each product. So if I want to compile and run the entire test suite for Suprtool, I just have to issue a single stream command, :stream jrelease.suprtool

If I want to run the Qedit test suite, I just have to stream the Qedit jrelease with the command :stream jrelease.qedit

There’s a problem though: the individual job streams that test each product can only run single-threaded. Each job must complete before the next one begins, so we have to keep the job limit exactly right all the time. This also means that we can’t run the Qedit and Suprtool test suites simultaneously.

This has always been a problem, as sometimes people or jobs alter the limit incorrectly, which means that multiple jobs stream at the same time, causing test jobs to fail and results to be incorrect. This can mean losing a night’s “productive” testing, as the job streams are generally streamed in the early hours of the morning, after the nightly backup. So if you had just made a major change to a module of Suprtool, you wouldn’t know the impact of that change for another day.

Mike Shumko suggested that we try to implement jobq’s for our environment, to address this problem. Without knowing what jobq’s really were, I naturally volunteered for the job in the hope that I could alleviate this dependency we had on the job limit.

What I hoped jobq’s would do

Without even reading about jobq’s I thought they were a way to have job streams operate in specified queues, and thus be independent of jobs in other queues and the main job queue.

The Commands

To start using jobq’s immediately, you need only do two things:
- create the jobq with the newjobq command, and
- change the !job card in your job stream to use the jobq that you built.

So to create the jobq for the Suprtool job streams I used the newjobq command. I only wanted one job at a time to run in this jobq, so I specified a limit of 1:

:newjobq suprtool;limit=1

Then I changed all of the job cards to have the “;jobq=suprtool” at the end:

!job jtest01,user.acct,suprtest;outclass=lp,3,1;inpri=7;jobq=suprtool

Since I have MPEX, I used the MPEX %qedit command to make a global change to all jobs:

%qedit jt@.suprtest,append “;jobq=suprtool” “!job”

This will open each file that qualifies and append the jobq specification to the job card line. Voilà: Suprtool could now run in its own jobq independently!

There are some other useful commands associated with jobq’s:

:listjobq

JOBQ        LIMIT   EXEC    TOTAL

HPSYSJQ     3500     8       10
SUPRTOOL      1      0        0
QEDIT         1      0        0
RANDMISC      1      0        0

The purgejobq command (:purgejobq suprtool) will allow you to remove any jobq’s that you’ve defined.
Showjob ;jobq will show you the jobs along with the jobq in which they are running. A detail line from that output looks as follows:

:showjob ;jobq

JOBNUM  STATE IPRI JIN  JLIST JOBQ     INTRODUCED    JOB NAME
#J4567  EXEC   10   S   LP    SUPRTOOL WED 11:48A  JTEST01,USER.ACCT
                                                

You can also alter the jobq that a job is running in with the altjob command, by typing :altjob #j4567;jobq=qedit

The Practice

Since it took me only a few commands to create and change all the jobq’s on our development system, I had everything changed to take advantage of the jobq’s in short order.

So I tried running jobs from the Qedit and Suprtool test suites at the same time. I quickly discovered that each jobq requires that a slot be open in the global jobq in order for a job to run. I found an explanation in the newjobq command documentation:

“The global limit takes precedence over individual queue limits. That is, even if a jobqueue has a slot available, if the overall limit has been reached, jobs have to wait till one of the jobs finish or the global limit is increased. When a global slot becomes available, the next job is picked from among the eligible jobqueues (those which haven’t yet reached their individual limits).”

I hadn’t expected this; however, it merely meant that I needed to expand the job limit to some huge value. In retrospect, I should probably have created a separate jobq for our regular background jobs, like inetd or the apache webserver. That way I could create a small job to periodically check that queue, to ensure that all the required background jobs are “up,” and take appropriate action if a job has failed.
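A sketch of such a checker, assuming the JOBCNT CI evaluator function of later MPE/iX releases (the job and file names are hypothetical):

!JOB JWATCH,MANAGER.SYS;OUTCLASS=LP,1
!SET STDLIST=DELETE
!COMMENT Restream inetd if no matching job is currently executing.
!IF JOBCNT('JINETD,MANAGER.SYS') = 0 THEN
!   STREAM JINETD.NET.SYS
!ENDIF
!EOJ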

Maintenance

To my surprise, I found that after a “start norecovery” on this system, all my jobqs were missing. Again, the newjobq command help revealed what had happened:

“The job queues persist across reboots, provided a START RECOVERY is done. Any other system starts will cause the job queues to be deleted and they will have to be created again. This command is available in a session, job, or in BREAK. Pressing [Break] has no effect on this command. This command is not allowed in the SYSSTART file.”

We have a command file called Startall, which is used to start all of the system jobs, so I put the newjobq commands in it to ensure that all of the jobq’s are built with the proper names and limits:

setvar hpmsgfence 2
newjobq suprtool;limit=1
newjobq randmisc;limit=1
newjobq qedit;limit=1

This way I am assured that the jobq’s always exist when we restart the system.

Problems

Personally, I have found no unexplainable problems with the new jobq feature; however, recent traffic on the 3000-L mailing list did showcase this query from David Knispel:

“We ran out of disk space over the weekend. Now my JOBQs are screwed up. When I do LISTJOBQ for HPSYSJQ, it shows 6 EXEC but only a limit of 3 and total of 3. When I do a SHOWJOB, only 3 jobs show for this queue. I’m having the same problem with other queues also. Any way to fix this without bouncing the system?”

To which Richard Bayly from HP responded:

“The patch you are after is: MPELXC2B - 6.5; MPELXC2D - 7.0; MPELXC2C - 6.0 but superseded by MPELXL7A.”

Are jobq’s for you?

In conclusion, I do find the new jobq feature quite valuable, as it gives me a tool to break up my jobq’s logically if not physically. I am able to manage some groups of jobs more easily and have more jobs working concurrently, getting more out of my HP e3000. If you have problems with concurrency, or having jobs run when they shouldn’t, then perhaps implementing jobq’s is the way to go.


Use FTP transfers to shadow an HP 3000

Included software, tested job offers some disaster recovery

By Wirt Atmar

We have been using one of our HP 3000 Series 918s to shadow our primary development 918, using the MPE-to-MPE capabilities of FTP. The purpose of doing this wasn’t to eliminate tape backups, but rather to ensure that if we lost one machine — and all of the recent backup tapes that tend to lie around it — we’d have another duplicate machine, completely ready to run at a moment’s notice in a completely separate building, far enough away that the possibility of both being destroyed by fire is highly unlikely.

Firstly, the job is extremely simple — and of course free. Indeed, there’s nothing complicated about any of it, nor are there any costs other than a few minutes’ time. The job is constructed as shown below, streaming itself to run at 3 AM every morning:

!job ftpxfer,user1.acct1,group1
!ftp 192.168.1.1
user user1.acct1,group1 psw,psw
prompt
mget *
quit
!set stdlist=delete
!eoj
!job ftpxfer,user2.acct2,group2
!ftp 192.168.1.1
user user2.acct2,group2 psw,psw
prompt
mget *
quit
!set stdlist=delete
!eoj
!job ftpxfer,user3.acct3,group3
!ftp 192.168.1.1
user user3.acct3,group3 psw,psw
prompt
mget *
quit
!set stdlist=delete
!eoj
!job ftpxfer,user4.acct4,group4
!ftp 192.168.1.1
user user4.acct4,group4 psw,psw
prompt
mget *
quit
!set stdlist=delete
!eoj
.
.
.
!job ftpxfer,manager.sys,manager
!pause 90
!stream ftpjob.manager.sys;at=03:00
!set stdlist=delete
!eoj

The primary development machine is running MPE/iX 5.5 PowerPatch 7, and the shadower is running MPE/iX 5.5 PowerPatch 4. The primary machine has only 4Gb of disk; the shadower has 8Gb. There is a 10Mbit LAN between the two machines.

The backup of the primary machine onto a DDS-2 drive takes about 45 minutes to make a full CSLT/store copy of all files (approximately 3.5Gb), using hardware compression, but there’s no need for us to transfer all of the user files from one machine to the other on a regular basis. The seven jobs represented in the jobfile above transfer only the regularly active development and corporate accounts and their databases. These seven jobs represent about 2.5Gb worth of material.

My initial experiments indicated that transfers across the LAN were just about as fast as DDS/compression on backups. That still seems to be true. A complete CSLT/store backup of the primary takes about 45 minutes; the transfer of about half that material across the LAN takes 23 minutes.

There is a caveat in that 23-minute number, however. That transfer rate occurs when we allow all seven FTP jobs to run in parallel. A 10Mbit LAN is the equivalent of six T1 lines. If the seven jobs all run in parallel, we seem to consume about 60 percent of the LAN’s bandwidth — leaving the equivalent of two T1s available for all other internal traffic, and that seems more than enough. While this intense internal traffic is flowing, normal terminal communications or Web page draws seem unaffected.

However, with all seven jobs running simultaneously, the disks on both machines are being exercised to their maximums, to the point that I consider it excessive wear on the disks. They’re really clattering. Thus, we now set the job limit to just one above the background jobs, so that the seven jobs execute in single file; however, doing this increases the transfer time from 23 minutes to 64 minutes.
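In command terms this is just the LIMIT command. With, say, three background jobs always executing, something like:

:LIMIT 4

leaves exactly one slot free, so the seven FTP jobs sit in WAIT state and execute one at a time.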

A good portion of the bandwidth in a single-file FTP run (dead time) is consumed in just the negotiation between the two machines. Once the file transfer begins flowing, the data moves at a good speed. When all seven jobs are running simultaneously, whatever dead time on the LAN is left over by one job is filled in by one or several of the others. Nevertheless, the excess wear on the disks doesn’t seem worth the additional 40 minutes you save, especially at three o’clock in the morning.

I’ve tried multiple FTP syntaxes, but the one used in the job above seems by far to be the most certain. However, it only transfers MPE-named files, but that’s all we currently use, so it’s not a problem for us. IMAGE databases, KSAM files, and all “regular” files transfer with no problem. Symbolic links transfer, but seem to lose their “symbolicness” in the process. HFS files don’t transfer at all, using this syntax and the MPE version we’re currently on.

I’m also planning on upgrading both machines to 6.5. Once that’s done, we’ll be able to use Jeff Vance’s new Store-to-Disk option. If my understanding is correct, the great advantage is that we’ll keep all of the benefits of the STORE command while assembling the store material into one file that can be FTPed between the two machines and then RESTOREd on the shadow.
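If my reading is right, the flow would look something like this sketch (the file names are hypothetical, and the store-to-disk file equation should be checked against the 6.5 manuals). On the primary:

:FILE STD=DAILYSTD;DEV=DISC
:STORE @.@.DEV; *STD; SHOW

Then FTP the DAILYSTD file to the shadow and, on that machine:

:FILE STD=DAILYSTD
:RESTORE *STD; @.@.DEV; SHOW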

However, I’ve never been particularly fond of partial backups, but in order to make the STD option work well, that’s what we’ll have to do. The FTP jobs above transfer every file in the specified groups and accounts, regardless of the recency of their modifications.

One final note: FTPing 2.5Gb of files between the two machines does not have the same impact on users as a STORE-like backup does, where all of the files are marked for exclusive access for some time. Only one file at a time is locked as the MGET process walks through the file list. That attribute can obviously be either good or bad, depending on individual circumstances. Nonetheless, using the simple job that I’ve outlined above, that is the current behavior.

Overall, I’m quite tickled with how it’s working. If we should lose the primary machine, we are (nearly) guaranteed to have a perfect replicate machine, available and ready to go, in building three.


Enabling 2FA authentication on the HP 3000

State-of-the-art security is possible using tokens, agents and RSA secure servers with MPE/iX

By Andreas Schmidt

This article describes a way to build a security barrier around your HP e3000. It is based on material offered by Security Dynamics (www.rsasecurity.com), and on my own HP 3000 experiences in a project to roll out Two-Factor Token Authentication to all platforms for a big company in the chemical industry, one which chose NT and HP 3000 systems as pilots.

Background

Today it’s a risk to trust only static passwords. Studies have shown that approximately 60 percent of big companies have detected a security violation in the last two years, and more than 80 percent still use static passwords, especially to connect to the network. So it’s a good approach not to rely only on static MPE security or third-party security-solution passwords, but also to introduce Two-Factor Token Authentication.

Two-Factor Token Authentication

Authentication is the process of verifying the identity of users. On the Internet, cookies can provide automatic authentication for user names and passwords that have been registered at a site, especially with SSL (Secure Socket Layer) encryption for things like credit card information.

On intranets, authentication can happen with simple passwords, with tokens or smart cards, or with biometrics that utilize an individual’s physical characteristics, such as a fingerprint or retina.

Because passwords can be cracked or sniffed on the WAN/LAN, no method using static authentication is secure. Two-Factor Authentication requires two factors before a user gains access to a network or system:

• The person needs to have something: a token, which produces a tokencode. The tokencode changes about every minute.

• The person has to know a PIN. This Personal Identification Number is unique and valid only for the physical token the person possesses.

If an individual needs access to a server or a network component, they must log on to an Authentication Server or Agent and enter the PIN and the current tokencode. Once the token has been recognized by the logon name and the Authentication Server has accepted the code combination, that combination cannot be used a second time. A sniffer has no chance to reuse the same combination of PIN plus tokencode.

The vendors of such authentication environments (server, agents, and tokens) ensure that each authentication token is unique and it is impossible to predict the value of a future tokencode by recording prior tokencodes. Thus when a correct tokencode is supplied, there is a high degree of certainty that the person is the valid user in possession of the physical authentication token and the remembered PIN.

One environment provider is RSA Security Dynamics, offering a SecurID Server, Agent, and Tokens. The heart is the RSA ACE/Server, the authentication engine on the network. It runs on different Unix flavors and Windows NT or 2000. This server is the management component of the RSA SecurID product family, used to verify authentication requests and to administer policies for enterprise networks.

All network components that should be authenticated must be defined on this server and become RSA ACE/Agents. When a person attempts to access a protected system, this software agent initiates an RSA ACE/Server authentication session instead of a basic password session. (This is true for Unix and NT, but not in this case for MPE, as we will see later.)

The user is required to enter the user name and, instead of a static password, the current tokencode from the authentication device (the token) plus the PIN. The agent software hashes the information supplied by the user with additional data known only by the protected device (agent platform). It then transmits portions of the hashed information to the RSA ACE/Server, which approves access when the information is validated. The user is granted access, and this is logged on the Server as well.

The last component is the physical device the user possesses: the authenticator, or token. Many forms are offered. The one most used is a key fob: a device with a built-in chip and an LCD window to display the tokencode, small enough to be attached to a key ring. Others are credit-card look-alike devices. The latest edition is software for palmtops that generates the current tokencode. Whichever device is used, a person must possess it to authenticate, and all employ the same patented algorithm for encrypting and hashing the tokencode.

Authentication Advantages

With such a system installed on all devices, so that authentication happens before access is granted to an individual, the security of a company’s network improves drastically. The four primary benefits are:

• Enterprise Authentication: Only persons authorized to possess and use a token and PIN can access the devices.

• Access Control: This is essential to protect against outsider attacks or malicious employees.

• Evasion of Attack: Hackers will try to gain unexpected access to a network. The Server identifies threat conditions, reports them to its logs, and alerts the system manager.

• User Accountability: Damage may be done using an employee’s password without that employee’s knowledge. However, because the authentication process requires both PIN and tokencode and is logged, it provides far more assurance of employee involvement in any unauthorized activity. Awareness of this, and of the comprehensive logging, helps employees recognize their accountability for information security, so their behavior becomes more secure.

The HP e3000 Agent Solution

The chemical company to which CSC provides IT services decided to roll out Two-Factor Token Authentication across all platforms within one year. NT and MPE were selected as pilots: NT because of the large number of servers running that environment, and MPE because this platform was thought to be different from all the others and harder to implement. The company also recognized the importance of its 3000-based Order Fulfillment Process, which it runs with many different outside partners.

RSA’s first attempt to develop an agent for MPE was very simple: a token had to be configured for one combination of MPE-USER-ID.MPE-ACCOUNT, and that combination could not be reused on another token. It was not possible to use wildcards, or to add SESSION-IDs or MPE-GROUPs to make a complete logon string. Because logons are commonly shared on MPE (at all levels of capability), this version of the agent was not what we were looking for. (Put more drastically: this agent could not function on the MPE platform.)

The second attempt was much better: everything was adapted to the chemical company’s existing Security/3000 setup. Now Security/3000 invokes the RSA Agent to contact the RSA Server, transmitting either the SESSION-ID or the MPE-USER-ID as the name of the token. If the token is known and allowed to access the HP 3000, the agent asks the user for the current tokencode plus PIN.

This agent also functions without Security/3000, by adding some lines to the system’s logon UDC. That approach gives up some of the extra functions that come with Security/3000, such as always verifying a user profile first (SESSION-ID,MPE-USER-ID.MPE-ACCOUNT must be defined as an allowed logon in Security/3000; all others are refused before anything else starts), but it will work.

One thing is essential: the RSA Agent for MPE does not replace the MPE password process the way it does on Unix or NT! It is activated only after the HELLO string has been entered, the MPE password hurdle (Account, User, and/or Group password) has been passed, and, optionally, Security/3000’s basic check for profile existence has succeeded. Only then are the other logon UDC functions invoked, and these activate the RSA Agent.

With Security/3000 in place, it’s a good idea to replace the session passwords (if any) with the tokencode prompt.

Without session names in place, the RSA Agent simply adds an additional password. I do not recommend eliminating the MPE password; it’s still a fence around your system and is needed for batch security (depending on the streaming security you have in place).

Having activated the RSA Agent, a logon sequence will look like Figure 1. The setup within Security/3000’s SECURCON.DATA.VESOFT file is shown in Figure 2.

Figure 1

SYSTEMA:hello paul,manager.sys
ENTER USER (MANAGER) PASSWORD: xxxxxxxxxxx

HP3000  Release: C.55.00   User Version: C.55.00   FRI, JUN  9, 2000,  7:34 PM
MPE/iX  HP31900 C.05.08  Copyright Hewlett-Packard 1987.  All rights reserved.
*************************************************************
                  This is a private computer facility.
      Access to it for any reason must be specifically authorized.
      Unauthorized access to this computer facility will expose you
      to criminal and/or civil proceedings.

      All information contained in this computer system,
      including messages, is the property of the company.
      The company reserves the right to access and disclose
      all information sent through or stored in this computer
      for any purpose.

************************************************************
               ********************************************
               You are at: Bad Homburg   SYSTEMA Series 960
               ********************************************

Enter PASSCODE:
PASSCODE Accepted
Welcome!  You are now signed on.
END OF PROGRAM
SYSTEMA: 

Figure 2

(* Setup for the RSA ACE SecurID Agent on HP3000    *)
(* =============================================    *)
(* Usersets if needed and wanted for later use      *)
$DEFINE-USERSET MPESEC &
            @[email protected]
(* SEC/3000 Session Names to identify individuals   *)
(* 1st line: all Online Logons                      *)
(* 2nd line: Accounts secured with MPE Methode      *)
(* 3rd line: general Exceptions                    *)
(* 4th line: AM user of MPE Methode (2nd Userset)   *)
$FORBID
"MPE('/var/ace/sdshell ![DWNS(HPJOBNAME)]') <> 0"
"SDI ACE Failed"
@,@.@&ONLINE-LDEV=20-LDEV=21-&
!MPESEC-&
@.TELESUP-SYSTEMB,XEALRMON.XFER-M,@.XFER-S,@.XFER
!MPESEC&ONLINE&CAP=AM

(* MPE UserIDs to identify individuals              *)
$FORBID
"MPE('/var/ace/sdshell ![DWNS(HPUSER)]') <> 0"
"SDI ACE Failed"
!MPESEC&ONLINE&CAP<>AM

(* Needed for both Methods                          *)
(* Exception Line: general Exceptions              *)
$FORBID
"IVAR(';SECURID',1) = 1"
"Invalid SecurID PASSCODE"
@,@.@&ONLINE-LDEV=20-LDEV=21-&
@.TELESUP-SYSTEMB,XEALRMON.XFER-M,@.XFER-S,@.XFER

Here are our detailed notes on the Security/3000 configuration.

$DEFINE-USERSET: If in your current setup you identify individuals by their session name or their MPE-USER-ID, it’s a good idea to define a userset for later use.

$FORBID: In this section, for all accounts where the session name identifies an individual, the session name is downshifted (the convention for defining the token on the ACE Server) and the program /var/ace/sdshell, the RSA MPE agent, is executed. If the program’s error code is not zero, the logon is rejected. This happens when the token name the ACE Server received is not allowed to access this HP 3000 server, or is not known at all.

This $FORBID process must be executed for all sessions (but not for batch logons) except the console and the remote console. (If the network is down, the agent cannot authenticate against the ACE Server.) Further exceptions are all accounts that identify the individual by the MPE-USER-ID, and some special accounts such as TELESUP or EDI accounts (here: XFER).

The second userset within this $FORBID statement covers all Account Managers using the MPE-USER-ID as the identifier. Here it’s assumed that individuals share one AM logon like MANAGER or MGR and are distinguished by the SESSION-ID. The next $FORBID defines the same setup for accounts where the MPE-USER-ID identifies an individual; there, the MPE-USER-ID is used for the ACE Server authentication process. This is needed for all session logons in those accounts except the AM logons.

The third $FORBID checks a variable named SECURID, which the ACE Agent sets. If the token name is known and allowed for this server, but the code transmitted is wrong (the PIN and current tokencode for the token identified by !HPJOBNAME or !HPUSER), the logon is rejected with the error message “Invalid SecurID PASSCODE” (variable SECURID is 1). Otherwise the logon is accepted (and logged as successful on the ACE Server, just as all unsuccessful attempts are logged), and the individual gains access to the HP 3000. This check must take place for all sessions except those on the two consoles and the general exceptions.

This setup is proven and stable. Because of Security/3000’s flexibility, almost any other existing security concept can be built on top of it, as long as the current setup already distinguishes between individuals. If it does not, for whatever reason, that must obviously be changed first, because every authentication process depends on it.

A similar setup is possible within a system’s logon UDC if Security/3000 is not in place. It becomes more complicated (some IF clauses, and so on) and requires more effort whenever the setup changes. But it is possible, because the logic of the ACE Agent and ACE Server is simple (a hedged UDC sketch follows the decision logic below):

Do I know the name of the token transmitted?

NO: “SDI ACE failed.”

Is this token allowed for that network component?

NO: “SDI ACE failed.”

YES: “Enter PASSCODE:”

Is the right PIN and tokencode transmitted?

NO: “Invalid SecurID PASSCODE.”

YES: “PASSCODE accepted”

Access granted!
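
For the UDC route, here is a minimal, hypothetical sketch of what such a system logon UDC could look like. It is not RSA’s or VESOFT’s sample code: the UDC name, the variable handling, and the assumption that the agent sets a SECURID variable on a bad PASSCODE (as it does in the Security/3000 setup above) are ours to adapt, and the exact agent invocation and status handling must be verified against the agent’s documentation.

SECLOGON
OPTION LOGON, NOBREAK
COMMENT -- Sessions only; skip the console (ldev 20) and remote console (21).
IF HPINTERACTIVE AND HPLDEVIN <> 20 AND HPLDEVIN <> 21 THEN
   COMMENT -- Downshift the session name into the token name, as the
   COMMENT -- Security/3000 setup does; real setups pick HPJOBNAME or
   COMMENT -- HPUSER per account (see Figure 2).
   SETVAR _TOKEN DWNS(HPJOBNAME)
   COMMENT -- Invoke the RSA MPE agent; it prompts for the PASSCODE.
   XEQ /var/ace/sdshell !_TOKEN
   COMMENT -- Assumption: the agent sets SECURID=1 on a failed PASSCODE.
   COMMENT -- Security/3000 also tests the agent's return code ("SDI ACE
   COMMENT -- Failed"); from a bare UDC, check JCW or CIERROR as needed.
   IF BOUND(SECURID) AND SECURID = 1 THEN
      ECHO Invalid SecurID PASSCODE
      BYE
   ENDIF
   DELETEVAR _TOKEN
ENDIF
***

Cataloged system-wide (:SETCATALOG SECUDC.PUB.SYS;SYSTEM, with an invented file name), this runs at every logon; the refinements Security/3000 provides for free, such as usersets and per-account exceptions, would have to be rebuilt as further IF clauses.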

What if the ACE Server and/or the network is unavailable for any reason, so that no authentication can take place? In that case a system manager must use the specific parameter that bypasses the system’s logon UDCs, disabling this authentication security. For this reason it’s highly recommended to switch off the ENFORCE LOGON UDCS setting in SYSGEN (enforcelogonudcs OFF), but to put a hard-to-guess password on every MPE-USER-ID with SM capability (only one, I hope).

Also, in this situation the only remaining password is obviously the MPE-USER-ID password (and/or the ACCOUNT password). It’s a good idea to prepare in advance the alternate SECURCON.DATA.VESOFT configuration file that will be needed for this case.
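
One low-tech way to keep that fallback ready is to hold both configurations on the system and swap file names when the ACE Server is unreachable. A sketch, in which every file name except SECURCON.DATA.VESOFT is invented:

COMMENT ** Hypothetical swap to the no-authentication configuration. **
RENAME SECURCON.DATA.VESOFT, SECURACE.DATA.VESOFT
RENAME SECURNOA.DATA.VESOFT, SECURCON.DATA.VESOFT

Swap the names back once the ACE Server is reachable again; whether Security/3000 picks up the new file immediately or only at the next logon is something to verify against the VESOFT documentation.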

For our chemical customer this is improbable: a second ACE Server is in place with automatic switchover if the first is unavailable, and the network topology is redundant. For smaller companies, though, this may become an issue. Don’t expect RSA to port its ACE Server software to MPE to sidestep such problems at an MPE-only site.

Rollout to an HP e3000 platform

Assuming your company wants such authentication for MPE, it’s important to plan it in great detail. There are some dependencies that need to be observed.

You may keep your existing naming convention for distinguishing individuals (something like six letters of the surname plus the first letter of the first name, used for SESSION-IDs or MPE-USER-IDs), or create a new one for naming the tokens (such as randomized names from a name generator); the choice determines how much you must change within the HP 3000 security setup. Remember that such authentication may be used company-wide, serving every platform in place (mainframe, midrange, PC, network components), and that each individual possesses only one token for all platforms. That token name is unique and usable on all platforms.

In any case, you’ll need a complete inventory of the individuals accessing the HP 3000, including all of their individual logons. This list can be created from Security/3000 itself; it then needs to be consolidated and expanded with each individual’s token name. If a new naming convention is to be used, all MPE logon profiles (SESSION-ID,MPE-USER-ID.MPE-ACCOUNT) must be changed accordingly.
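
If you’d rather pull the raw logon inventory straight from MPE as a cross-check, something like the following works on recent MPE/iX releases that support CI redirection (the output file name is invented for this sketch):

LISTUSER @.@ > USERINV
PRINT USERINV

The Security/3000-based extraction itself would come from VESOFT’s own tools, so consult that documentation for the equivalent report.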

Existing logon characteristics (!HPUSER, !HPJOBNAME) may also be used within application configurations and setups; those must be changed in parallel. And the authentication process at the MPE system level itself must be activated first.

Lastly, everybody who used automatic logon scripts will cry: they no longer work! Whether such scripts (containing passwords) were allowed in the past depends on your setup. If they were, you have to convince those users that the practice was never secure. Either way, you can be sure nobody will keep using such scripts, at least not to supply the tokencode (though perhaps the PIN, which is not recommended)!

You may work on an application-by-application basis, but this will negatively affect individuals who work in more than one application. (For example, Application A already requires the SecurID token while Application B still uses the old method.)

Don’t underestimate this effort! It’s very easy to understand the process, but it is hard to roll this out in a living production environment, possibly serving many different user locations. It’s possible, but it needs a dedicated and experienced project management team to deal with all the different groups involved: techies, network people, security admin people, HR, the token vendor, application support, user help desks, user supervisors, end-users and more. People management may become more important than technology management!

Summary

Two-Factor Token Authentication is a state-of-the-art process for eliminating static passwords. RSA Security Dynamics provides an MPE Agent for this purpose, which worked perfectly for us with Security/3000 and also with basic MPE security. The technical approach is not simple, but it is manageable. The main problems arise during the rollout, because people cling to familiar procedures and resist change, especially where security is concerned. But for staying on the HP 3000 into the future, the effort is worth it, especially for the better security.


A Day That Will Always Be Marked in Red

[Photo: calendar pages]

Back in 2005, November was still a month bleeding in the red ink of memory. You can use shorthand and say "November 2001." Or you can say it was the day HP's 3000 music died. November 14, 2001 still marks the start of the post-HP era for MPE/iX, as well as for the 3000 hardware HP sold. It took HP another two years to stop selling the PA-RISC servers it had revamped with new models just months before the exit-the-market announcement. PCI-based N-Class and A-Class, the market hardly knew ye before you were branded as legacy technology.

For a few years I stopped telling this story on the anniversary, but in 2005 I cut a podcast about the history of this enterprise misstep. HP lost its faith in 2001, but the customers hadn't lost theirs, and the system did not lose its life. Not after November 14, and not even today. Not a single server has been manufactured since late 2003, and even that lack of new iron hasn't killed MPE/iX. The Stromasys emulator Charon will keep the OS running in production even beyond January 2028, the date when MPE/iX is supposed to stop keeping accurate dates.

Red Letter Days were so coined because they appeared on church calendars in red, marking the dates set aside for saints. In 1549 the first Book of Common Prayer included a calendar with holy days marked in red ink; for example, Annunciation (Lady Day), 25th March. These were high holy days and holidays. The HP 3000 came into HP's product line during a November in 1972. "November is a Happening" read the banners in the HP Data Systems Division. No day of that month was specified, but you might imagine it was November 14, 1972. That was a Tuesday, while the 2001 date fell on a Wednesday. A total of 1,513 weeks of HP faith.

Something important happened in that other November, 29 years later. Hewlett-Packard sent its customers into independent mode. Those who remained faithful have had a day to mark each year, logging the number of years they've been creating their own future. It's 20 and counting as of this year.


HP knew nothing of November during October

[Photo: Sgt. Schultz, who saw nothing, nothing]

From October, 2001

Just weeks before HP started to brief its vendor partners about the 3000 futures cut-off, customers asked about it. In the public forum of a webinar, the 3000's vendor relations manager, its product planning manager, and its customer spokesman all said they knew nothing about the 3000 leaving HP's fold.

The questions surfaced in an October 2001 broadcast. On November 14, the company released its public statements. I was briefed on Nov. 9, and vendors leaked their notifications during the first week of November.

If nobody on that October webinar knew about the end of the 3000 business line, HP was certainly holding its decision as closely as a riverboat gambler's hand. Or perhaps a certain German sergeant on TV was the template for the answers.

After a few minutes of questions about support for disk mirroring, boot drives greater than 4Gb, and other oft-asked chestnuts, HP began to address a number of questions about the impact of the merger on the 3000 product line. Customers asked about a published report in Network World magazine, wondering if the system was likely to survive the merger.

“I sure wish I knew the answer to that,” said Kriss Rant, CSY’s manager in charge of developer relations and a division veteran. “I don’t know any more than you do.”

“Whenever there’s a large merger like this, the press has a field day,” host George Stachnik added, “speculating on exactly what it’s going to mean. I can tell you that nobody in the 3000 business has received any marching orders from Compaq or upper HP management that OpenVMS, MPE or any other operating system is supposed to survive or not. There’s been no decisions made on that. Don’t give too much credence to it.”

Platform Planning Manager Dave Snow noted that HP did a “total roll of our product line in February, and we’re delivering multiple processor support. I certainly think you can expect there will be support of MPE for many years to come.”

Other questions on the merger got a broad brush answer from Stachnik. “The correct answer at this point is, ‘We really don’t know,’ ” he said. “There are lots of open questions about whether that merger is even going to happen. The SEC needs to look at it, and there’s been all sorts of speculation in the press.

"How it’s going to impact the 3000 — we simply don’t know at this point. We’ve gotten no marching orders one way or the other, and I’m not anticipating we’re getting them anytime in the near future.”