Previous month:
November 2021
Next month:
January 2022

December 2021

Understanding how to use Mirrored Disk on HP 3000s

HP’s add-on product provides vital disaster recovery, but you’ll need advice on set-up, disk errors and split-volumes

By Andreas Schmidt

Mirrored Disk/iX is an optional subsystem for HP 3000 mission critical systems, and it’s vital to guaranteeing high availability of your company’s data. This article explains the fundamentals of HP’s Mirrored Disk/iX: how to set up volume sets, how to deal with disk errors and how to establish a split-volume backup.

The complete resource on this subject is, of course, the HP manual for Mirrored Disk/iX (User’s Guide, HP Part No. 30349-90003). Some of what follows is based on it. But we all know that a summary sometimes helps more than reading through a complete HP manual — especially if you are under pressure in a delicate situation.

What are mirrored disks?

Mirrored Disk/iX is a subsystem for HP 3000s which needs to be ordered separately. The installation follows the normal subsystem Installation Process as documented in the Installation Manual. Mirrored Disk/iX is designed to work only with non-system volumes. To make it very clear: Mirrored Disk/iX does not support mirroring the HP 3000’s system volumes.

It supports disk drives that use HP-FL cards or NIO SCSI cards. But mirrored partners must be the same model of fiber-link drive or NIO SCSI drive, and mirrored partners must be connected to different HP-FL or NIO SCSI cards. Otherwise a single point of failure would still exist.

Mirrored disks are designed to provide high data availability by automatically maintaining identical information on two partner disks. When an application writes to a disk, disk mirroring causes the information to be written to both drive partners. When an application reads from a disk, there are two places to access the requested data. This may give performance benefits on large systems which do a lot of reads for queries but only a few writes to the same data. Applications running on the system are unaware that disk mirroring is present.
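The write-to-both, read-from-either behavior can be sketched in a few lines of Python. This is a toy model for illustration only — the class and its names are hypothetical, not HP's implementation, which lives inside the MPE/iX I/O system:

```python
# Toy sketch of mirrored-disk semantics: writes go to both partners,
# reads may be served by either one.
import random

class MirroredPair:
    def __init__(self):
        self.partners = [dict(), dict()]   # two "disks" as block maps
        self.disabled = set()              # indices of failed partners

    def write(self, block, data):
        # A write is applied to every partner still in service.
        for i, disk in enumerate(self.partners):
            if i not in self.disabled:
                disk[block] = data

    def read(self, block):
        # A read can be satisfied from any healthy partner.
        healthy = [i for i in range(2) if i not in self.disabled]
        return self.partners[random.choice(healthy)][block]

pair = MirroredPair()
pair.write(7, b"payroll")
pair.disabled.add(0)        # partner 0 fails: data stays available
print(pair.read(7))         # b'payroll'
```

The application above never learns that partner 0 failed — which is exactly the transparency the article describes.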

Once disk mirrors have been established using the VOLUTIL utility, a mirrored disk acts just like any other disk connected to the system, until a disk failure occurs. If either disk of any pair fails, normal system operation continues. When the partner is ready to resume operation, the system copies data from the good disk, bringing the pair to a consistent state, and normal mirroring resumes.

Once mirrored disks have been installed, you can use them like any other disks connected to the system. Additionally, you can perform split-volume backup of mirrored disk data while still accessing the data.

So, Mirrored Disk/iX supports the following features:

High data availability: System automatically maintains identical information on two partner disks. Users continue to access data if either disk of any pair is disabled or under repair.

Reduced downtime: Users continue to access data while system performs file backup.

Disk failure recovery: System detects failed drive, continues to run application, and discontinues mirroring until drive is repaired.

Resume mirroring: System allows for the removal of the failed drive from the pair and the mounting of another drive in its place while the system is running, then copies data to the new drive and resumes disk mirroring.

Data consistency: System writes data to both partners of a mirrored pair, so data is always consistent, even during the repair process.

The installation of Mirrored Disk/iX is easy:

• Use the SYSGEN utility to configure the disks into the system.

• Install the disk hardware.

• Boot the system with the new configuration.

• Use the AUTOINST utility to install the mirrored disk software.

• Use the VOLUTIL utility to create a mirrored volume set.

• Move files, if necessary.

• Set up accounts and groups.

The subsystem installation of Mirrored Disk/iX enhances the HP 3000’s VOLUTIL commands. HP provides both the VOLUTIL and MIRVUTIL commands to make life easier for the System Manager; once Mirrored Disk/iX has been installed, their functionality is identical.

Please be careful: The Create Volumes (CV) capability is required to use VOLUTIL to initialize mirrored volumes. You also need it to input system commands from the system console to perform split-volume backups.

How to set up volume sets

Assuming that the subsystem has been installed and the hardware has been plugged in and configured, the new volumes will be in state SCRATCH or UNKNOWN. Verify this via :DSTAT ALL

30-079370          SCRATCH
31-079370          SCRATCH
32-079370          SCRATCH
33-079370          SCRATCH

Now invoke :VOLUTIL or :MIRVUTIL and create the new set’s master disk as mirrored pair:


and verify via DSTAT:

volutil: :DSTAT ALL

30-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
31-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
32-079370          SCRATCH
33-079370          SCRATCH

Now add volumes in this mirrored set. These volumes must be in state SCRATCH or UNKNOWN.


and check via DSTAT again:

volutil: :DSTAT ALL

30-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
31-079370          MASTER-MD      MEMBER1 (PROD_SET-0)
32-079370          MEMBER-MD      MEMBER2 (PROD_SET-0)
33-079370          MEMBER-MD      MEMBER2 (PROD_SET-0)

In VOLUTIL, the command SHOWSET with the MIRROR option (added by the subsystem installation) will show the state of the mirrored set:


Volume Name   Vol Status     Mirr Status          Ldev      Mirr Ldev
MEMBER1       MASTER        NORMAL                  30       31
MEMBER1       MASTER        NORMAL                  31       30
MEMBER2       MEMBER        NORMAL                  32       33
MEMBER2       MEMBER        NORMAL                  33       32

How to deal with disk errors

There are two types of disk errors: a disk error after mount (during normal operation), and a disk that cannot be mounted (already defective when booting the box).

Disk Error after Mount: If a disk has a problem after the mount, the system will continue to work automatically with only one disk of the affected pair.

Possibilities for Disk Errors are:

Disk reports errors — The disk immediately becomes DISABLED, and the system continues to work, without any interruption, without mirroring for the affected volume.

Disk does not answer any longer — The system waits about two minutes for an answer from the disk. During this period, all I/O processes are suspended. If the disk answers within this interval, the system continues to work with mirroring for this pair. If the disk does not answer, it becomes DISABLED, and the system continues without mirroring for the affected volume.
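The two failure modes just described amount to a small state machine. Here is a minimal sketch in Python — the function name, event strings, and state names are illustrative assumptions, not HP's internal logic; only the 120-second timeout comes from the text:

```python
# Sketch of the two Mirrored Disk/iX failure modes described above.
def handle_disk_event(event, waited_seconds=0):
    """Return the resulting state for one mirrored partner."""
    if event == "reports-errors":
        # Immediate: partner is DISABLED, set keeps running unmirrored.
        return "DISABLED"
    if event == "no-answer":
        # I/O is suspended for roughly two minutes while waiting.
        if waited_seconds < 120:
            return "SUSPENDED-IO"      # still waiting for the disk
        return "DISABLED"              # timeout: continue unmirrored
    return "NORMAL"

print(handle_disk_event("reports-errors"))                 # DISABLED
print(handle_disk_event("no-answer", waited_seconds=30))   # SUSPENDED-IO
print(handle_disk_event("no-answer", waited_seconds=120))  # DISABLED
```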

Here’s an example: During normal operation LDEV 32 fails. The following message will appear on the Console:


:REPLY 22,Y

The system continues to run. The problem is that this message and the reply may be overlooked on big systems, so it’s recommended to have a monitor in place. HP’s OpenView Operations Center (a.k.a. IT/O OpC) is one option.

A check using DSTAT and VOLUTIL will show the following:


30-079370     MASTER-MD          MEMBER1 (PROD_SET-0)
31-079370     MASTER-MD          MEMBER1 (PROD_SET-0)
32-079370     *DISABLED-MD       MEMBER2 (PROD_SET-0)
33-079370     MEMBER-MD          MEMBER2 (PROD_SET-0)


Volume Name     Vol Status           Mirr Status       Ldev      Mirr Ldev
MEMBER1            MASTER             NORMAL           30           31
MEMBER1            MASTER             NORMAL           31           30
MEMBER2            MEMBER             DISABLED         32           33
MEMBER2            MEMBER             NON-MIRROR       33           32

Having repaired or exchanged the disk, re-establish the mirroring by issuing the command volutil: REPLACEMIRRVOL PROD_SET:MEMBER2 32. Check it via SHOWSET:


Volume Name Vol Status    Mirr Status              Ldev  Mirr Ldev
MEMBER1         MASTER     NORMAL                  30      31
MEMBER1         MASTER     NORMAL                  31      30
MEMBER2         MEMBER     REPAIR-DEST             32      33
MEMBER2         MEMBER     REPAIR-SRCE             33      32

The repair happens automatically; no further intervention is needed. This process is fully transparent to applications and users of the data on the affected volume set.

Disk Error before Mount: Here’s another example: LDEV 33 cannot be mounted during the system’s startup. The following message will appear on the console:


The system sets the “good” mirrored disk LDEV 32 to PENDING and waits for SUSPENDMIRRVOL to allow LDEV 32 to work without its mirrored partner:


:REPLY 22,Y

This volume (here: MEMBER2) will stay unavailable until either the mirrored partner (here: LDEV 33) becomes available again, or the mirror is suspended via the SUSPENDMIRRVOL command. DSTAT will show the following:



By using SUSPENDMIRRVOL, the mirror is suspended and the disk works without a mirror. From now on, this disk must not fail — it’s a single point of failure! Note that SUSPENDMIRRVOL works only for disks in state PENDING.


MEMBER2 is now available again, but without a mirror:


Volume Name Vol Status Mirr Status Ldev Mirr Ldev

Having repaired the disk (LDEV 33), the mirror must become active again:

33-079370 SCRATCH

By using REPLACEMIRRVOL, the repaired LDEV 33 will be initialized as the mirror for LDEV 32:


You can verify this with SHOWSET:


Volume Name Vol Status Mirr Status Ldev Mirr Ldev

Split-Volume Backup

Another feature of Mirrored Disk/iX is the way it can make backups. It is only necessary to save the contents of (physically speaking) one disk of each mirrored pair. This is possible via the split-volume backup. To split a volume set, no user may remain logged on to this volume set; you can warn them with the command :TELL @ LOGOFF FOR BACKUP. One half of the volume set then becomes unavailable:



Now make both splits (USER and BACKUP) available via VSOPEN: :VSOPEN PROD_SET


Now the PROD_SET is available for production, but without mirroring:



Start the backup with the commands :FILE T;DEV=TAPE and :STORE /; *T; SPLITVS=PROD_SET; SHOW. This backup is compatible with a “normal” STORE.

Having finished the backup, the split can be canceled:


SOURCE=USER tells the system to synchronize the BACKUP split with the USER split during the JOIN and the subsequent REPAIR, so online users can continue to work. The status in VOLUTIL during the synchronization will look like:


Volume Name Vol Status Mirr Status Ldev Mirr Ldev

For performance reasons, at most six mirrored pairs are synchronized at the same time: as soon as one of the six pairs finishes, the next waiting pair is processed.
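The "at most six pairs at a time" rule is ordinary bounded concurrency. A minimal sketch with a counting semaphore, assuming hypothetical pair names (this models the scheduling pattern, not HP's actual repair code):

```python
# Bounded-concurrency sketch: no more than six "repairs" run at once;
# as each slot frees up, the next waiting pair proceeds.
import threading

MAX_CONCURRENT = 6
slots = threading.BoundedSemaphore(MAX_CONCURRENT)
order = []

def repair(pair):
    with slots:                 # blocks if six repairs are already running
        order.append(pair)      # stand-in for copying data to the partner

threads = [threading.Thread(target=repair, args=(f"pair-{n}",))
           for n in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(order))               # 10 -- every pair is eventually repaired
```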

This is an optional way to make a backup — as I said earlier, Mirrored Disk/iX is fully transparent to all applications. We’re still using TurboStore/iX online to back up our mirrored volume sets, and we haven’t encountered any problems caused by Mirrored Disk/iX.


Mirrored Disk/iX is a MUST for mission critical systems to guarantee high availability of the data. In case of problems with the physical disks, the data stays available. The automatic repair processes in Mirrored Disk/iX are transparent to the users. The procedures are quite easy — but you must know them! This article should help all HP 3000 System Managers keep them handy.

3000 Network Hardware: Routers and Switches and Hubs, Oh My!

HP 3000 hardware networking can be like a trip down a Yellow Brick Road

By Curtis Larsen

Auntie MAU! Auntie MAU! A Twisted Pair! A Twisted Pair!

Once upon a time, networks were as flat as the Kansas prairie, and computers on them were a lot like early prairie farmsteads: few and far between, pretty much speaking to each other only when they had to. (“Business looks good again this year.” “Yep.”) Most systems still used dumb terminals, and when speaking to anything outside the LAN, system-to-system modem connections were the way to do it.

A tornado named the Internet appeared in this landscape. It uprooted established standards and practices, swept aside protocols and speed limitations, and took us into a Technicolor networking landscape very different than what was there before.

Toto, I get the feeling our packets aren’t in Kansas anymore

Smaller companies were tossed before the tornado to eventually land and quickly begin growing again in the new environment. Large companies like IBM, HP, Digital, and Microsoft, who were rooted and established in their own proprietary standards (it sounds like an oxymoron, but it’s true) survived by generally ignoring the howling winds. Eventually, munchkin-like, they all came out to see what the general fuss was about, and found that a house-sized chunk of change (pun intended) had landed.

Networking, and the TCP/IP protocol, had truly arrived in style, bringing strange new applications and markets. Serial connections and proprietary networking (“What do you mean we don’t need SNA to connect to the Wichita office anymore?”) gave way to a new kid on the block. And her little dog, too.

Follow the Yellow-Colored-Cable-and-Labeled-at-Both-Ends Road!

So here we are, sitting in a strange new networking land of strange new networking things. And for some of us, trying to understand the whole of it all — especially in relation to “legacy” systems like the HP e3000 — is a little daunting. What are all these networking black boxes we plug the system into, and what do they all do? How can they make life better? (How can they make life worse?) If you’re not sure (or just plain curious) read on.

We’re off to see the wizard — this wonderful wizard of ours!

The networking wizard of your HP e3000 system is a program named “NMMGR.” It allows you to define networking hardware and how to create connections with it. But what things can you define? Before we talk about connecting to things, we should probably take a crash-course in the things you’re connecting to.

Which path do I take? Well, you could take this one, that one, or both...

The basic networking boxes you’ll connect to are hubs, routers, switches, bridges, and gateways. Oh My. Let’s take them one at a time.

Since life is like an analogy, I’ll stretch one for the hub to go like this: If your network traffic is like water through a hose, then a hub is like a splitter, allowing multiple exits. Generally speaking, a hub simply splits the traffic from the “incoming” line into each connected port “out.” This is cheap and simple to set up if you don’t have a lot of connections, but like too many divisions on any hose, too many hubs will make the end connections anemic. The fewer connections the better, so most hubs have no more than 24 ports total.

Obviously, to make things better for all connections in larger networks, more “water pressure” was needed — and the switch was born.

Pay no attention to that man behind the curtain!

No, I’m not talking about the System Administrator. A switch looks very similar to a hub, but the resemblance ends there. Again, if your network is like a stream of water in a hose, then your garden-variety switch is like a water tank, adding pressure to the line. Huge water tanks are placed at the heart of a city’s water system, while small tanks are placed on buildings. At the heart of most networks — tended by a cooing Network Administrator — is a core switch (the main tank).

Additional “work group” switches (building-sized tanks) can be used in wiring closets for special-need areas of the network. So, although a hub and a switch both offer multiple connections, the resulting “streams” have vastly different origin and force. Now that we’re one big speedy networking family, no one minds if it all fails, right? No? Well love can build a bridge, and so can electronics.

My Network’s crashing… What a world! What a world…

Having all your network connections on one physical segment isn’t too grand — especially when it fails. By segregating physical networks and then “bridging” them together, you ensure that in the face of adversity, some people can still laugh at the ones who can’t work. Aquariously speaking, a simple bridge is like a valved pipe between two water systems, passing water in both directions, and shutting one side’s valve if that system “loses pressure” (goes down). You say you want to route water based on content? Well then.

This here’s the ‘Packet of a Different Header’ you’ve heard about

Simply put, a basic router is an intelligent (logically) one-way bridge, examining network data information and very quickly sending data packets down one line or another. In our epic analogy, a router could be a thermal valve, forcing only cold water to flow this way, and hot water that way, preserving us from the heartbreak of tepidity. Since the router has to work quickly, it usually works at a lower level than other equipment does, caring less about content and more about destination. You say you’d like to exchange hot water with someone else? You’d like the gate to swing both ways?

There’s no place like the home network! No place like the home network…

A router is excellent at sending packets from Here to There (and not necessarily Back Again), but nothing beats the gateway for two-way communication. A gateway takes data from one network and sends it to another, even re-creating the data packet on the other side, if need be. To stretch our analogy to its limits, we could say that two different water systems exist, having the same characteristics, including temperature. One system is chlorinated, while the other is not, and so simply allowing the water to pass unmolested would be an issue — one system would become diluted, and the other exposed. What we need is a filtration pump that allows the water to be pumped in either direction, adding chlorine one way, and taking it out in the other direction.

Connecting to the Internet requires a gateway, since your home network doesn’t “know” how to reach something out there. What it does know is how to hand off a data packet destined for “Not Here” to a gateway for processing. The gateway in turn checks the packet’s address and sends it to the best possible network closer to the packet’s Ultimate Destination, re-labeling the packet as it does so, and putting its own address in the packet’s “return address.” If the packet’s Ultimate Destination isn’t on the new network either, then the gateway there does the same thing until the packet finally hits the Emerald City.

On its way back home, because of all the “return addresses” it picked up, the packet passes back through each gateway that it came from until, clicking its little ruby slippers, the packet realizes it is in no place but home.
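The hop-by-hop behavior described above can be modeled in a few lines of Python. This is a toy simulation of the article's description (gateway names and the packet structure are made up for the example; real IP routing works differently in detail):

```python
# Toy model: each gateway stamps its own address onto the packet's
# return path before passing it on; the homeward trip simply walks
# that path in reverse.
def forward(packet, route):
    """Send a packet through a list of gateways to its destination."""
    for gateway in route:
        packet["return_path"].append(gateway)  # breadcrumb for the way home
    packet["delivered"] = True
    return packet

pkt = {"dest": "emerald-city", "return_path": [], "delivered": False}
forward(pkt, ["gw-home", "gw-isp", "gw-backbone"])

# The reply retraces the gateways in reverse order.
home_route = list(reversed(pkt["return_path"]))
print(pkt["return_path"])   # ['gw-home', 'gw-isp', 'gw-backbone']
print(home_route)           # ['gw-backbone', 'gw-isp', 'gw-home']
```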

Because of its intensive examination work, a gateway is almost always dedicated to its task, especially on larger networks. It was the gateway’s filtering abilities that led to using them as a firewall to protect networks by purposefully filtering and/or denying different types of connections and data. But the firewall is a topic all its own — just make sure you use one!

And you were there, and you, and you.
Oh Auntie Carly, there’s no place like HP!

So there you have it — Networking Devices 101. Now that you know what you can connect your e3000 to, you can come up with some ideas on how to use them, and answer questions about what to connect to. Should an e3000 be connected to a hub, or to a switch? (Switch!) Does a printer need to be connected to a hub or a switch? (A hub will usually be fine.) Should I use my e3000 as a gateway? (I think not.) Should the physical part of my network the e3000 is on be bridged? (Yes.) Can I configure a gateway and connect my e3000 to the Internet? (Certainly. But make sure you have a firewall first!) Can I use other protocols or connections besides TCP/IP and Ethernet? (Absolutely! X.25, SNA, FDDI, and a number of other connections are available, but they change a lot, so check with your favorite sales rep first.)

So long as HP continues to expand and extend the capabilities of their workhorse system, the e3000 will continue to be the perfect business computer choice. As everyone who uses and loves it knows – stick with it and your business just keeps flowing along, leaving your competitors all wet.

Remote storage emerges as HP 3000 solution

The parts of a legacy IT shop most likely to fail are the moving parts. Old servers usually run off old disk drives. Some of the drives in the HP and Digital legacy shops hail from the 1990s. It’s as if the trouble coming with the hardware is on the move, heading on a direct course to a loss of service.

Beechglen Development has a solution that steps between a legacy server and the applications’ need for data. The BGDSAN creates a storage area network’s individual drives using a network link to a customer’s on-premise legacy server.

Virtual arrays were state-of-the-art storage solutions 20 years ago. Plenty of these VA devices still serve legacy iron. Beechglen says its “managed Storage-as-a-Service (STaaS) solution is typically comparable in cost to your existing hardware support contract.”

In one interesting wrinkle, Beechglen says it builds its BGDSAN units exclusively from HP hardware. Some of the customized configurations for this STaaS solution can include SSD units. Repairing VA arrays like the VA7410 requires parts that are not new or refurbished. “These are just used drives that have not failed yet,” the company says.

HP’s other array solution for HP 3000 and HP 9000 systems is the XP line. Both VA and XP arrays require costly hardware support compared to the BGDSAN.

Ready to restart

Then there is the ability to restart devices after a shutdown.

“Many disks will continue spinning forever – provided they never stop spinning,” Beechglen says. “Over the years we’ve seen countless disks powered down for planned maintenance that don’t spin back up. The bearings cool off and seize the motor. Or the bearings are warped from years of running hot resulting in disks that just cannot come back to operational speed after a complete stoppage.”

Remote computing is the default today for modern IT architecture. Azure Cloud services deliver virtualization servers. The scope of off-premise computing is virtually complete. If there is a spinning disk in a legacy IT shop, it’s as classic as a terminal attached via cables.

HP-labeled hardware is always going to have a terminus, because they’re not building 3000s anymore. The peripherals will see their finale, too. It could well turn out that the Charon emulation solution will be the only data route that runs into the end of the 2020s, and maybe beyond. They keep making faster Intel hardware.

Now, HP's Unix transitions to legacy

Hewlett-Packard Enterprise has issued dates to terminate support for two releases of its HP-UX Unix environment. Next year will mark the end of HPE’s support for HP-UX 11.11 and 11.12. The final, terminal version of HP-UX, 11.31, is already in the MPS category. This Mature Product Support repairs crucial bugs. HPE adds that this support level is “without sustaining engineering.”

MPS is a milestone that the MPE/iX operating system visited in 2007. In this state, the operating system is frozen for features. The legacy managers in the HP 3000 market found a silver lining in frozen status. Fewer elements of the MPE/iX environment were likely to break, since changes did not find their way into the base software. Already a reliable OS, MPE/iX became even more stable in Mature support.

HP-UX is another matter for a legacy manager. Unix, touted as the replacement for HP 3000 datacenters, holds a riveting reputation. Security flaws are a major element in an OS that powers IT so frequently. The more Unix running in the world, the less secure it becomes.

The year 2022 ends HP’s active support for HP-UX, but the shift away from the vendor’s teams isn’t stopping legacy use. This legacy milestone usually arrives while independent support companies take on the vendor accounts relying on the OS. Change is inevitable, but changes to legacy IT are fewer. Losing vendor support may not even mean different experts will take on the work. At VMS Software Inc., some support team members shifted from HPE jobs to work at VSI.

Indies to the rescue

In the HP 3000 marketplace, Beechglen Development took up HP 3000 support, among other companies. Just about the time HP announced in 2011 it was migrating its best HP-UX features to Linux, MPE/iX support from HP ended. Beechglen remains a support resource for legacy IT in both HP 3000 and HP 9000 communities. The company uses Nickel, a program to assess the state of software on an HP-UX server.

This Network Information Collector, Keeper, and Elaborator is “a shell collection script from Hewlett Packard,” Beechglen explains. It’s been maintained and modified through the decades by Beechglen. A NICKEL script runs on HP-UX systems 10.20, 11.0, 11.11, 11.23, and 11.31.

Nickels run as a review and reference for general system health. “The script also provides aid after system events for troubleshooting,” Beechglen adds, “and getting a system back up and running in as little time as necessary.”

Beechglen, Allegro Consultants, and other companies keep supporting legacy environments after the vendor leaves the market. These companies tout expertise from a “team who eats, breathes, and sleeps HP-UX and MPE for 33-plus years, in the most demanding environments anywhere in the world.”

HP’s Unix is entering the era where MPE/iX visited 14 years earlier. Like MPE/iX did, HP-UX has gained an extra year of vendor support. System vendors will continue to collect support dollars until the latest possible date. There’s plenty of value in legacy IT, all through the years after the vendor stops selling it.

Making Emulation Serve Migration

Call-compatible subroutines, utilities help Unix behave like the MPE/iX environment

By Charles Finley

Let's look at methods to migrate applications from HP 3000s. One approach is to make subroutines and routines call-compatible, letting existing business logic work on other platforms.

Various utilities and subroutines can help a migrated application run on the target system. Examples include the automated data structure mapping of KSAM files to an indexed file system of the target computer and the export and import of KSAM data. This can be accomplished, for example, using either the Informix C-ISAM or bytedesign’s D-ISAM file system.

An MPE-compatible print queue manager which also operates on Unix platforms can enhance the limited capability of printer and print job control on Unix systems. This manager can provide an emulation of the MPE spooler in addition to functionality such as:

• Printer management by forms and paper stock

• Operator control and intervention

• Multiple and partial file printing

• Post-submission modification of print job characteristics

• Print file review and display

• Physical and virtual printer support

• Dynamic modification of printer characteristics

• Application Program Interface for direct printer control

• Automatic and transparent network operation

In addition, a batch job manager can provide functionality that is otherwise very limited on a Unix system. The batch job manager presents a centralized and more powerful point of control over the batch environment and gives capabilities very similar to those on MPE.

Porting of non-COBOL applications

This series has emphasized COBOL migration, since that is the primary development language on the HP e3000. It is also possible to port the Fortran, Pascal, C, SPL, RPG, and BASIC third-generation languages, as well as the fourth-generation languages Powerhouse and Speedware.

Fortran 77 and Pascal porting are dependent on the capabilities of the compilers and their runtime libraries on the target platform. Both Fortran and Pascal have some hidden dependencies on the MPE file system that must be addressed prior to porting. C/iX is perhaps the most portable language on the HP e3000.

There are two versions of BASIC on MPE: Business Basic and Basic/3000. Both are somewhat of a challenge to port because they are unique dialects and also have MPE file system and intrinsic dependencies built in. SPL is surprisingly portable thanks to the SPLASH compiler from Allegro Consultants, which is capable of changing SPL to C code. RPG should be translated to either C or COBOL in order to move it to another platform. Finally, products from the fourth-generation language vendors Cognos and Speedware are perhaps among the easiest applications to port: they run on a number of different platforms and their code is quite portable.

It is sometimes desirable to move away from some of the older languages to one that can be more easily enhanced or supported. Translators either exist or can be built to translate many of the third-generation languages to C, C++, Java, or even Visual Basic. Moreover, translators could potentially be built to translate some of the more obscure discontinued languages such as Business Report Writer to C or Java. There has also been some interest expressed in translating Powerhouse or Speedware code to Java or C++.

Optimizing terminal operation and screen management

The standard migration procedure for VPlus form files is to convert them with the same character-based look and feel. Migration of VPlus forms files provides a significant opportunity for application enhancement. The block-mode look and feel of VPlus is an area in which modernization to a GUI and its capabilities can give positive benefits. Several packages are available to accomplish this on the HP e3000. Migration to Unix, however, involves:

• Automated translation of the VPlus forms files to a format readable on the target platform.

• Preservation of the editing specifications for these forms.

• A management utility to maintain and enhance these migrated forms on the target platform.

• A call-compatible library of VPlus intrinsics.

Since the goal is to achieve as close to a 100-percent automated migration as possible, all possible editing specifications and editing constructs must be supported. The call-compatible library of intrinsics must support all VPlus functionality to avoid any manual changes to the COBOL code.

Terminal support on Unix is handled by the termcap and terminfo utilities. The termcap scheme was developed to support vi, the screen-oriented visual display editor (or very interesting editor) for Unix. The termcap file contains descriptions of the features supported by the terminal (how many lines and rows, whether the terminal supports backspace, etc.) and ways to make the terminal perform certain operations (clear the screen, move the cursor to a given location, etc.).

As more and more terminal types appeared, terminfo and its associated curses library were developed. Terminal descriptions in terminfo are essentially compiled versions of a textual description and can be located faster at run time. Again, terminfo performs typical operations (clear the screen, move the cursor) on a wide variety of terminals. The curses library provides functions that give added ability, such as setting raw mode and setting echo on and off.

The limitation of curses is that it was designed for character-based terminals, while today the trend is toward pixel-based graphics terminals supporting graphical user interfaces. Curses screen performance also has some limitations in terms of unnecessary screen clearing and cursor movement.

Screen management technology has been enhanced for character-based terminals to provide high levels of performance. This involves low-level routines to use escape sequences to control cursor positioning, highlighting, graphics and line drawing, and display of data. These routines are designed to maintain logical buffering of before and after images of the screen, as well as optimization of screen attributes, screen positioning, and data display and capture.
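The "before and after images" optimization just described is essentially a screen diff. A minimal sketch in Python (a toy model with a hypothetical function, not a real terminal driver): only cells that changed between the two images are repainted.

```python
# Compare two screen images and report only the cells that changed,
# so the terminal is sent the minimum necessary updates.
def diff_screens(before, after):
    """Yield (row, col, char) for every cell that needs repainting."""
    updates = []
    for r, (old_row, new_row) in enumerate(zip(before, after)):
        for c, (old, new) in enumerate(zip(old_row, new_row)):
            if old != new:
                updates.append((r, c, new))
    return updates

before = ["NAME:    ", "QTY : 001"]
after  = ["NAME: AL ", "QTY : 002"]
print(diff_screens(before, after))  # [(0, 6, 'A'), (0, 7, 'L'), (1, 8, '2')]
```

A real implementation would batch the updates into cursor-positioning escape sequences, but the core saving — three cells repainted instead of eighteen — is the same idea.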

The implementation of a GUI solution for VPlus screens can involve the use of advanced terminal emulators running in a Windows environment as a front end to the character-based IO and screen control routines. With these emulators, it is possible to incorporate capabilities such as hypertext help and copy and paste. Modernization involves the use of window objects such as message boxes, dialogue boxes, text boxes, list boxes, scroll bars, tool bars, and image displays.

The Unix COBOL vendors have integrated GUI screen developers and managers into their compiler products as front ends. This can involve a fair amount of reengineering and time to produce a solution if there are a large number of VPlus forms files to migrate and no automated tool.

Finally, solutions now exist to migrate VPlus forms files automatically to advanced GUI management systems. These are graphical PC-based client-server application development systems. Migrated forms will run as true window clients on PCs under Microsoft Windows or on X terminals and workstations under OSF/Motif. These tools clearly will give the most advanced modernization capabilities and functionality available. With their development interfaces, text/input fields can be replaced with typical graphical user elements. The front end will automatically map the appropriate GUI objects to VPlus screen elements.


Tools are available to migrate existing MPE COBOL II applications to Unix or Windows, as well as more modern COBOL development environments. The use of these tools presents the best opportunity for a successful migration. The suggested strategy is to migrate using automated tools and utilities. This approach minimizes risk and development and deployment time and costs.

If there is a need to enhance the application, it is best to port the entire application as it stands first. This provides an immediate test environment for the newly migrated system and allows testing and future development to proceed in parallel.

Putting Job Queues to Good Use

By Neil Armstrong

Our development environment at Robelle was quite unusual, in that we had a single job stream which launched all of the necessary compiling and testing steps for each product. So if I wanted to compile and run the entire test suite for Suprtool, I just had to issue a single stream command: :stream jrelease.suprtool

If I wanted to run the Qedit test suite, I just had to stream the Qedit jrelease with the command :stream jrelease.qedit

There was a problem, though: the individual job streams that test each product can only run single-threaded. Each job must complete before the next one begins, so we had to keep the job limit set exactly right at all times. It also meant that we couldn't run the Qedit and Suprtool test suites simultaneously.

This has always been a problem, as sometimes people or jobs alter the limit incorrectly, which means that multiple jobs would stream at the same time, causing test jobs to fail and results to be incorrect. This could mean losing a night’s “productive” testing, as the job streams are generally streamed in the early hours of the morning, after the nightly backup. So if you had just made a major change to a module of Suprtool, you wouldn’t know the impact of that change for another day.

Mike Shumko suggested that we try to implement jobq’s for our environment, to address this problem. Without knowing what jobq’s really were, I naturally volunteered for the job in the hope that I could alleviate this dependency we had on the job limit.

What I hoped jobq’s would do

Without even reading about jobq’s, I imagined they were a way to have job streams operate in specified queues, and thus run independently of jobs in other queues and in the main job queue.

The Commands

To start using jobq’s immediately, you need only do two things:
- create the jobq with the newjobq command, and
- change the !job card in your job stream to use the jobq that you built.

So to create the jobq for the Suprtool job streams I used the newjobq command. I only wanted one job to run at a time in this jobq, so I specified a limit of 1:

:newjobq suprtool;limit=1

Then I changed all of the job cards to have the “;jobq=suprtool” at the end:

!job jtest01,user.acct,suprtest;outclass=lp,3,1;inpri=7;jobq=suprtool

Since I have MPEX, I used the MPEX %qedit command to make a global change to all jobs:

%qedit [email protected]@,append “;jobq=suprtool” “!job”

This will open each file that qualifies, and append to the job card line the jobq specification. Voila, Suprtool now could run in its own jobq independently!

There are some other useful commands associated with jobq’s. The listjobq command displays each jobq defined on the system, along with its limit and job counts:

:listjobq

JOBQ        LIMIT    EXEC   TOTAL
HPSYSJQ      3500       8      10
SUPRTOOL        1       0       0
QEDIT           1       0       0
RANDMISC        1       0       0

The purgejobq command (:purgejobq suprtool) will allow you to remove any jobq’s that you’ve defined.
Showjob ;jobq will show you each job along with the jobq in which it is running. A detail line from that output looks like this:

:showjob ;jobq

#J4567  EXEC   10   S   LP    SUPRTOOL WED 11:48A  JTEST01,USER.ACCT

You can also alter the jobq that a job is running in with the altjob command, by typing :altjob #j4567;jobq=qedit

The Practice

Since it took me only a few commands to create and change all the jobq’s on our development system, I had everything changed to take advantage of the jobq’s in short order.

So I tried running the Qedit and Suprtool test suites at the same time. I quickly discovered that each jobq requires that a slot be open in the global jobq in order for a job to run. I found an explanation in the newjobq command documentation:

“The global limit takes precedence over individual queue limits. That is, even if a jobqueue has a slot available, if the overall limit has been reached, jobs have to wait till one of the jobs finish or the global limit is increased. When a global slot becomes available, the next job is picked from among the eligible jobqueues (those which haven’t yet reached their individual limits).”
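That dispatch rule can be illustrated with a small Python simulation. This is only a sketch of the quoted behavior, not HP's actual scheduler; the queue names and limits are illustrative:

```python
# Sketch of the MPE dispatch rule quoted above: a waiting job starts
# only when its own queue is under its limit AND the global job limit
# has a free slot. Queue names and limits are illustrative.

def dispatch(waiting, queue_limits, global_limit, running=None):
    """Start as many waiting jobs as both limits allow, in order."""
    running = list(running or [])
    started = []
    for job, queue in waiting:
        if len(running) + len(started) >= global_limit:
            break  # the global limit takes precedence over queue limits
        in_queue = sum(1 for _, q in running + started if q == queue)
        if in_queue < queue_limits.get(queue, 0):
            started.append((job, queue))
    return started

queue_limits = {"suprtool": 1, "qedit": 1}
waiting = [("jtest01", "suprtool"),
           ("jtest02", "suprtool"),
           ("jq01", "qedit")]

# With a global limit of 2, one suprtool job and one qedit job start;
# the second suprtool job waits on its queue's limit of 1.
print(dispatch(waiting, queue_limits, global_limit=2))
```

Note what happens if the global limit is cut to 1: only the first suprtool job starts, and the qedit job waits even though its own queue is empty — which is exactly the behavior that surprised me.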

I hadn’t expected this; however, it merely meant that I needed to expand the job limit to some huge value. In retrospect, I should probably have created a separate jobq for our regular background jobs, like inetd or the apache webserver. That way I could create a small job to periodically check that queue, to ensure that all the required background jobs are “up,” and take appropriate action if a job has failed.
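Such a monitor might look like the following Python sketch. The job names, the showjob-style output format, and the parsing are all hypothetical, for illustration only:

```python
# Hypothetical sketch of a background-job monitor: given the detail
# lines for a background jobq, report any required job that has gone
# missing. The job names and line format are made up for illustration.

REQUIRED = {"JINETD", "JAPACHE"}  # hypothetical background job names

def missing_jobs(showjob_lines):
    """Parse showjob-style detail lines; return required jobs not running."""
    running = set()
    for line in showjob_lines:
        fields = line.split()
        if fields and fields[0].startswith("#J"):
            # the last field looks like "JOBNAME,USER.ACCT"
            running.add(fields[-1].split(",")[0])
    return REQUIRED - running

sample = [
    "#J101   EXEC   10   S   LP    BACKGRND WED 11:48A  JINETD,MANAGER.SYS",
]
print(missing_jobs(sample))  # JAPACHE is not running
```

A small scheduled job could run a check like this against the background queue and restream anything that has failed.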


To my surprise, I found that after a “start norecovery” on this system, all my jobqs were missing. Again, the newjobq command help revealed what had happened:

“The job queues persist across reboots, provided a START RECOVERY is done. Any other system starts will cause the job queues to be deleted and they will have to be created again. This command is available in a session, job, or in BREAK. Pressing [Break] has no effect on this command. This command is not allowed in the SYSSTART file.”

We have a command file called Startall, which is used to start all of the system jobs, so I put the newjobq commands there to ensure that all of the jobq’s are built with the proper names and limits:

setvar hpmsgfence 2
newjobq suprtool;limit=1
newjobq randmisc;limit=1
newjobq qedit;limit=1

This way I am assured that the jobq’s always exist when we restart the system.


Personally, I have found no unexplained problems with the new jobq feature; however, recent traffic on the 3000-L mailing list did showcase this query from David Knispel:

“We ran out of disk space over the weekend. Now my JOBQs are screwed up. When I do LISTJOBQ for HPSYSJQ, it shows 6 EXEC but only a limit of 3 and total of 3. When I do a SHOWJOB, only 3 jobs show for this queue. I’m having the same problem with other queues also. Any way to fix this without bouncing the system?”

To which Richard Bayly from HP responded:

“The patch you are after is: MPELXC2B - 6.5; MPELXC2D - 7.0; MPELXC2C - 6.0 but superseded by MPELXL7A.”

Are jobq’s for you?

In conclusion, I do find the new jobq feature quite valuable, as it gives me a tool to break up my jobq’s logically if not physically. I am able to manage some groups of jobs more easily and have more jobs working concurrently, getting more out of my HP e3000. If you have problems with concurrency, or having jobs run when they shouldn’t, then perhaps implementing jobq’s is the way to go.