Hidden Value

Where the pieces of OpenMPE have landed

Heading to OpenMPE.com was once an accomplishment. For the first seven years of its lifespan, the open source advocacy group's web address ended in .org; the OpenMPE.com domain was parked, a resource to be used at a later time. The site tracked a list of companies using a 3000 and papers devoted to MPE/iX technology, along with minutes of the monthly meetings OpenMPE held with HP's 3000 division.
 
The group also hosted Invent3k. That public HP 3000 development server was shared outside of HP's labs in the era when Hewlett-Packard was dialing back its 3000 operations. Even today, I could make a case for such a thing: a training platform for the few companies that need to pass their 3000 administration along to a new generation.
 
Until last year, Invent3k still churned away in a datacenter blockhouse near Lake Travis in Austin. The Support Group's Terry Floyd generously hosted the donated hardware. Being old 3000s, the Invent3k servers were power-hungry and virtually unused. Invent3k went offline in 2019 and nobody even noticed.
 
If ever there was something OpenMPE was supposed to do, Invent3k was it. Infighting between the group's directors and a dismissed Matt Perdue, including lawsuits, blew up the group during 2010. During that first year after HP had closed the last bit of its MPE labs, there were many more 3000 sites than today. It was still a time of opportunity.
 
OpenMPE might have been profitable, with some marketing. It’s a lot like a book, in that way. My memoir hasn't earned a profit yet, either. But neither have the VMS wizards at VMS Software Inc. Doing something you love is not enough to make it compensate you. It has other rewards, though, like preserving a legacy.
 
Today when you go to OpenMPE.com you get Web-style crickets: the 404 listing. The community Invent3k server files are now in Keven Miller's hands, and he hasn't re-hosted Invent3k yet. He has rehosted the OpenMPE.com data, though.
 
OpenMPE.org, which became OpenMPE.com after Perdue kept the original domain out of the group's hands, is preserved on Miller's 3kRanger website. It's worth a visit to see the full range of what advocacy proposed for MPE/iX in the years after HP gave up its futures for the OS.

Sorting Strategies for COBOL

By Shawn Gordon

Newswire Classic

How many times have you had some simple data in a table in your program that you wanted to sort? It seems like a waste of time to write it out to a file, sort the file, and read it back in, yet COBOL's SORT verb works on files, not on tables in memory.

I’ve actually gotten a few e-mails recently asking me about the SORT verb and table-sorting strategies, so I thought I would go over it. What I have this month is a simple bubble sort, as well as a more complex but more efficient shell sort. The bubble sort in Figure 1 only requires two counters, one save buffer, and one table-max variable on top of the table data.

Figure 1 (screen capture of the bubble sort listing)

Here's the code in text, if you want to copy and paste, and apply your own formatting.

WORKING-STORAGE SECTION.
01 SAVE-CODE PIC X(04) VALUE SPACES.
01 S1 PIC S9(4) COMP VALUE 0.
01 S2 PIC S9(4) COMP VALUE 0.
01 TABLE-MAX PIC S9(4) COMP VALUE 0.
01 CODE-TABLE.
03 CODE-DATA PIC X(04) OCCURS 100 TIMES.
PROCEDURE DIVISION.
A1000-INIT.
*
* Do whatever steps are necessary to fill CODE-TABLE with the values
* you are going to use in your program. Make sure to increment
* TABLE-MAX for each entry you put in the table.
*
* Now we are going to perform a bubble sort of the table.
*
PERFORM VARYING S1 FROM 1 BY 1 UNTIL S1 = TABLE-MAX
PERFORM VARYING S2 FROM S1 BY 1 UNTIL S2 > TABLE-MAX
IF CODE-DATA(S2) < CODE-DATA(S1)
MOVE CODE-DATA(S1) TO SAVE-CODE
MOVE CODE-DATA(S2) TO CODE-DATA(S1)
MOVE SAVE-CODE TO CODE-DATA(S2)
END-IF
END-PERFORM
END-PERFORM.

As you can see, this is a pretty trivial, easy-to-implement solution for simple tables.
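For readers outside COBOL, the same exchange logic translates directly. Here is a Python sketch of the Figure 1 sort (my illustration, not part of Shawn's column); Python indexes from 0 where the COBOL subscripts start at 1:

```python
def exchange_sort(table):
    """In-place exchange sort mirroring the two COBOL PERFORM VARYING loops."""
    n = len(table)                   # TABLE-MAX
    for s1 in range(n - 1):          # outer loop: S1 walks the table
        for s2 in range(s1, n):      # inner loop: S2 runs from S1 to the end
            if table[s2] < table[s1]:
                # swap via tuple assignment, doing the job of SAVE-CODE
                table[s1], table[s2] = table[s2], table[s1]
    return table
```

Each pass leaves the smallest remaining element in position s1, just as the COBOL version leaves it in CODE-DATA(S1).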


What we have in Figure 2 is a macro that does a shell sort. I got this originally from John Zoltak, and the following text is his, with some slight edits from me.

He says, “When I want to sort the array I use

MOVE number-of-elements to N-SUB.
%SORTTABLE(TABLE-NAME#, HOLD-AREA#).

“Figure 2 below uses the shell sort, which is faster than a bubble sort. Also, since it’s a macro, I can sort any table. The only real constraint is that it compares the whole table element, so you just have to arrange your table element so it sorts the way you want.”

Figure 2 (screen capture of the shell sort macro)

Again, here's the text from the routine for you to copy and paste:

* SHELL SORT ROUTINE
*
* This macro expects parameter 1 to be the element of the
* table to be sorted. This sort compares the entire element.
* Parameter 2 is the element hold area. Can be a higher
* element of the table if you wish.
*
* To use this sort macro, you must COPY it into your program
* in the 01 LEVEL area. Four (4) variables will be declared
* and the $DEFINE for %SORTTABLE will be defined.
*
* Before invoking this macro you must set N-SUB to the
* highest table element to be sorted.
01 I-SUB PIC S9(4) COMP.
01 J-SUB PIC S9(4) COMP.
01 M-SUB PIC S9(4) COMP.
01 N-SUB PIC S9(4) COMP.
$DEFINE %SORTTABLE=
IF N-SUB > 1
MOVE N-SUB TO M-SUB
PERFORM TEST AFTER UNTIL M-SUB = 1
DIVIDE 2 INTO M-SUB
ADD 1 TO M-SUB GIVING I-SUB
PERFORM UNTIL I-SUB > N-SUB
MOVE !1(I-SUB) TO !2
MOVE I-SUB TO J-SUB
SUBTRACT M-SUB FROM J-SUB GIVING TALLY
PERFORM UNTIL J-SUB <= M-SUB OR
!1(TALLY) <= !2
MOVE !1(TALLY) TO !1(J-SUB)
SUBTRACT M-SUB FROM J-SUB
SUBTRACT M-SUB FROM J-SUB GIVING TALLY
END-PERFORM
MOVE !2 TO !1(J-SUB)
ADD 1 TO I-SUB
END-PERFORM
END-PERFORM
END-IF#
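The macro's gap-halving logic is the classic shell sort. For comparison, here is the equivalent in Python (my sketch, not John's code); M-SUB plays the role of the gap:

```python
def shell_sort(table):
    """Shell sort with gap halving, mirroring the %SORTTABLE macro."""
    n = len(table)                     # N-SUB: highest element to sort
    gap = n
    while gap > 1:
        gap //= 2                      # DIVIDE 2 INTO M-SUB
        for i in range(gap, n):        # I-SUB runs from M-SUB + 1 upward
            hold = table[i]            # MOVE !1(I-SUB) TO !2 (the hold area)
            j = i
            # shift larger elements up by one gap, like the inner PERFORM
            while j >= gap and table[j - gap] > hold:
                table[j] = table[j - gap]
                j -= gap
            table[j] = hold            # MOVE !2 TO !1(J-SUB)
    return table
```

As in the macro, the final pass with a gap of 1 is a plain insertion sort over an almost-sorted table, which is where the speed comes from.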


Tips on Using FTP on MPE/iX Systems

By Bob Green

Newswire Classic

Starting with MPE/iX 6.0, it has been very easy to enable the File Transfer Protocol server on your HP 3000. Once enabled, the FTP server makes it possible for you to deliver output to your own PCs, Linux servers, MPE boxes or Unix boxes, even to servers across the world. These can be your servers in other parts of your company, or of your suppliers, or of your customers.

MPE File Attributes

When transferring files from one HP 3000 to another, there is no need to specify the attributes of the file, such as ;rec=-80,1,f,ascii.

MPE keeps track of that for you. When transferring a file to an MPE system from a non-MPE system, or transferring through a non-MPE system, you will need to specify the file attributes on the target MPE system as in:

put mympefile myfile;rec=-80,1,f,ascii;disc=3000
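From a non-MPE client you can do the same thing programmatically; the attribute suffix simply rides along in the STOR argument. Here is a minimal Python sketch using the standard ftplib module (the hostname and logon in the usage note are hypothetical):

```python
from ftplib import FTP

def put_to_mpe(host, user, password, local_path, remote_name,
               attrs=";rec=-80,1,f,ascii;disc=3000"):
    """Upload a file to an MPE/iX FTP server. The MPE server parses
    the ;rec=/;disc= suffix on the remote name as file attributes."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "rb") as f:
            ftp.storbinary("STOR " + remote_name + attrs, f)
```

A hypothetical call would look like put_to_mpe("mpe.example.com", "user", "secret", "myfile.txt", "MYMPEFILE"), matching the put command shown above.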

The default file attributes can be specified for a file transferred to your MPE system by changing the corresponding line in the file BLDPARMS.ARPA.SYS which is shown below:

;REC=-80,,F,ASCII;DISC=204800
;REC=-256,,V,BINARY;DISC=204800
;REC=,,B;DISC=16384000

Only the first three lines are read; everything after is ignored.

You may modify the first three lines as long as you keep the same syntax, i.e., you may change the numbers, or F to V, but don’t add anything bizarre. Anything after a space is ignored, so don’t insert any spaces. If the file is missing (or any line in it), the old hard-coded defaults will be used as a backup. These are:

;REC=-80,,F,ASCII;DISC=204800 for ASCII mode,
;REC=-256,,V,BINARY;DISC=204800 for binary mode.
;REC=,,B;DISC=16384000 for byte stream mode.

Also, if either the REC= part or the DISC= part of any line has bad syntax, the hard-coded default for that part will be used.

Users may make local copies of this file and set their own defaults via a file equation:

:file bldparms.arpa.sys=myfile

Host Commands

You can execute commands on your local 3000 by putting a colon in front of your command such as:

ftp> :listf ,2

You can find out what commands you can execute remotely with the remotehelp command.

Typically we just stream jobs on the remote system with FTP’s site command by doing the following in the FTP client:

ftp> site stream robelle.job.robelle

200 STREAM command ok.

Site is a standard FTP command, but what host commands the FTP server at the other end supports varies from server to server.

In fact, the Qedit for Windows server installation has its own FTP client, which FTPs to the 3000 and streams the “robelle” job to set the attributes of the Robelle account.

Filenames

On MPE, the default namespace for a given file is typically the MPE namespace. For example, if you put a file to your MPE system with the following FTP command:

put myfile mympefile

The file will go to the group you are currently logged into.

If you want to put files into the HFS namespace, you can just specify the name using typical POSIX notation:

put myfile /MYACCOUNT/MYGROUP/mydirectory/myhfsfile


MPE file equations and Unix equivalents

The HP 3000 and MPE employ a unique tool to define the attributes of a file: file equations, a 3000 specialty. Robelle calls these "commands that redefine the attributes of a file, including perhaps the actual filename."

In any migration away from HP 3000s (an ill-advised move at the moment, considering the COVID-19 Crisis) managers must ensure they don't lose functionality. Unix doesn't have file equations. Customers need to learn how to make Unix's symbolic links report the information that 3000s deliver from a LISTEQ command.

3000 managers are used to checking file equations when something mysterious happens with an MPE file. Dave Oksner of 3000 application vendor Computer And Software Enterprises (CASE) offered the Unix find command as a substitute for file equations. You need to tell find to only process files of type "symbolic link."

Oksner's example of substituting find for LISTEQ:

find /tmp/ -type l -exec ls -l {} \;

which would start from the /tmp directory, look for symbolic links, and execute “ls -l” on the filenames it finds. You could, of course, eliminate the last part if you only wanted to know what the filenames were and get

find /tmp/ -type l

(I believe it’s the same as using ‘-print’ instead of ‘-exec [command]’)

Beware of output to stderr (if you don’t have permission to read a directory, you’ll get errors) getting interspersed.

Jeff Vance added that the command interpreter in MPE also can deliver file information through a listfile command:

Don't forget the CI where you can do:

:listfile @,2;seleq=[object=symlink]

:help listfile all shows other options.

Our former Inside COBOL columnist and product reviewer Shawn Gordon offers his own MPE vs. Unix paper, and Robelle's experts wrote a NewsWire column contrasting Unix shell scripts with MPE tools.

Image by Gerd Altmann from Pixabay


Making CI variables more Unix-like

By John Burke

Newswire Classic

How can you make CI variables behave more Unix-like?

For those of us who grew up on plain old MPE, CI variables were a godsend. We were so caught up in the excitement of what we could do with CI variables and command files, it took most of us a while to realize the inadequacy of the implementation. For those coming to MPE/iX from a Unix perspective, CI variables seem woefully inadequate. There were two separate questions from people with such a Unix perspective that highlighted different “problems” with the implementation of CI variables.

But first, how do CI variables work in MPE/iX? Tom Emerson gave a good, concise explanation.

“SETVAR is the MPE/iX command for setting a job/session (local) variable. I use ‘local’ somewhat loosely here because these variables are ‘global’ to your entire job or session and, by extension, are automatically available to any sub-processes within your process tree. There are some more-or-less ‘global’ variables, better known as SYSTEM variables, such as HPSUSAN, HPCPU, etc.”

The first questioner was looking for something like user-definable system variables that could be used to pass information among separate jobs/sessions. Unfortunately, no such animal exists. At least not yet, and probably not for some time if ever.

There is, however, a workaround in the form of UDCs created by John Krussel that implement system, account and user-level variables. The UDCs make use of the hierarchical file system (HFS) to create and maintain “variables.”

The second questioner was looking for something comparable to shell variables which are not automatically available at all levels. You have to export shell variables for them to be available at lower levels. Thus, there is a certain locality to shell variables.

It was at this point that Jeff Vance, HP’s principal CI Architect at CSY, noted that he had worked on a project to provide both true system-wide and local CI variables (in fact, the design was complete and some coding was done). Jeff offered a suggestion for achieving locality.

Variable names can be created using CI depth, PIN, etc. to try to create uniqueness. E.g.,

setvar foo!hppin value

setvar foo!hpcidepth value1

Mark Bixby noted that CI variables are always job/session in scope, while shell variables are local, but inherited by children if the variables have been exported. He suggested that if, working in the CI, some level of locality could be achieved by “making your CI script use unique variable names. If I’m writing a CI script called FOO, all of my variable references will start with FOO also, i.e.

SETVAR FOO_XX “temp value”

SETVAR FOO_YY “another value”

...

DELETEVAR FOO_@

“That way FOO’s variables won’t conflict with any variables in any parent scripts.”

HP has a formally documented recommendation for creating “local-ness.” 

MPE: How to create CI variables with local (command file) scope

Problem Description: I have separate command files that use the same variable names in them. If one of the command files calls the other, then they both affect the same global variable with undesirable results. Is there the concept of a CI variable with its scope local to the command file?

Solution: No. All user-defined CI variables have global (JOB/SESSION) scope. Some HP Defined CI variables (HPFILE, HPPIN, HPUSERCMDEPTH) return a different value depending on the context within the JOB/SESSION when they are called.

HPFILE returns the fully qualified filename of the command file.

HPPIN returns the PIN of the calling process.

HPUSERCMDEPTH returns the command file call nesting level.

To get the effect of local scope using global variables, you need a naming convention to prevent name collisions. There are several cases to consider.

Command file CMDA calls CMDB, both using varname VAR1.

• Use a hardcoded prefix in each command file.

In CMDA use: SETVAR CMDA_VAR1 1

In CMDB use: SETVAR CMDB_VAR1 2

• Use HPFILE.

SETVAR ![FINFO(HPFILE,"FNAME")]_VAR1 1

• Use HPUSERCMDEPTH.

SETVAR D!"HPUSERCMDEPTH"_VAR1 1 (Note: a leading non-digit is needed)

Command file CMDA calls itself, uses varname VAR1.

• Same answer as case 1, the third solution: use HPUSERCMDEPTH.

There are two son processes. Each one calls CMDA which calls CMDB at the same nesting level.

• Same answer as case 1, the third solution: use HPUSERCMDEPTH. It is not certain this will work, since it is unclear whether HPUSERCMDEPTH is reset at the JSMAIN, CI, or user process level.

• Use HPPIN and HPUSERCMDEPTH.

SETVAR P!"HPPIN"_!"HPUSERCMDEPTH"_VAR1 1

• Use HPPIN, HPUSERCMDEPTH and HPFILE (guaranteed unique, hard to read).

SETVAR P!"HPPIN"_!"HPUSERCMDEPTH"_![FINFO(HPFILE,"FNAME")]_![FINFO(HPFILE,"GROUP")]_![FINFO(HPFILE,"ACCT")]_VAR1 1

Again, there is no true local scope, only global scope for CI variables within any one session/job. The techniques presented above do provide at least a reasonable workaround for both system-wide and process-local variables.
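The collision problem, and the prefix workaround, are easy to model in any language that has one flat namespace. A small Python illustration (my analogy, not HP's) of the depth-prefixed naming convention:

```python
# One flat, session-global namespace, like the CI variable table.
ci_vars = {}

def setvar_scoped(script, depth, name, value):
    """Emulate SETVAR with the HPUSERCMDEPTH-style naming convention:
    prefix each 'local' variable with its script name and call depth."""
    ci_vars[f"{script}_D{depth}_{name}"] = value

def getvar_scoped(script, depth, name):
    """Emulate reading a variable through the same prefix scheme."""
    return ci_vars[f"{script}_D{depth}_{name}"]

# CMDA (depth 1) and CMDB (depth 2) can now both use VAR1 safely:
setvar_scoped("CMDA", 1, "VAR1", 1)
setvar_scoped("CMDB", 2, "VAR1", 2)
```

Both "VAR1" entries land in the one shared dictionary, but the prefixes keep them from colliding, which is exactly what the SETVAR techniques above achieve.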


Large Disk patch delivers 3000 jumbo limits

As the HP 3000 was winding its way out of the HP product lineup, it gained a greater footprint. Storage capabilities grew with the rollout of Large Disk. The effort was undertaken because HP's disk module sizes were doubling in size approximately every two years: 4 GB to 9, 18, 36, 73, 146, and then 300 GB.

The disk project might have never seen its limited release without OpenMPE. The advocacy group that was formed after HP's exit announcement saw the same disk size trend. OpenMPE drove the initiative of "Support future large disks" in the Interex 2003 Systems Improvement Ballot.

Just two years later, Interex was dead. The directive from the 3000 community to HP's labs lived on, though. HP said its investigation found more work needed to be done to support large disk configurations.

The MPE/iX 6.5 Large File enhancement allowed bigger files. 6.5 also permitted more disk space in each MPE group and account. But several CI commands and utilities were limited in their ability to work with the resulting larger groups and accounts. All of these inputs were assessed during the Large Disk investigation, and as many as possible were addressed by the Large Disk patches.

So what does Large Disk deliver? The patches provide the following enhancements for MPE/iX 7.5:

• Large Disk includes the ability to attach, and use SYSGEN to configure, any size of SCSI-2-compliant disk. MPE/iX uses the SCSI-2 protocol to connect to SE, HVD, and LVD SCSI disks, as well as SCSI over Fibre Channel. The SCSI-2 standard allows for disks of up to 2 terabytes. SCSI-3 disks may be larger, but will only report up to 2 terabytes of storage for SCSI-2 format inquiries.

• Large Disk includes the ability to initialize an MPE/iX disk volume of up to 512 gigabytes on SCSI-2-compliant disks. SCSI-2 disks larger than 512 GB are truncated at the 512 GB limit. No matter how big the disk, HP reported, the space beyond 512 GB will not be usable by MPE/iX or any applications.

There are limits to how much Large Disk space is available. An MPE/iX disk volume includes disk-resident OS data structures that use some disk space, so no more than 511 GB of user file space should be expected.

• Large Disk includes a number of opportunistic enhancements to MPE Command Interpreter commands and utility programs to 'smooth' user experience when dealing with large disks, large groups and large accounts. These commands and utilities are REPORT, :[ALT|LIST|NEW][GROUP|ACCT], FSCHECK, and DISCFREE.

HP strongly advises installing all of these patches at the same time using Patch/iX. The Large Disk Patches are:

• MPEMXX8(A) -- FSCHECK.MPEXL.TELESUP
• MPEMXU3(B) -- [ALT|LIST|NEW][ACCT|GROUP]
• MPEMXT3 -- SCSI Disk Driver Update
• MPEMXT4 -- SSM Optimization (>87 GB)
• MPEMXT7 -- DISCFREE.PUB.SYS
• MPEMXU3 -- REPORT
• MPEMXV2(A) -- CATALOG.PUB.SYS
• MPEMXW9(A) -- CIERR.PUB.SYS, CICATERR.PUB.SYS

Image by pixel1 from Pixabay


Adding a naked Seagate drive to a 3000


James Byrne reported on a way to get a Seagate disk drive to mount in a Series 918. 

We have a 918LX that we are trying to configure as a spare. The unit has three 18 GB disc drives installed, Seagate model ST31841N. We can see the drives in Mapper at 52.56.6/5/4. We can use DISCUTIL to mount 52.56.4 and 52.56.6, but we cannot get the drive at 52.56.5 to mount.

This problem drive is a new unit just removed from its factory packaging.

Naked Seagate SCSI drives require a low-level format to a sector size of 512 for the HP 3000 to mount them. We have a Windows-based tool from Seagate called SeaTools that can perform this formatting from a Windows host, at least from a host that has a suitable 50N SCSI interface card installed.

The same thing can be accomplished by doing a full install of MPE/iX from tape to such a disc. Installing MPE/iX directly to the drive we could not mount solved the problem.


What good are Nike arrays?

By John Burke

3000 users still can employ used HP Nike Model 20 disk arrays. There was once a glut of these devices on the market — meaning they were inexpensive — and they work with older models of HP 3000s. Here's a note from one company using these Nike arrays.

"We’re upgrading from a Model 10 to a Model 20 Nike array. I’m in the middle of deciding whether to keep it in hardware RAID configuration or to switch to MPE/iX mirroring, since I can now do it on the system volume set. It wasn’t in place when the system was first bought, so we stayed with the Nike hardware RAID. We’re considering the performance issue of keeping it Nike hardware RAID versus the safety of MPE Mirroring. You can use the 2nd Fast-Wide card on the array when using MPE mirroring, but you can’t when using Model 20 hardware RAID.

So, with hardware RAID, you have to consider the single point of failure of the controller card. If you ‘split the bus’ on the array mechanism into two separate groups of drives, and then connect a separate controller to the other half of the bus, you can’t have the hardware-mirrored drive on the other controller. It must be on the same path as the ‘master’ drive, because MPE sees them as a single device.

Using software mirroring you can do this, because both drives are independently configured in MPE. Software mirroring adds overhead to the CPU, but it’s a tradeoff you have to decide to make. We are evaluating the options, looking for the best combination (in our situation) of efficiency, performance, fault tolerance and cost."

Note: Mirrored Disk/iX does not support mirroring of the System Volume Set – never did and never will. Secondly, you most certainly can use a second FWSCSI card with a Model 20 attached to an HP 3000.

All of the drives are accessible from either controller but of course via different addresses. Your installer should set the DEFAULT ownership of drives to each controller. To improve throughput, each controller should share the load. Only one controller is necessary to address all of the drives, but where MPE falls short is not having a mechanism for auto-failover of a failing controller.

In other words, SYSGEN reconfiguration would be necessary to run on a single controller after an SP failure in a dual-SP configuration. You could keep alternate configurations stored on your system to cover both cases of a single failing controller, but the best solution is to get it fixed when it breaks. The good news is that SP failures are not very common.

There is a mechanism in MPE for ‘failover’ called HAFO (High Availability FailOver). It is only supported with XP and VA arrays, not on Nikes or AutoRAIDs, because it does not work with those devices.

Andrew Popay provided some personal experience.

"We have seven Nike SP20 arrays, totaling 140 discs spread across all the arrays, using a combination of RAID 1 (for performance) and RAID 5 (for capacity). We use both SP’s on all arrays, with six arrays used over three systems (two per system). One of our systems has two arrays daisy-chained. The only failures we have suffered on any of the arrays have been due to a disc mechanism failing.

"We never find any issues with the hardware raiding; in fact, as a lot of people have mentioned, hardware raiding is much preferred to software raiding. Software raiding has several issues: system volume, performance, ease of use, etc. Hardware raiding is far more resilient.

"As for anyone concerned about single points of failure, I would not worry too much about the Nike arrays; I would say they are almost bulletproof. Those who require a 24x7 system and can't afford any downtime whatsoever should maybe consider upgrading to an N-Class with a VA or XP. Bottom line: SP20s are sound arrays on the HP 3000, easy to configure, set up and maintain."


Using RAID5 on an HP 3000

By Gilles Schipper
Homesteading Editor

RAID storage, including a low-cost MOD20 array, can improve a 3000's performance. Here are a few things to consider if you will be acquiring a MOD20.

Although possible, I would not recommend utilizing RAID5 LUNs in an HP 3000 environment — unless your greatest priority is to maximize disk space availability at the expense of performance.

RAID5 offers fail-safe functionality over a group of disks (a minimum of three) by allocating the equivalent of one disk of the group to parity. The benefit of RAID5 over RAID1 is a greater amount of overall usable disk space. However, it performs poorly in an HP 3000 environment, and a RAID5 LUN cannot be booted from if specified as the system disk (LDEV 1).

Although the supported maximum memory configuration of each Storage Processor (SP) unit is 64MB, 128MB works best (although not all of it can be used).

Each SP has 4 memory slots. You can maximize the performance of the MOD20 by populating each SP with four 32MB memory SIMMs, 72-pin, FPM with parity, 60ns.

The Nike MOD20 is a very capable and useful alternative to a fragile JBOD environment, particularly because most 3000 JBOD disk systems tend to be very mature and consequently relatively unreliable and prone to failure.

And, although the MOD20 disk system itself is also quite long in the tooth, it’s got built-in fail-safe mechanisms. Also, the MOD20 would appeal to those with very limited budgets, since the devices are quite inexpensive in the used-equipment market.

There are other, more advanced RAID systems that also support the HP 3000 environment. These include the HP Autoraid12H system, various VA7400 systems, some of the HP XP-family members, as well as EMC systems. This list is in order of increasing cost, for the most part.

The bottom line: if you are not already utilizing RAID technology for your 3000, now would be a good time to consider it seriously.


MPE/iX Command File Scripts Explained

By Ken Robertson

The MPE/iX command interpreter has a generous command set, pushing the shell into the realm of a true programming tool. Its ability to evaluate expressions and to perform I/O on files allows the end-user to perform simple data-processing functions. The CI can be used to solve complex problems. Its code, however, is interpreted, which may cause a CI solution to execute too slowly for practical purposes.

Command files are a collection of commands in flat files, of either variable or fixed length record structure, that reside in the MPE or POSIX file space. Basically, command files are what you could call MPE Macros. Anything that you can do in the CI interactively, you can do with command files, and then some. You can use command files in situations that call for repetitive functions, such as re-compiling source code, special spooler commands, etc. Command files are also great when you want to hide details from the end-user.

A command file is executed when its name is typed in the CI, or invoked from a command file or programming shell. Just as in program execution, the user’s HPPATH variable is searched to determine the location of the command file.

MPE Scripts Versus Unix Scripts

For the average task, the MPE scripting language is easier to read and understand than most Unix scripts. For example, command line parameters in MPE have names, just like in regular programming languages.

Of course, there are several script languages on Unix and only one on MPE! On Unix you can write shell scripts for any of the many shells provided (C shell, Bourne shell, ksh, bash, etc). Although there is also now a Posix shell on MPE, most scripts are written for the CI. Several third-party tools, such as Qedit and MPEX, emulate HP scripting and integrate it with their own commands.

A command file can be as simple as a single command, such as a Showjob command with the option to only show interactive sessions (and ignore batch jobs):

:qedit
/add
1      showjob job=@s
2      //
/keep ss
/e
:

You have created a command file called SS; when you type SS, you will execute showjob job=@s.

On MPE, the user needs read (r) or execute (x) access to SS. On Unix you normally must have x access, not just r access, so you do a chmod +x on the script. This is not necessary in MPE, although, if you don’t want users to see the script, you may remove read access and enable execute access.

Structure of a Command File (aka CI script)

A script is an ASCII file with records of at most 511 bytes. Unlike Unix, the records may contain an ASCII sequence number in the last 8 columns of each line. The command file consists of three optional parts:

1. Parameter line with a maximum of 255 arguments:
parm sessionnumber
parm filename, length=”80”

2. Option lines:
option nohelp,nobreak
option list

3. The body (i.e., the actual commands):
showjob job=!sessionnumber
build !filename;rec=-!length,,ascii

In MPE scripts there is no inline data, unlike the 'here documents' of Unix shells.
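For comparison, the parm line's named parameters with defaults map naturally onto declared arguments in other languages. Here is a Python sketch of the filename/length example using argparse (my analogy, not part of the column):

```python
import argparse

def build_parser():
    """Mirror the two parm lines: a required filename and an
    optional length that defaults to "80"."""
    p = argparse.ArgumentParser()
    p.add_argument("filename")                          # parm filename
    p.add_argument("length", nargs="?", default="80")   # parm length="80"
    return p

# Parse a sample command line, the way the CI binds parm values:
args = build_parser().parse_args(["myfile"])
```

As with CI parms, a caller can supply the length positionally or let the default stand; the values arrive untyped, as strings, just as in the CI.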

Parameters

Notice in the example above that parameters are referenced with an exclamation point (!), as opposed to the $ in Unix. The same is true for variables. Parameters are separated by a space, comma or semicolon. All parameter values are un-typed, regardless of quoting.

In a typical Unix script, the parameters are referenced by position only ($1, $2, $3, …). In an MPE script, the parameters have names, as in a regular programming language, and can also have default values. In Unix you use $@ for all of the parameters as a single string; in MPE you use an ANYPARM parameter to reference the remainder of the command line (it must be the last parameter).

Here is a script to translate “subsys” and “err” numbers from MPE intrinsics into error messages. The subsys and error numbers are passed in as parameters:

parm p_subsys=108,p_error=63
setvar subsys hex(!p_subsys)
setvar error hex(!p_error)
comment the hex conversion allows for negative numbers
comment the #32765 is magic according to Stan!
setvar cmd "wl errmsg(#32765,!subsys);wl errmsg(!error,!subsys);exit"
debug !cmd

As you can see above, the Setvar command assigns a value to a parameter or to a new variable. But there are also system pre-defined variables. To see them all, do Showvar @;hp. To get information on variables, do help variable; to get help on a specific variable, say hpcmdtrace, do help hpcmdtrace (set it TRUE for some debugging help).
In most MPE commands, you must use an explicit exclamation point (!) to identify a variable: build !filename

However, some MPE commands expect variables and thus do not require the explicit !. Examples are Setvar, If, ElseIf, Calc, and While, as well as all function arguments and anything inside ![expressions].

Warning: variables are “session global” in MPE. This means that if a child process or script changes a variable, it remains changed when that child process terminates. In Unix you are used to the idea that the child can do whatever it likes with its copy of the variables without worrying about any external consequences.

Of course having global variables also means that it is much easier to pass back results from a script! And this is quite common in MPE scripts.

Options

Options allow you to list the commands as they are executed (option list), disable the Break key (option nobreak), enable recursion (option recursion), and disable help about the script (option nohelp).

The script body below shows active process information. This example shows many of the commands commonly used in scripts: If, While, Pause, Setvar, Input and Run. Other commands you will see are Echo, Deletevar, Showvar, Errclear.

WHILE HPCONNSECS > 0
    IF FINFO("SQMSG",0)
       PURGE SQMSG,TEMP
    ENDIF
    BUILD SQMSG;REC=-79,,F,ASCII;TEMP;MSG
    FILE SQMSG=SQMSG,OLDTEMP
    SHOWQ;ACTIVE >*SQMSG
    SETVAR PINLIST ""
    WHILE FINFO("SQMSG",19) <> 0
         INPUT SQLINE < SQMSG
         IF POS("#",SQLINE) <> 0 THEN
           SETVAR PIN RTRIM(STR(SQLINE,47,5))
           SETVAR PINLIST "!PINLIST" + "," + "!PIN"
         ENDIF
    ENDWHILE
    IF FINFO("SPMSG",0)
       PURGE SPMSG,TEMP
    ENDIF
    BUILD SPMSG;REC=-79,,F,ASCII;TEMP;MSG
    FILE SPMSG=SPMSG,OLDTEMP
    SETVAR PROC "SHOWPROC PIN="+"!PINLIST"+";SYSTEM >*SPMSG"
    !PROC
    WHILE FINFO("SPMSG",19) <> 0
         INPUT SPLINE < SPMSG
         IF POS(":",SPLINE) <> 0 THEN
           ECHO !SPLINE
         ENDIF
    ENDWHILE
    PAUSE 30
ENDWHILE

Handling Errors

In most Unix scripts, if a step fails, you check for an error with an If-conditional and then take some action, one of which is ending the script. Without an If, the script continues on, ignoring the error.

In MPE, the default action when a step fails is to abort the script and pass back an error. To override this default, you insert a Continue command before the step that may fail. You then add If logic after the step to print an error message and perhaps Return (back 1 level) or Escape (all the way back to the CI).

     errclear
     continue
     build newdata
     if cierror <> 0 and cierror <> 100 then
        print "unable to build newdata file"
        print !hpcierrmsg
        return
     else
        comment - cierror 100 is a duplicate file, which is okay
     endif

You can set HPAUTOCONT to TRUE to continue automatically in case of errors, but this can be dangerous. The default behavior at least lets you know if an unexpected problem occurs.

User Defined Commands (UDC)

UDCs are like Command File scripts, except that several are combined in a single “catalog” file. They are an older feature of MPE, so you may see them in older applications even when scripts seem like a better solution. The primary reason that they are still useful is that they support Option Logon, which invokes the command when a user logs onto the system.
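A minimal sketch of a UDC catalog file entry (the UDC name and messages are hypothetical). Each entry starts with the command name, and an asterisk line ends it:

    GREET
    OPTION LOGON, NOBREAK
    ECHO Welcome, !HPUSER - today is !HPDATEF
    *

Activate the catalog with SETCATALOG; because of Option Logon, GREET then runs automatically each time the user logs on.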

More Information

Tim Ericson’s collection of UDCs and Command files has recently been resurrected and re-published in the public domain at www.3kassociates.com/index_cmd.html

Image by fancycrave1 from Pixabay


Listserv still serving advice after 26 years

Bank vault safety deposit boxes
The 3000-L Listserv repository is the HP 3000 resource that's been in the longest continuous use for the MPE/iX ecosystem. HP had a Jazz website for about 13 years, content that was carried over to Fresche Legacy's servers once HP's labs closed. 3000-L was online for about a year or so before the NewsWire entered the Web.

The content on the 3000-L was a big reason I believed we could do a monthly HP 3000 newsletter. We curated and learned from it, sharing that education and advice with readers. Even after 26 years, 3000-L can be searched for answers that go back to the era of MPE/iX 4.0.

That repository is full of history about the people who have created the MPE ecosystem, too. With enough patience, most answers can be found hiding in the hundreds of thousands of email messages. All are logged by subject line, and 3000-L can be searched within date ranges, too.

3000-L was once so robust that we could publish a column about its gems once a month as part of the first 10 years of the NewsWire. 

The columns are archived in our 1996-2005 pages. We called them NetDigest, and for a while they were written by John Burke, who helped us found the NewsWire with his knowing voice and deep technical experience.

For the source material for those columns, refer to the 3000-L search panel.

For the columns, refer to the Tech Page of the '96 - '05 issues. Once you arrive at the Tech Page, just do a search within the page for the phrase net.digest. We've got 106 columns there.

Photo by Jason Pofahl on Unsplash 


How to Encrypt 3000 Log-on Passwords

Padlock
NewsWire Classic

Is there a way to encrypt MPE logon passwords to keep auditors satisfied that the HP 3000 is secure? We need to show that they cannot be easily read with the ;pass parameter (i.e. listuser xxx.yyy;pass)

The replies generated one of the longest threads of the month on the 3000-L.

Tracy Johnson offered an opinion that “the answer to your auditors is not in encrypting passwords. The answer lies in restricting AM and SM capability to only those key personnel who can use the “;pass” parameter within established policy. AM and SM capability also presumes the same capability to change another user’s password, and therefore also the ability to look it up.”

Chris Boggs reported in a virtual testimonial that “Our auditors were not satisfied by even limiting SM and AM capabilities to only two individuals (both in our department). Since we had Vesoft's Security/3000 already, I changed our regular logon ID’s to use the Vesoft password which is encrypted.

"There are other features in Vesoft security which are handy when dealing with auditors such as password obsolescence, password “history,” minimum password standards, inactivity logouts, day/time restrictions, automatic deactivation of logonID’s after a certain number of failed logon attempts, and probably a few others.”

Bradmark’s Jerry Fochtman said some Interex Contributed Software Library routines can help. “I developed a routine to return the passwords for user/group/account (based upon caller’s capabilities) during this time. It also signaled if the password was encrypted, simply returning blanks in this case. There was another routine which given a password, would encrypt it based upon HP’s approach and tell the caller if the entered password matched the one in the system directory.”

Fochtman also took note of the Vesoft abilities and added his humble opinion on the security solution from Monterrey Software, “SAFE/3000. It also utilizes one-way encryption for its passwords. And in terms of strictly security, it is a better tool in several areas, such as network security.”

Michael Gueterman, whose company Easy Does It Technologies does pre-audits for 3000 sites, added notes on using only session-level passwords.

“That’s fine for some things, but I still recommend keeping at least MPE Account passwords in place for all but the most “open” areas. For accounts with SM or PM, I also recommend MPE User passwords as well. Also, when at all possible, explicitly define what people are ALLOWED to access, instead of using generic wildcards. Wildcards make auditors unhappy, and an unhappy auditor is dangerous!”

Image by meineresterampe from Pixabay


Set a Watch for Jobs That Hang Others

Guard tower
Jobstreams deliver on the HP 3000's other promise. When the server was introduced in the early 1970s it promised interactive computing, well beyond the powers of batch processing. Excellent, said the market. But we want the batch power, too. Running jobs delivered on the promise that a 3000 could replace lots of mainframes.

Decades later, job management is still crucial to a 3000's success. Some jobs get hung for one reason or another, and the rest of the system's processing is halted until someone discovers the problem job and aborts it. When it happens over a weekend, it's worse: you can come in Monday and see the processing waiting in queue for that hung-up job to finish.

Is there a utility that monitors job run time, so that it can auto-abort such jobs after X number of hours? Nobix sells JobRescue, a commercial product for "automatically detecting errors and exception messages; JobRescue eliminates the need for manual review of $STDLISTs, making batch processing operations more productive."

Then there's Design 3000 Plus. The vendor still has a working webpage that touts JMS/3000, a job management system that was at one time deployed at hundreds of sites. Its powers include "automatic job restart and recovery. Whenever a job fails, a recovery job can be initiated immediately."

The home-grown solutions are just waiting out there, though, considering how few 3000 sites have a budget for such superior software. Mark Ranft of Pro3K shared his job to check on jobs. The system does a self-exam and reports a problem.

Continue reading "Set a Watch for Jobs That Hang Others" »


Values hidden by time get revealed by vets

Brass treasure key
Photo by Michael Dziedzic on Unsplash

Twenty-four years ago we started unlocking Hidden Value for HP 3000s: Commands that only the veterans know, plus the processes that have been plumbed to bypass MPE's blind alleys.

Some of the value is specific to a 3000 process like using EDIT/3000. It's antique, that editor, but it's on every HP 3000.

I use cut and paste with EDIT/3000 to enter data to batch files.  It works well except that I am limited by the size of the scratch file. Can I change the size of this file so I can paste more at a time?

Immediately after entering Editor, enter "set size=######" to give yourself more space.
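A quick sketch at the CI and Editor prompts (the size value is illustrative):

    :EDITOR
    /SET SIZE=30000

Issue the SET immediately after entering Editor, before adding text; then paste into /ADD as usual.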

For other tasks, like finding forgotten passwords, and keeping them fresh and the 3000's data secret, more elaborate answers have surfaced.

A system manager pitched his plight.

"My operator, in his infinite wisdom, decided to change passwords on manager.sys.  Of course he forgot, or fingerchecked... I don’t know.  At any rate I need some help. Any suggestions, other than a blindfold and cigarette?"

Several versions of help involved the use of utilities from security experts VEsoft. "Do you have the GOD program on your system? If so, it has PM capability, and so it can give the user who runs it SM capability. So it will allow you to do a LISTUSER MANAGER.SYS;PASS

(That's why GOD should be secured, by the way. A randomized lockword will do the job, visible only to users who have SM capability. When VEsoft installs MPEX, for example, it installs a randomized password to MGR.VESOFT, and to GOD.PUB.VESOFT.)

Paul Edwards, ever a source for HP 3000 training, ran through the backstop methods every system manager should practice to avoid such a dilemma.

1. You run BULDACCT prior to each full backup so you can look in BULDJOB1 for the passwords 
2. You have another user on the system with SM capability and a different password as a backup in case this happens  
3. Your operator used LISTUSER MANAGER.SYS;PASS just after changing the password to verify the accuracy as spelled out in the Operations Procedures section in your Systems Manager Notebook   
4. You have a Systems Manager Notebook

Then Duane Percox of K-12 app vendor QSS opened up a clever back door:

If your operator can log onto operator.sys:
file xt=mytape;dev=disc
file syslist=$stdlist
store command.pub;*xt;directory;show

Using your favorite editor or other utility, search for the string "ALTUSER MANAGER  SYS". You will notice PAS=, followed by <passwd>, which is your clue.


Keep Passwords Fresh on 3000s: Methods

Fresh bread
Photo by Clem Onojeghuo on Unsplash

It's usually a good practice to keep passwords fresh. A 3000 development manager once posed a question about how to do this while staying inside the bounds of MPE/iX. He had the usual limited budget associated with HP 3000 ownership.

"Management wants users to be forced to change their passwords on a regular basis. Also, certain rules must be applied to the new password. I don't have budget for the good tools for this, like Security/3000, so I need to write something myself, or see if there's any contributed code to do the job."

Homegrown and bundled solutions followed. When Jeff Vance worked in the 3000 lab at HP, he offered a pseudo-random password generator as a solution. It's on the HP Migrations webpage hosted by the company formerly known as Speedware. These HP Jazz solutions that used to be on the HP website are still available at Fresche Solutions.

There are UDCs on Jazz which force a password to be supplied when using NEWUSER, NEWACCT and NEWGROUP CI commands. These required passwords can be random (uses the script above) or user entered with a minimal length enforced.

Then Vance added as an afterthought, a strategy to program your own password system:

I haven’t thought about it much, but it seems you could have a password file (maybe a CIRcular file?) for each user on the system. This file would have their last N passwords, and the modified date of the file would be the date their password was most recently changed.

A logon UDC could detect if the password file for that user exists. If not, create it and require a new password right then. If the password file exists, then get its modified date and compare that to today's date. If greater than X days, then in a loop prompt for a new password. Validate the entered password against the previous N passwords and your other rules. Maybe run a dictionary-checking program to make sure the password is not common, etc.

Update the user-specific password file with their new password, and then logon the user.
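A heavily hedged sketch of that logon check as a UDC; the file naming scheme, the age threshold, and the history validation are all hypothetical placeholders for the logic Vance describes:

    PWCHECK
    OPTION LOGON, NOBREAK
    if not finfo("PW!HPUSER", 0) then
       echo No password history file exists for !HPUSER
       comment -- create it and require a new password right here
    else
       comment -- compare the file's modify date to today; if the
       comment -- password is older than X days, loop on a prompt
       comment -- for a new one and validate it against the history
    endif
    *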

From the user community, Donna Hofmeister weighed in with this advice:

If you have no choice other than to develop your own software, then I’d certainly model it after what VEsoft has already done. That is:

Based on a system-wide UDC, examine all sessions (it is just sessions, yes?  By the way, a DSLOGON from inside a job is still a session....) against a ‘database’ (By the way, just how secure is this database?  A real database needs passwords... Who’s going to maintain that? A flat file could be lockworded... but that’s not a slamdunk answer.) which is looking for the ‘age’ of the password (By the way, are you going to provide an advance warning period?). 

If it is time to change the password, get the ‘new’ password from the user... but writing the rules is a pain, and keeping track of reused passwords is just annoying. Auditors in the states love when you can say the password is one-way encrypted.  Dunno what your management is saying for encrypting an MPE password.

Continue reading "Keep Passwords Fresh on 3000s: Methods" »


Debugging the diagnostics

Fire-ant
Photo by Mikhail Vasilyev on Unsplash

The Command-line Support Tools Manager (CSTM) replaced SYSDIAG as of MPE/iX 6.5. Managers who are keeping MPE/iX working here in 2019 rely on CSTM, just as they did SYSDIAG before it.

There's evidence out there that CSTM has problems while running on MPE/iX 6.5 systems. One well-schooled developer recently noted, while trying to run CSTM on his MPE/iX system, that the diagnostic told him on startup, "an error dialog could not be built to display an error."

The developer community suggested a few fixes for this problem with the diagnostic software. CSTM was ported onto the HP 3000 from HP-UX, so the repairs that CSTM itself suggested regarding memory (increasing it, removing processes, reconfiguring kernel memory limits) probably don't fit.  CSTM has a special page in the Hewlett-Packard Enterprise website devoted to the problem.

The developer at least had another 3000 running the same version of MPE/iX, a system where CSTM was starting up without a problem. One bit of advice suggests that while using console debug, "check out what your working system looks like at the CSTM prompt when idle. Use pseudomap “XL” to get symbols from the libraries and program. Attempt to set some breakpoints near initial program launch."

Using DEBUG, the open heart surgery of HP 3000 management, is sometimes a required diagnosis. When your diagnostics software requires diagnosis, nothing but DEBUG will get the job done.

Much more detail followed on using DEBUG to discover what's failing in CSTM.

Continue reading "Debugging the diagnostics" »


ERP Tips: Using work orders to backflush

Pipe-and-plumbingPhoto by Samuel Sianipar on Unsplash

MANMAN still runs operations at companies around the world. Not a lot of companies, of course. It's 2019 and everything is smaller in size, not just your hearing aids. The MANMAN managers are still looking for tips. Here's one generated from a question by senior systems analyst James English of Altra Industrial Motion Corp.

We are on MANMAN version 9.1 on an HP 3000. We have all MANMAN modules, including MANMAN/Repetitive. Is it possible to backflush work orders without using Repetitive? Our one manufacturing location is looking at simplifying work order transactions. They are manually transacting each operation on their work orders, even though they don’t collect actual hours.

Short question: How can they use work orders instead of using Repetitive?

When a work order has been received into stock, it comes to the scheduler-planner to push the times through each sequence, since the operation no longer does time cards. Once that time-pushing is done, the work orders are closed for material and labor. Once a work order is received into FG, instead of pushing the time through each operation, could we just backflush?

Alice West of Aware Consulting says

You can set all the components on your bill as “consumable” and then when you complete the WO the system will consume all the materials.  We always called this feature “poor man’s Repetitive.” 

However, it sounds like you are trying to simplify the labor portion of the transaction.  For that, you can look at your COMIN variable settings. Here is a chart I put together to show how 3 different variables work together.

Continue reading "ERP Tips: Using work orders to backflush" »


Using VSTORE to verify backups

Storage-1209059_1920
The VSTORE command of MPE/iX has a role in system backup verification. It's good standard practice to include VSTORE in every backup job's command process. Using VSTORE is documented in the manuals for the OS release in which it was introduced: MPE/iX 5.0.
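A minimal sketch of a backup job step that verifies its own tape (the device class and fileset are illustrative):

    !FILE T;DEV=TAPE
    !STORE @.@.@;*T;SHOW
    !VSTORE *T;@.@.@;SHOW

VSTORE rereads the tape and compares it against what STORE wrote, reporting any files that fail verification.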

If possible, do your VSTOREs on a different (but compatible model) of tape drive than the one the tape was created on. Why? DDS tape drives slowly go out of alignment as they wear.

In other words, it's possible to write a backup tape and have it successfully VSTORE on the same drive. But if you have to take that same tape to a different server with a new, in-alignment drive, it may not be readable!

If you'll only ever need to read tapes on the same drive as you wrote them, you're still not safe. What happens if you write a tape on a worn drive, have the drive fail at some later date -- and that replacement drive cannot read old backup tapes?

Using the 'two-drive' method to validate backup (and even SLT) tapes is a very prudent choice, if you have access to that array of hardware. It can also often help identify a drive that's going out of alignment -- before it's too late! 

Unfortunately, SLTs have to be written to tape (at least, for non-emulated HP 3000s). However, your drive will last years longer if you only write to it a few times a year.

You can find HP's VSTORE documentation page in the HP STORE command manual on the Web (thanks to 3K Ranger for keeping all those pages online).


Wayback: Security boosts as enhancements

Booster-seat
They weren't called enhancements at the time, but 13 years ago this month some security patches to MPE represented internal improvements that no company except HP could deliver to 3000s. Not at that time, anyway. This was the era when the 3000 community knew it needed lab-level work, but its independent support providers had no access to source code.

Just bringing FTP capability up to speed was a little evidence the vendor would continue to work on MPE/iX. For the next few years, at least; HP had halted OpenMPE's dreams to staff up a source code lab by delaying end of support until 2008. The vendor announced a couple more years of its support to 3000 customers.

In doing that, though, HP made an assignment for itself with the support extension, the first of two given to the 3000 before the MPE lab went dark in 2010. That assignment was just like the one facing today's remaining HP 3000 customers: figure out how to extend the lifespan of MPE expertise in a company.

FTP subsequently worked better in 2006 than it had in the years leading up to it. It's not an arbitrary subject. FTP was the focus of a wide-ranging online chat in May. Did you know, for example, that FTP has a timeout command on MPE/iX?

The connection time-out value indicates how long to wait for a message from the remote FTP server before giving up. The allowable range is 0 to 3000. A value from 1 to 3000 indicates a time-out value in seconds. A value of 0 means no time-out (i.e., wait forever). If num-secs is not specified, the current time-out value will be displayed. Otherwise, this command sets the connection time-out to num-secs seconds.

When an FTP job gets stuck, using timeout can help.
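A quick sketch at the MPE FTP client prompt (the host name is hypothetical):

    :FTP.ARPA.SYS
    ftp> timeout 300
    ftp> open myhost

A timeout of 300 seconds means a stalled connection gives up after five minutes instead of hanging the job forever.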

MPE/iX engineers and systems managers were working more often in 2006 than they do today. When anybody who uses MPE/iX finds a 3000 expert still available, they need to get in line for available work time. It remains one good reason to have a support resource on contract. A company relying on a 3000 shouldn't be thinking a mailing list or a Slack channel represents a genuine support asset. Even if that FTP tip did arrive via the 3000-L.

The resource of good answers for crucial questions gets ever more rare. The 3000-L mailing list has rarely been so quiet. There are information points out there, but gathering them and starting a discussion is more challenging than ever. File Transfer Protocol is pretty antique technology for data exchange. It turns out to be one of the most current standards the 3000 supports.

Continue reading "Wayback: Security boosts as enhancements" »


Make that 3000 release a printer grip

Fist-artwork
A printer connected to our HP 3000 received a "non-character" input and stopped printing. The spooler was told to stop in order for the queue to be closed and restarted. When we do a show command on that spooler, it reports " *STOP .......CLOSING CONN " How do I force a close on the connection? The HP 3000 is used so much it can't really be shut down any time soon.

Tracy Johnson says

If it is a network printer, just "create" another LDEV with the same IP. The 3000 doesn't care if you have more than one LDEV to the same IP (or DNS). Raise the outfence on the original LDEV. Once created, do a SPOOLF of any old spool files on that LDEV to the new LDEV. You can do it in a job that reschedules itself if it persists. The first spool file still in a print state will probably be stuck, but this technique should fix subsequent spool files. The situation probably won't go away until the next reboot.
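A hedged sketch of that workaround; the LDEV numbers are hypothetical, and the new LDEV itself is configured with the same IP through NMMGR or your usual network printer setup:

    :OUTFENCE 14;LDEV=6
    :SPOOLF O@;SELEQ=[DEV=6];ALTER;DEV=106

Raising the outfence keeps new output from printing on the stuck LDEV 6, while the SPOOLF moves its queued spool files over to the replacement LDEV 106.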

We've had our full backup on Friday nights abort several times and are not really able to discern why; sometimes it works while other times it doesn't. As a test/fix, we're swapping out the “not very old DLT tape” for a brand new DLT tape to see if that makes a difference. Our daily, partial backups work just fine—each day has its own tape.

Mark Ranft says

Let's talk tapes. How old are these unused new tapes? From my experience, new tapes and old tapes both have issues. I would not call a tape that was manufactured years ago, but hasn't been used, "New." It is still an old tape. But an unused tape will have microscopic debris from the manufacturing process. It may work just fine, but be prepared for more frequent cleaning if you are using unused tapes.

Old tapes are tried and true. That is, until they start stretching and wearing from overuse. If it was my STORE that failed, I would start by cleaning the drive. And cleaning cartridges can only be used a specific number of times. That is why they come with the check-off label. After the allowed number of cleanings, you can put them in the drive, but they don't do anything.

I was told by a trusted CE friend that cleaning a drive three times is sometimes necessary to get it working again. I don't know the science behind it, but that process did seem to save my behind more than once. After cleaning, do a small test backup and a VSTORE. Try to read (VSTORE) an old tape.


Fine-Tune: Creating Store to Disc from tape

NewsWire Classic

I still have some 3000 information on a tape. I’d like to create a Store to Disc file with it — how do I do that?

Jack Connor replies:

There are several solutions. The first and easiest is to simply restore the info to a system (RESTORE *T;/;SHOW;CREATE;ACCOUNT=WORKSTOR) where WORKSTOR is an account you create to pull the data in.

Then a simple FILE D=REGSFILE;DEV=DISC and STORE /WORKSTOR/;*D; with whatever else should create the disc store.

The second method is to use FCOPY. You'll have to research the STORE format, but I believe it's FILE TAPEIN;DEV=TAPE;REC=8192,,U,BINARY.

The third (also easy, but you need the software) is to use Allegro's tool TAPECOPY, which moves from tape store to disc store and back.

John Pitman adds:

Do you mean copy it off tape to a disk store file? I’m not sure if that can be done, as in my experience of tapes, there is a file mark between files, and EOT is signified by multiple file marks in a row... but anything may be possible. If you do a file equate and FCOPY as shown below, you should be able to look at the raw data, and it should show separate files, after a file list at the front.

FILE TX;DEV=TAPE;REC=32767
FCOPY
FROM=*TX;TO=;CHAR;FILES=ALL

Here is our current store command. MAXTAPEBUF speeds it up somewhat:

STORE  !INSTOREX.NEW.STOCK2K;*DDS777;
FILES=100000;DIRECTORY;MAXTAPEBUF


Making Directories Do Up To Date Duty

Last week we covered the details of making a good meal out of LDAP on an MPE system. Along the way we referred to an OpenLDAP port that made that directory service software useful to 3000 sites. The port was developed by Lars Appel, the engineer based in Germany whose work lifted many a 3000 system to new levels.

Appel is still working with 3000s from time to time. We checked in with him to learn about the good health of LDAP under MPE/iX.

Is this port still out in the world for 3000 fans and developers to use?

Well, I don't recall if anyone ever used it (and I must admit that I don't recall off the top of my head what drove me to build it for MPE/iX at that time... maybe just curiosity). However, the old 1.1 and 2.0.7 versions are still available at the website maintained by Michael Gueterman, who is still hosting my old pages there.

The versions are — of course — outdated compared to the current 2.4.x versions at openldap.org. But anyone with too much spare time on their hands could probably update the port.

But it's still useful?

Funny coincidence, though. Just yesterday, I had to use a few ldapsearch, ldapadd, and ldapmodify commands against our Linux mail server. If I had seen your mail two days ago, I could probably have looked up examples in my own help web pages, instead of digging up syntax in some old notes and man pages.

And you're still working in MPE?

I am still involved with Marxmeier and Eloquence, so it is more with former HP 3000 users than with current ones.


Making LDAP Do Directory Duty

DAP
Explore a 3000 feature to see how a little LDAP’ll do ya

NewsWire Classic

By Curtis Larsen

When you think of LDAP, what do you think of? You’ve probably heard about it — something to do with directories, right? — but you’re not quite sure. You’ve heard some industry buzz about it here and there, read a paper or two, but perhaps you still don’t quite know what it can do for you, or how it could work with an HP 3000. Hopefully this article will de-mystify it a bit for you, and spark some ways you could use it in your own organization.

MPE currently has limited support for LDAP, but the support is growing. Aside from the OpenLDAP source ported by Lars Appel, HP offers an LDAP “C” Software Development Kit for writing MPE/iX code to access directories, er, directly.

LDAP stands for “Lightweight Directory Access Protocol.” In a nutshell, it allows you to create directories of information similar to what you would see in a telephone book. Any information you want to store for later quick retrieval: names, telephone numbers, conference room capacities, addresses, directions — even picture or sound files. Using directories such as these is an incredible time-saver (can’t you think of company applications for one already?), but LDAP can do so much more. The directories you create are wholly up to you, so the sky’s the limit.

At this point you might be saying “Great, but why not use a database for this stuff?” That’s an excellent question, and in truth, there is some overlap in what you might want stored in a database versus being stored in a directory. The first and foremost difference between them is that a directory is designed for high-speed reading (and searching) — not writing.

The idea is that, generally speaking, a directory doesn’t change much, but quickly reading its information is a must. Understand that this doesn’t mean that directory writes are at all bad — they’re just not structurally designed to be as fast as reads are.

Databases also require more in the way of overhead: high-powered servers and disks, (usually) high-priced Database Management Systems — which one will be best for you? — and highly-skilled, highly-paid DBAs to keep it all happy. (Our DBA said I had to mention that part.)

LDAP directories are generally simpler and faster to set up and manage. LDAP is (also) a common client-server access standard across many different systems. You don’t have to deal with the outrageous slings of one DBMS, or the delightful syntax variations in SQL or ODBC implementations. LDAP directories can even be replicated. Copies of directories, or just sections of larger directories, can be stored on different servers and updated (or cross-updated) periodically. This can be done for security (“mirrored directories” — one here, one elsewhere), performance (all queries against local entries on a local server), or both.
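A hedged sketch of a lookup against such a directory, run from the MPE/iX POSIX shell using the OpenLDAP port's client tools; the host, search base, and filter are all hypothetical:

    :SH.HPBIN.SYS -L
    shell/iX> ldapsearch -h ldaphost -b "o=Acme" "(cn=Smith*)"

The -b option names the branch of the directory tree to search, and the filter matches any entry whose common name starts with Smith.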

Continue reading "Making LDAP Do Directory Duty" »


Samba, and making it dance on MPE/iX

Screen Shot 2019-03-24 at 5.11.14 PM
HP 3000 sites have Samba, the file-sharing system that has become a universal utility found on nearly every computer.

Samba arrived because of two community coding kings: Lars Appel, who ported the Samba open source package to the 3000, and Mark Klein, who ported the bootstrap toolbox to make such ports possible. As John Burke said in the sunnier year of 1999:

Without Mark Klein’s initial porting of and continued attention to the Gnu C++ compiler and utilities on the HP 3000, there would be no Apache/iX, syslog/iX, sendmail/iX, bind/iX, etc. from Mark Bixby, and no Samba/iX from Lars Appel. And the HP 3000 would still be trying to hang on for dear life, rather than being a player in the new e-commerce arena.

So Samba is there on your HP 3000, so long as you've got an MPE version minted during the current century. Getting started with it might perplex a few managers, like one who asked how to get Samba up on its feet on his 3000. One superb addition is SWAT, the Samba Web Administration Tool. Yup, the 3000's got that, too.

Continue reading "Samba, and making it dance on MPE/iX" »


It may be later than you think, by Monday

Clock-face
Daylight Saving Time kicks off early on Sunday. By the time you're at work on Monday it might seem late for the amount of light coming in your window. If you're working at home and next to the window, it will amount to the same thing. We lose an hour this weekend.

This reset of our circadian rhythms isn't as automatic as in later-model devices. Like my new Chevy, which is so connected it changes its own clocks, based on its contact with the outer world. HP 3000s and MPE systems like those from Stromasys don't reach out like that on their own. The twice-a-year event demands that HP 3000 owners adjust their system clocks.

Programs can slowly change the 3000's clocks in March and November. You can get a good start with this article by John Burke from our net.digest archives.

The longer MPE servers stay on the job, the more important their date manipulations become to their users. The server already hosts some of the longest-lived data in the industry. Not every platform in the business world is so well-tooled to accept changes in time. AS/400s running older versions of OS/400 struggled with this task.

You also need to be sure your 3000's timezone is set correctly. Shawn Gordon explained how his scheduled job takes care of that:

"You only have to change TIMEZONE. For SUNDAY in my job scheduler I have the following set up to automatically handle it:

IF HPMONTH = 3 AND HPDATE > [this year's DST] THEN
   ECHO Setting clock for Daylight Saving Time
   SETCLOCK TIMEZONE = W7:00
ENDIF
IF HPMONTH = 11 AND HPDATE < [this year's ST] THEN
   ECHO We are going back to Standard Time
   SETCLOCK TIMEZONE = W8:00
ENDIF"
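The bracketed dates above are placeholders for the year's changeover days. Since 2007 the US rule has been the second Sunday in March and the first Sunday in November; a Python sketch (not part of Shawn's job, and the function names are mine) shows one way a scheduler could compute them:

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Date of the nth given weekday (Mon=0 .. Sun=6) in a month."""
    first = date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return first + timedelta(days=offset + 7 * (n - 1))

def us_dst_start(year):
    """US Daylight Saving Time begins: second Sunday in March."""
    return nth_weekday(year, 3, 6, 2)

def us_dst_end(year):
    """US Daylight Saving Time ends: first Sunday in November."""
    return nth_weekday(year, 11, 6, 1)

print(us_dst_start(2024))  # 2024-03-10
print(us_dst_end(2024))    # 2024-11-03
```

A job scheduler holding these two dates can then fire the SETCLOCK step only on the correct Sunday, instead of relying on hand-edited values each year.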

3000 customers say that HP's help text for SETCLOCK can be confusing:

SETCLOCK  {DATE= date spec; TIME= time spec [;GRADUAL | ;NOW]}
   {CORRECTION= correction spec [;GRADUAL | ;NOW]}
   {TIMEZONE= time zone spec}
   {;CANCEL}

Orbit Software's pocket guide for MPE/iX shows the correct syntax. In this case, ;GRADUAL and ;NOW may only be applied as modifiers to the DATE= and TIME= keywords, not to CORRECTION=.


There's more of this all the time, so dust

Vacuum-cleaner
Newswire Classic

By John Burke

As equipment gets older and as we neglect the maintenance habits we learned, we will see more messages like this.

Upon arrival this morning the console had locked up. I re-started the unit, but the SCSI drives do not seem to be powering up. The green lights flash on for a second after the power is applied, but that is it. The cooling fan does not turn either. I am able to boot, but get the following messages: LDEVS 5, 8, 4, 3, 2 are not available and FILE SYSTEM ERROR READING $STDIN (CIERR 1807).

When I try to log on as manager.sys, I must do so HIPRI, and get the following: Couldn’t open UDC directory file, COMMAND.PUB.SYS. (CIERR 1910) If I had to guess, I would say the SCSI drives are not working. Is there a quick fix, or are all the files lost? I should add that I just inherited this system. It has been neglected, but running, for close to two years. Is it time to pull the plug?

Tom Emerson responded

This sounds very familiar. I’d say the power supply on the drive cabinet is either going or gone [does the fan ‘not spin’ due to being gunked up with dust and grease, or just ‘no power’?] I’m thinking that the power supply is detecting a problem and shutting down moments after powering up [hence why you see a ‘momentary flicker’].

Tim Atwood added

"I concur. The power supply on the drive cabinet has probably gone bad. If this is an HP6000 series SCSI disc enclosure for two and four GB SCSI drives, move very quickly. Third-party hardware suppliers are having trouble getting these power supplies. I know the 4GB drives are near impossible to find. So, if it is an HP6000 series you may want to stock up on power supplies if you find them. Or take this opportunity to convert to another drive type that is supported.”

The person posting the original question replied, “Your post gave me the courage to open the box and the design is pretty straight forward. It appears to be the power supply. As I recall now, the cooling fan that is built into the supply was making noise last week. I will shop around for a replacement. I can’t believe the amount of dust inside!”

Which prompted Denys Beauchemin to respond

The dust inside the power supply probably contributed to its early demise. It is a good idea to get a couple of cans of compressed air and clean out the fans and power supplies every once in a while. That goes for PCs, desktops, servers, and other electronic equipment. The electrical current is a magnet for dust bunnies and other such putrid creatures.

Wayne Boyer of Cal-Logic had this to say; useful because supplies may be hard to locate

Fixing these power supplies should run around $75 to $100. Any modular power supply like these is relatively easy to service. I never understand reports of common and fairly recent equipment being in short supply. It is good advice to stock up on spares for older equipment. Just because it’s available somewhere and not too expensive doesn’t mean that you can afford to be down while fussing around with getting a spare shipped in.

The compressed air cans work, but to really do a good job on blowing out computer equipment, you need to use an air compressor and strip the covers off of the equipment. We run our air compressor at 100 PSI. Note that you want to do this blasting outside! Otherwise you will get the dust all over wherever you are working. This is especially important with printers, as you get paper dust, excess toner, etc. building up inside the equipment. I try to give our office equipment a blow-out once a year or so. Good to do that if a system is powered down for some other reason.

Bob J. of Ideal Computer Services added

The truth sucks. There are support companies that don’t stock spare parts. The convenient excuse when a part is needed is to claim that ‘parts are tough to get.’ Next they start looking for a source for that part. One of my former employers always pulled that crap.

Unfortunately, quality companies get grouped with the bad apples. I always suggest system managers ask to visit the support supplier's local parts warehouse. The parts in their warehouse should resemble the units on support. No reason to assume the OEM has the most complete local stock either. Remember HP's snow job suggesting that 9x7 parts would become scarce and expensive? Different motive, but still nonsense.


Cautions of a SM broadsword for every user

Broadsword
NewsWire Classic

By Bob Green

Vladimir Volokh was doing MPE system and security consulting at a site. One of his regular steps is to run VESOFT’s Veaudit tool on the system. From this he learned that every user in the production account had System Manager (SM) capability.

Giving a regular user SM capability is a really bad thing. It means that the users can purge the entire system, look at any data on the system, insert nasty code into the system, etc. And this site had just passed their Sarbanes-Oxley audit.

Vladimir removed SM capability from the users and sat back to see what would happen. The first problem to occur was a job stream failure. The reason it failed was because the user did not have Read access to the STUSE group, which contained the Suprtool "Use" scripts. So, Suprtool aborted. 

Background Info

For those whose MPE security knowledge is a little rusty, or non-existent, we offer a helpful excerpt from Vladimir’s son Eugene, from his article Burn Before Reading: HP 3000 Security And You, available at www.adager.com/VeSoft/SecurityAndYou.html

<beginarticlequote>

When a user tries to open a file, MPE checks the account security matrix, the group security matrix, and the file security matrix to see if the user is allowed to access the file. If he is allowed by all three, the file is opened; if at least one security matrix forbids access by this user, the open fails.

For instance, if we try to open TESTFILE.JOHN.DEV when logged on to an account other than DEV and the security matrix of the group JOHN.DEV forbids access by users of other accounts, the open will fail (even though both TESTFILE’s and DEV’s security matrices permit access by users of other accounts).
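The rule in that example reduces to a logical AND across all three matrices. Here is a minimal Python sketch of that rule; the matrix functions and user records are hypothetical stand-ins for illustration, not MPE internals:

```python
def may_open(user, matrices):
    """MPE opens the file only if every security matrix allows the user."""
    return all(matrix(user) for matrix in matrices)

# Hypothetical matrices for TESTFILE.JOHN.DEV: the group matrix forbids
# users from other accounts; the account and file matrices permit them.
account_matrix = lambda user: True
group_matrix = lambda user: user["account"] == "DEV"
file_matrix = lambda user: True

insider = {"name": "JOHN", "account": "DEV"}
outsider = {"name": "MGR", "account": "PROD"}

print(may_open(insider, [account_matrix, group_matrix, file_matrix]))   # True
print(may_open(outsider, [account_matrix, group_matrix, file_matrix]))  # False
```

One permissive matrix can never override a restrictive one, which is why the open in the TESTFILE.JOHN.DEV example fails even though two of the three matrices allow it.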

Continue reading "Cautions of a SM broadsword for every user" »


Security advice for MPE appears flameproof

Burn-before-reading

Long ago, about 30 years or so, I got a contract to create an HP 3000 software manual. There was a big component of the job that involved making something called a desktop publishing file (quite novel in 1987). There was also the task of explaining the EnGarde/3000 security software to potential users. Yow, a technical writer without MPE hands-on experience, documenting MPE V software. 

Yes, it was so long ago that MPE/XL wasn't even in widespread use. Never mind MPE/iX, the 3.0 release of MPE/XL. All that didn't matter, because HP preserved the goodness of 3000 security from MPE V through XL and iX. My work was to make sense of this security as it related to privileges.

I'll admit it took yeoman help from Vicky Shoemaker at Taurus Software to get that manual correct. Afterward I found myself with an inherent understanding, however superficial, about security privileges on the HP 3000. I was far from the first to acquire this knowledge. Given another 17 years, security privileges popped up again in a NewsWire article. The article by Bob Green of Robelle chronicled the use of SM capability, pointed out by Vladimir Volokh of VEsoft.

Security is one of those things that MPE managers didn't take for granted at first, then became a little smug about once the Internet cracked open lots of business servers. Volokh's son Eugene wrote a blisteringly brilliant paper called Burn Before Reading that outlines the many ways a 3000 can be secured. For the company which is managing MPE/iX applications — even on a virtualized Charon server — this stuff is still important.

I give a hat-tip to our friends at Adager for hosting this wisdom on their website. Here's a recap of what a portion of that paper's good security practices for MPE/iX look like.

Volokh’s technical advisory begins with a warning. “The user is the weakest link in the logon security system -- discourage a user from revealing passwords. Use techniques such as personal profile security or even reprimanding people who reveal passwords. Such mistakes seem innocent, but they can lose you millions."

Continue reading "Security advice for MPE appears flameproof" »


Scripting a Better UPS link to MPE/iX

In another article we talked about how HP dropped the ball on getting better communication between UPS units and the HP 3000. It was a promise that arrived at about the same time as HP's step-away from the 3000. The software upgrade to MPE/iX didn't make it out of the labs.
 
That didn’t stop Donna Hofmeister. About that time she was en route to a director's spot on OpenMPE. Later on she joined Allegro. We checked in to see if better links between uninterruptible power supplies and MPE/iX were possible. Oh yes, provided you were adept at scripting and job stream creation. She was.
 
"I wrote a series of jobs and scripts that interrogate an APC UPS that is fully-connected to the network — meaning it has an IP address and can respond to  SNMP," she said. "These are the more expensive devices, for what it's worth."
 
"It worked beautifully when a hurricane hit Hawaii and my 3000 nicely shut itself down when power got low on the UPS. Sadly, the HP-UX systems went belly-up and were rather a pain to get running again."
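Stripped of the SNMP plumbing, the core of a shutdown job like Donna's is a single decision: is the UPS on battery, and has its remaining runtime dropped below the time an orderly shutdown needs? A hedged Python sketch, where the function name and the 10-minute threshold are illustrative assumptions rather than anything from her scripts:

```python
def should_shut_down(runtime_remaining_min, on_battery, threshold_min=10):
    """Order a shutdown once the UPS is on battery and its remaining
    runtime drops to the time an orderly MPE shutdown needs."""
    return on_battery and runtime_remaining_min <= threshold_min

# A job could poll these values over SNMP from a network-attached UPS,
# then stream the shutdown job when the decision comes back True.
print(should_shut_down(runtime_remaining_min=5, on_battery=True))   # True
print(should_shut_down(runtime_remaining_min=45, on_battery=True))  # False
```

The threshold belongs in one place so it can be tuned to how long the site's databases and applications take to quiesce.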

What can a 3000 do to talk to a modern UPS?

SmartUPS
Michel Adam asks, "How can I install and configure a reasonably modern UPS with a 3000? I'd like to use something like an APC SmartUPS or BackUPS, for example. What type of signaling connection would be the easiest, network or serial?"

Jim Maher says

First you need to find out what model 3000. Listed on the back will be the power rating. Some of the older ones use 220V. Then you can match that with a proper UPS.

Michel Adam explains in reply

This HP 3000 is an emulator, i.e. a 9x8 equivalent or A-Class. I guess a regular "emulated" RS-232, or actual ethernet port would be the most likely type of connection. In that sense, the actual voltage is of no consequence; I only need to understand the means of communicating from the UPS to the virtual 3000.

Tracy Johnson reports

While we have three "modern" APC units each with battery racks four high, they also serve the rest of the racks in our computer room. Our HP 3000 is just a bigger server in one of those racks. Each APC services only one of the three power outlets on that N-Class. Their purpose is not to keep the servers "up" for extended periods, but to cover for the few seconds lapse before our building generator kicks in in case of a complete power loss.

As far as the UPS talking to our HP 3000 serial port, we didn't bother. Our APC units are on the network so they have more important things to do, like send emails to some triage guy in Mumbai should they kick in.

Enhanced, or not?

In the history department, Hewlett-Packard had its labbie heart in the right place just weeks before the vendor canceled its 3000 plans. We reported the following in October of 2001:

HP 3000s will say more to UPS units

HP's 3000 labs will be enhancing the platform to better communicate with Uninterruptible Power Supply systems in the coming months. HP's Jeff Vance reports that the system will gain the ability to know the remaining time on the UPS, so system managers can know whether the UPS will last long enough to shut down applications and databases, rather than letting the system crash. Vance said that HP has scheduled to begin its work on this improvement—voted Number 8 on the last System Improvement Ballot—in late fall.

Late fall of 2001 was not a great time to be managing future enhancements for the 3000 and MPE/iX. The shortfall of hardware improvements and availability has been bridged by Charon. Adjustments to MPE/iX for UPS communication have not been confirmed.


Fine-tune: how to reinstate config files

Reinstate logo
I’m replacing my old Model 10 with a Model 20 on MPEXL_SYSTEM_VOLUME_SET. This will of course require a re-INSTALL. What’s the best way to reinstate my network config files? Just restore NMCONFIG and NPCONFIG? Can I use my old CSLT to re-add all my old non-Nike drives and mod the product IDs in Sysgen, or do I have to add them manually after using the Factory SLT?


Gilles Schipper replies:

Do the following steps:
- use your CSLT to install onto LDEV 1
- modify your I/O to reflect the new/changed config
- reboot
- use VOLUTIL to add non-LDEV1 volumes appropriately
- restore the directory or directories from backup
- perform a system reload from the full backup, using the keep, create, olddate, partdb, show=offline options in the RESTORE command
- reboot again
No need for separate restores of specific files.

We had another hard drive fail this weekend. It was in an enclosure of old 2GB drives that we really did not need, so I just unplugged them and rebuilt my volumes without them. However, when I boot up I get error messages that path 10/4/0.20-26 can’t be mounted. How do I get rid of these messages?

Gilles Schipper replies:
You can safely ignore the messages, but if you want them not to reappear, simply remove those devices from your IO configuration via SYSGEN, keep the new configuration to config.sys and reboot with a start norecovery. When you’re back up again, you should create a new slt tape.

Paul Edwards adds:
Use SYSGEN with DOIONOW or IOCONFIG to delete them. No reboot is required.


Fine Tune: Optimized Disaster Recovery

Disasters
By Gilles Schipper

While working with a customer on the design and implementation of a disaster recovery (DR) plan for a large HP 3000 system, it became apparent the implementation had room for improvement.

In this specific example, the customer had a production N-Class HP 3000 and a backup HP 3000 Series 969 system in a location several hundred miles from the primary.

The process of implementing the DR was completed entirely from a remote location — thanks to VPNs and an HP Secure Web Console on the 969. One of the most labor-intensive aspects of the DR exercise was to rebuild the IO configuration of the DR machine (the 969) from the full backup tape of the production N-Class machine, which included an integrated system load tape (SLT) as part of the backup.

The ability to integrate the SLT on the same tape as the full backup is very convenient. It results in a simplified recovery procedure as well as the assurance that the SLT to be used will be as current as possible.

When rebuilding a system from scratch from an SLT/Backup tape, if the target system differs in architecture from the source system, it is usually necessary to modify all the device paths and device configuration specifications with SYSGEN, and then reboot the system, before you can even use the tape drive of the target system to restore any files at all.

(This would be apart from the files restored during the INSTALL process — which does not require proper configuration of any IO component at all).

Some would argue that this system re-configuration needs to be completed only once, since any future system rebuilds would require only a “data refresh” rather than a complete system re-INSTALL.

I say that this would be true only in very stable system environments where IO configurations — including network printer configurations — are static and where TurboIMAGE transaction logging is not utilized. Otherwise there could be unpleasant results and complications from using stale configurations in a real disaster recovery situation. In any case, there really is no reason to take any chances,

Continue reading "Fine Tune: Optimized Disaster Recovery" »


Even DTCs can spark memories for 3000s

DTC to 3000 N-Class config
The Distributed Terminal Controller was a networking device with intelligence that stood between an HP 3000 and a peripheral. We use the past tense to describe the DTC usage for many of the homesteading 3000 sites. In some places, DTCs continue to let 3000s shake hands with other devices.

At TE Connectivity in Hampton Roads, Va., the box works between an N-Class 3000 (the ultimate generation) and an impact printer (of considerably older peerage). Al Nizzardini makes the pair work for the company that employs 3000s across the globe, from North America to China.

"Our DTC 48 with 3-pin ports died on us," Nizzardini said. "We have an impact printer connected to the 48, the only thing that is hanging off that DTC." At first the solution to the blocked connection was to use an even older controller, the DTC16 with modem ports. That would've involved shorting out pins on the DTC 16.

Nizzardini asked and a few veterans answered. Francois Desrochers said Nizzardini would need pins 2, 3 and 7 (send, receive, ground). "You may have to short out 5 and 20," he added. Another combination from Gary Robillard suggested connecting 4 and 5 together and 6, 8, and 20 together. "We always had 2 and 3 crossed—2 to 3 and 3 to 2," he said.

It's been 20 years since HP last released a DTC, something that's still useful for older peripherals. The intel to keep one connected to the latest 3000s is still available in the 3000 community. Old doesn't mean dead when someone remembers the essentials. Nizzardini solved his problem without shorting out pins, just by locating another working DTC 48. MANMAN drives the workflow at TE Connectivity, but the real driver is pros like Nizzardini, helping one another remember.

 


Routers and switches and hubs, oh my!

Lions-and-tigers-and-bears
Editor's Note: Initial HP 3000 hardware networking can be like a trip down a Yellow Brick Road. Here's a primer for the administrator who's wondering if that HP 3000 can link to a network

By Curtis Larsen

Auntie MAU! Auntie MAU! A Twisted Pair! A Twisted Pair!

Once upon a time networks were as flat as the Kansas prairie, and computers on them were a lot like early prairie farmsteads: few and far between, pretty much speaking to each other only when they had to. (“Business looks good again this year.” “Yep.”) Most systems still used dumb terminals, and when speaking to anything outside the LAN, system-to-system modem connections were the way to do it.

A tornado named the Internet suddenly appeared in this landscape. It uprooted established standards and practices, swept aside protocols and speed limitations, and took us into a Technicolor networking landscape very different than what was there before.

Toto, I get the feeling our packets aren’t in Kansas anymore

Smaller companies were tossed before the tornado to eventually land and quickly begin growing again in the new environment. Large companies like IBM, HP, Digital, and Microsoft, who were rooted and established in their own proprietary standards (it sounds like an oxymoron, but it’s true) survived by generally ignoring the howling winds. Eventually, munchkin-like, they all came out to see what the general fuss was about, and found that a house-sized chunk of change (pun intended) had landed.

Networking, and the TCP/IP protocol had truly arrived in style, bringing strange new applications and markets. Serial connections and proprietary networking (“What do you mean we don’t need SNA to connect to the Wichita office anymore?”) gave way to a new kid on the block. And her little dog, too.

Follow the Yellow-Colored-Cable-and-Labeled-at-Both-Ends Road!

So then the HP 3000 managers found themselves sitting in a strange new networking land of strange new networking things. And for some of us, trying to understand the whole of it all — especially in relation to a “legacy” system like the HP e3000 — was a little daunting. What are all these networking black boxes we plug the system into, and what do they all do? How can they make life better? (How can they make life worse?) If you’re not sure (or just plain curious) read on.

Continue reading "Routers and switches and hubs, oh my!" »


Memory and Disk Rules for Performance

Concentration
NewsWire Classic

By Jeff Kubler

You need to get management support for your efforts to keep your systems performing at their best. Memory and disk are two components of your performance picture under MPE/iX. Main Memory is the scratch pad for all the work that the CPU performs. Every item of data that the CPU needs for calculations or updates must be brought into Main Memory.

CPU used to manage Main Memory: The CPU must manage memory. It must cycle through the memory pages, marking some as Overlay Candidates (this means that new data from disk may be placed here), noting that some are in continued use, and swapping others out to virtual or what is called transient storage. Swapping to disk occurs when data is in continued use but a higher priority process needs room for its data. To accommodate this higher priority process and its need for memory space, the Memory Manager will swap the memory for the lower priority process out to disk. The more activity the Memory Manager performs, the more CPU it takes to do this. Therefore it is the percentage of CPU used to manage memory that we use as a measurement.

Page Faults per Second: A Page Fault occurs each time a memory object is not found in memory. The threshold for the number of Page Faults per second that can be incurred before a memory problem is indicated varies with the size and the power of the CPU. Larger machines can handle more Page Faults per second while a smaller box will encounter problems with far fewer.

An exceptional number of Page Faults should never be used as the sole indicator of memory problems but when observed should be tested with the memory manager percentage. If both agree, you have a memory shortage. There are some strange things that I have observed with Page Faults, so it does not stand alone as an indicator of memory shortage.

The number of Page Faults per second and the amount of CPU needed to manage Memory are always evaluated in conjunction with each other. That is to say the high Page Fault Rate will not be considered a problem if the Memory Manager Percentage is not above 4 percent.
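The two-indicator rule in the last paragraphs can be written down directly. In this Python sketch the 4 percent Memory Manager threshold comes from the article; the page-fault threshold default is a placeholder, since the article says the tolerable fault rate varies with the size and power of the machine:

```python
def memory_shortage(page_faults_per_sec, mem_mgr_cpu_pct, fault_threshold=25):
    """Flag a memory shortage only when both indicators agree: a high
    page-fault rate AND the Memory Manager consuming more than 4 percent
    of the CPU. Neither indicator stands alone."""
    return page_faults_per_sec > fault_threshold and mem_mgr_cpu_pct > 4.0

print(memory_shortage(50, 6.0))  # True: both indicators agree
print(memory_shortage(50, 2.0))  # False: fault rate high, manager cheap
print(memory_shortage(10, 6.0))  # False: manager busy, faults modest
```

Requiring both tests to agree is what keeps a strange-but-harmless burst of page faults from being misread as a real memory shortage.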

Continue reading "Memory and Disk Rules for Performance" »


Fine-tune: 3000 support rescues, MPE/iX version matrix, network printer software

Rescued-boat-people
Steve Douglass of United Technologies Aerospace Systems writes, "We have an A-Class 400-100 machine that would only stay up about an hour before it autobooted. This machine was simply used for archived data lookup from an old ERP system. After trying simple fixes like reseating memory and checking connections we still had the same problem."

"We had no support agreement, and no one wanted to pay for a third-party support company to perform a diagnosis and fix, so we powered the system off. Of late there is interest in resurrecting this machine, and someone may be willing to foot the bill. We've researched and found Pivital Solutions and the Ideal Computer Services Group. Are there other recommendations?"

John Clogg reports

We currently use Sherlock Services and are happy with the support they provide. I have also used Ideal Services and can recommend them with confidence.

Jim Maher of Saratoga Computers adds

We still service all of the HP 1000, 3000 and e3000 systems. Call anytime.

We replaced a printer recently and we can't get the new one to play nice with the 3000.  It's a LaserJet M608. When sending output to it, it prints a page or two and hangs. The spool file remains in a "print" state. The only way to reset it is to do a STOPSPOOL followed by a couple of ABORTIOs. The next time I start the spooler, the same thing happens, regardless of what I'm printing. What things should I check?

Tracy Johnson says

Try adding SNMP_SUPPORTED = FALSE (or TRUE). You have a 50/50 chance either way. Sometimes you just have recalcitrant printers that won't cooperate with the HP 3000. Consider getting Espul from Richard Corn, or Minisoft's licensed version called Netprint.

Jim English adds

We use Netprint and eFormz from Minisoft. The eFormz is installed on a Windows server. Not all of our printers go through Netprint, just the ones that print forms or barcodes. We recently installed a newer HP printer and had the same issue you did. I set it up in Netprint and eFormz and it works great now.

Netprint by itself may solve your issue. I set up the printer in eFormz to print receipt travelers, which may have barcodes on them.

Is there a support matrix document that shows the HP 3000 boxes and what versions of MPE they can run? I'm trying to find all the 3000 boxes that support MPE/iX 6.0.

Donna Hofmeister reports

All 9x8, 9x7 and 99x boxes support 6.0. No A-Class or N-Class 3000s support 6.0.


Fine-Tune: Test for disasters in any season

Test-siren
NewsWire Classic

Editor's Note: In October of 2001 the world was working in the aftermath of the 9/11 attacks. Our Worst Practices columnist Scott Hirsh wrote this advice about the need to test for disasters. Another crisis was going to rise up for 3000 owners just a few weeks after this article appeared, this one triggered by HP. Regardless of where your datacenter is focused, it's always a good practice to test.

This Is Not a Test

By Scott Hirsh

For those of us in the United States entrusted with a company’s information resources, the events of September 11 changed everything. Before, our business continuity or disaster recovery plans were primarily concerned with so-called “acts of God.” But we must now plan for the most improbable human acts imaginable. Who among us, prior to September 11, had a plan that took into account multiple high-rise office buildings being destroyed within minutes of each other? As you read this, the insurance industry is revising its assumptions. Likewise, we must now reconsider our approach to managing and protecting the assets for which we are responsible. Never before has the probability of actually needing to execute our recovery plans been so great.

As of this writing there have already been numerous business continuity and disaster recovery articles in the computer press. By now we understand the distinction between keeping the business going – not just IT, but also the whole business – and recovering after some (hopefully minor) interruption. And we’ve covered the issue of risk, where all the trade-offs and costs are negotiated. This whole topic was explored anew in the last few months, but it is still worthwhile to emphasize some early lessons of the attacks, from which we are still recovering.

It Had Better Work

Worst Practice 1: Trying to Fake It — I was visiting a friend’s datacenter recently, where I was told about a recent audit. This friend’s company spent the whole time trying to fake all the audit criteria: disaster recovery preparedness, security, audit trails, etc. At the risk of sounding like your parents, whom does this behavior really hurt? An audit is an ideal opportunity to validate all the necessary hard work required to run a professional datacenter. And should you ever be subjected to attack, electronic or otherwise, you know that your datacenter will survive.

If you didn’t get it before, you’d better get it now: Faking it is unacceptable. Chances are, at some point you will be required to do a real, honest-to-goodness recovery. And if you think you’re safe just because there may not be very many hijacked planes running into buildings such as yours, think again. The threats to your datacenter are diverse and numerous. And, by the way, violent weather, earthquakes and other natural disasters are still there too.

Worst Practice 2: Not Testing — Once you’re serious about continuity and recovery, not only will you plan, but you’ll test that plan often. There are lots of reasons to test your recovery capability often. Among them are: the ability to react quickly in a crisis; catching changes in your environment since your last test; accommodating changes to staff since your last test. A real recovery is a terrible time to do discovery.

Worst Practice 3: Not Documenting — One of the biggest problems with disasters is no warning. That’s why so many tests are a waste of time. Anyone can recover when you know exactly when and how. The truly prepared can recover when caught by surprise. Since you won’t get any warning – except, perhaps, with some natural disasters – you’ll want to have current, updated procedures. Since you’ll probably be on vacation (or wish you were) when disaster strikes, make sure the recovery procedures are off-site and available. If you’re the only one who knows what to do, even if you never take a day off there still won’t be enough of you to go around at crunch time.

Continue reading "Fine-Tune: Test for disasters in any season" »


Fine-Tune: Ensure Logical Data Consistency

Database_design_concepts
NewsWire Classic

The MPE/iX Transaction Manager for IMAGE does not guarantee logical consistency of your data. How do you ensure logical consistency? Use DBXBEGIN and DBXEND calls around all the DBPUT, DBUPDATE and DBDELETE calls that you make for your logical transaction. Yes, the definition of a logical transaction is up to the programmer.

There can be a lot of confusion about logical consistency, mostly because IMAGE kept adding logging and recovery features over its years of development. Gavin Scott gives a clear explanation of the state of affairs.

It’s amazing how much superstition exists surrounding this kind of stuff, and how many unnecessary rituals and sacrifices are performed daily to appease the mythical pantheon of data integrity gods. Real broken chains are supposed to be impossible to achieve with IMAGE on MPE/iX, no matter what application programs do, or how they are aborted, or how many times the system crashes!

The Transaction Manager provides absolute protection against internal database inconsistencies, as long as there are no bugs in the system and as long as the hardware is not corrupting data. No action or configuration is required on the part of the user.

Logical inconsistencies (order detail without an associated order header record, for example) can easily be created by aborting an application that’s in the middle of performing a database update that spans multiple records. Of course, IMAGE doesn’t care whether your data is logically correct or not, that’s the job of application programmers.

Using DBBEGIN/DBEND will have no effect whatsoever on logical integrity, unless you actually run DBRECOV to roll forward or roll back the database to a consistent point every time you abort a program or suffer any other failure.
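IMAGE isn't SQL, but the DBXBEGIN/DBXEND bracket serves the same purpose as a SQL transaction: either the whole logical transaction lands, or none of it does. An analogy in Python's sqlite3 (standard library, not IMAGE intrinsics) shows the order-header/order-detail case from above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE header (order_no INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE detail (order_no INTEGER, item TEXT)")

try:
    with con:  # one logical transaction: commit on success, roll back on error
        con.execute("INSERT INTO header VALUES (1)")
        con.execute("INSERT INTO detail VALUES (1, 'widget')")
        raise RuntimeError("program aborts mid-transaction")
except RuntimeError:
    pass

# The rollback removes both partial rows: no detail without its header.
orphans = con.execute("SELECT COUNT(*) FROM detail").fetchone()[0]
print(orphans)  # 0
```

Without the bracket, an abort between the two inserts would leave exactly the orphaned-detail inconsistency the article describes, and the database engine would consider that perfectly valid data.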

Continue reading "Fine-Tune: Ensure Logical Data Consistency" »


Command file tests 3000s for holidays

Holiday-Calendar-Pages
Holiday season is coming up. It's already upon us all at the grocery stores, where merchandising managers have cartons of Thanksgiving decorations waiting their turn. The Halloween stuff has to clear away first.

Community contributor Dave Powell has improved upon a command file created by Tracy Pierce to deliver a streamlined way to tell an HP 3000 about upcoming holidays. Datetest tells whether a day is a holiday. "I finally needed something like that," Powell says, "but I wanted the following main changes:

1. Boolean function syntax, so I could say :if holiday() then instead of

:xeq datetest
:if WhichVariableName = DontRememberWhatValue then

and also because I just think user-functions are cool.

2. Much easier to add or disable specific holidays according to site-specific policies or even other countries’ rules. (Then disable Veterans Day, Presidents Day and MLK Day, because my company doesn’t take them.)

3. Make it easy to add special one-off holidays like the day before/after Christmas at the last minute when the company announces them.

Along the way, I also added midnight-protection and partial input date-checking, and made it more readable, at least to me.
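Powell's full command file appears in the complete article. A minimal sketch of the boolean-test idea looks like the following. The holiday list and variable names here are illustrative only, and HPYYYYMMDD is assumed to be available as a predefined CI date variable on your release:

```
comment HOLIDAYS -- sketch only: sets IS_HOLIDAY for today's date
setvar today hpyyyymmdd
comment Site-specific dates; add, remove, or comment out per local policy
setvar holiday_list "20251127 20251225 20260101"
if pos(today, holiday_list) > 0 then
   setvar is_holiday true
else
   setvar is_holiday false
endif
```

A caller can then test :if is_holiday then after invoking the script — the spirit of Powell's holiday() function, without depending on any particular user-function syntax.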

Continue reading "Command file tests 3000s for holidays" »


Fine-Tune: Get the right time for a battery

CMOS-clock-battery
Two weeks from now the world will manage the return of an hour, as Daylight Saving Time ends. The HP 3000 does the time shift of its system clock automatically, thanks to patches HP built during 2007. But what about the internal clock of a computer that might be 20 years old? Components fail after a while.

The 3000's internal time is preserved using a small battery, according to the experts out on the 3000 newsgroup. This came to light in a discussion about fixing a clock gone slow. A few MPE/iX commands and a trip to Radio Shack can maintain a 3000's sense of time.

"I thought the internal clock could not be altered," said Paul English. "Our server was powered off for many months, and maybe the CMOS battery went flat." The result was that English's 3000 showed Greenwich Mean Time as being about five years off reality. CTIME reported for his server:

* Greenwich Mean Time : THU, JUN 17, 2004, 11:30 AM   *
* GMT/MPE offset      : +-19670:30:00                 *
* MPE System Time     : THU, SEP 10, 2009,  2:00 PM   *

Yup, that's a bad battery, said Pro 3k consultant Mark Ranft. "It is cheap at a specialty battery store," he said, "and can be replaced easily, if you have some hardware skills and a grounding strap." Radio Shack offers the needed battery.

But you can also alter the 3000's clock which tracks GMT, he added.

Continue reading "Fine-Tune: Get the right time for a battery" »


Friday Fine-Tune: Speeding up backups

Spinning-wheels
We have a DLT tape drive. Lately it wants to take 6-7 hours to do a backup instead of its usual two or less. But not every night, and not on the same night every week. I have been putting in new tapes, but it still occurs randomly. I have cleaned it. I can restore from the tapes with no problem. It doesn’t appear to be fighting some nightly process for CPU cycles. Any ideas on what gives?

Gilles Schipper replies:

Something that may be causing extended backup time is excessive IO retries, as a result of a deteriorating tape or tape drive.

One way to know is to add the ;STATISTICS option to your STORE command. This will show you the number of IO retries as well as the actual IO rate and actual volume of data output.
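As a hedged example, a STORE command with the option added might look like this. The fileset and the tape file equation are placeholders for your own backup job:

```
file t;dev=tape
store @.@.@;*t;show;statistics
```

The statistics at the end of the listing include the retry counts; a tape or drive nearing the end of its life should show those retries climbing from one backup to the next.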

Another possibility is that your machine is experiencing other physical problems, resulting in excessive logging activity and abnormal CPU interrupt activity that deplete your system resources and extend backup times.

Check the files in these Posix directories:

/var/stm/logs/os/*
/var/stm/logs/sys/*
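From the CI, a quick size check of those directories might look like this. It is assumed here that the ,2 listing format reports file sizes on your release, and that LISTFILE accepts HFS names beginning with a slash:

```
listfile /var/stm/logs/os/@,2
listfile /var/stm/logs/sys/@,2
```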

If they are very large, you indeed may have a hardware problem — one that is not "breaking" your machine, but simply "bending" it.


Fine-Tune: Storing in Parallel and to Tapes

Does the MPE/iX Store-to-Disc option allow for a ‘parallel store,’ analogous to a parallel store to tape? For example, when a parallel store to tape is performed, the store writes to two or more tape drives at the same time. Is there a parallel store-to-disc option that allows for the store to write to two or more disc files at the same time (as opposed to running multiple store-to-disc jobs)?

Gavin Scott and Joe Taylor reply

Yes, the same syntax for parallel stores works for disk files as well as tape files. I really don’t know if you would get any benefit from this, but if you went to the trouble of building your STD files on specific disks, then it might be worthwhile.

What is the recommended life or max usage of DLT tapes?

Half a million passes is the commonly used number for DLT III. One thing to remember: the number of passes (500,000) does not mean the number of tape mounts.

For SuperDLT tapes, the tape is divided into 448 physical tracks of 8 channels each, giving 56 logical tracks. This means that when you write a SuperDLT tape completely, you will have just completed 56 passes. If you read the tape completely, you will have done another 56 passes.

The DLT IV tapes (DLT7000/8000) have a smaller number of physical and logical tracks, but the principle is the same. The number of passes for DLT III XT and DLT IV tapes is 1,000,000. The shelf life is 30 years for the DLT III XT and DLT IV tapes and 20 years for the DLT III.
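To put those figures in perspective: with 56 passes per complete write and another 56 per complete read, a 500,000-pass rating works out to roughly 4,400 full write-and-read cycles (combining the SuperDLT track count with the DLT III rating purely for illustration). The CI's CALC command can do the arithmetic:

```
:calc 500000 / (56 + 56)
4464
```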

Our DDS drive gets cleaned regularly. Our tapes in rotation are fairly old, too. However, we are receiving this error even when we use brand new tapes. 

STORE ENCOUNTERED MEDIA WRITE ERROR ON LDEV 7 (S/R 1454)

The new tapes are Fuji media, not HP like our old ones.

John Burke replies:

Replace that drive. DDS drives are notorious for failing. Also, the drive cannot tell whether or not you are using branded tapes. I’ve used Fuji DDS tapes and have found them to be just as good as HP-branded tapes (note that HP did not actually manufacture the tapes). I have also gotten into the habit of replacing DDS tapes after about 25 uses. Compared to the value of a backup, this is a small price to pay.


Use Command Interpreter to program fast

NewsWire Classic

By Ken Robertson

An overworked, understaffed data processing department is all too common in today’s ever belt-tightening, down-sizing and de-staffing companies.

Running-shoes
An ad-hoc request may come to the harried data processing manager. She may throw her hands up in despair and say, “It can’t be done. Not within the time frame that you need it in.” Of course, every computer-literate person knows deep down in his heart that every programming request can be fulfilled, if the programmer has enough hours to code, debug, test, document and implement the new program. The informed DP manager knows that programming the Command Interpreter (CI) can sometimes reduce that time, changing the “impossible deadline” into something more achievable.

Getting Data Into and Out of Files

So you want to keep some data around for a while? Use a file! Well, you knew that already, I’ll bet. What you probably didn’t know is that you can get data into and out of files fairly easily, using IO re-direction and the print command. IO re-direction allows input or output to be directed to a file instead of to your terminal. IO re-direction uses the symbols ">", ">>" and "<". Use ">" to re-direct output to a temporary file. (You can make the file permanent if you use a file command.) Use ">>" to append output to the file. Finally, use "<" to re-direct input from a file:

echo Value 96 > myfile
echo This is the second line >> myfile
input my_var < myfile
setvar mynum_var str("!my_var",7,2)
setvar mynum_var_2 !mynum_var - (6 * 9)
echo The answer to the meaning of life, the universe
echo and everything is !mynum_var_2.

After executing the above command file, the file Myfile will contain two lines, “Value 96” and “This is the second line.” (Without quotes, of course.) The Input command uses IO re-direction to read the first record of the file, and assigns the value to the variable my_var. The first Setvar extracts the number from the middle of the string, and the next line uses the value in an important calculation.

How can you assign the data in the second and subsequent lines of a file to variables? Use the Print command to select the record you want from the file, sending the output to a new file:

print myfile;start=2;end=2 > myfile2

You can then use the Input command to extract the string from the second file.

Continue reading "Use Command Interpreter to program fast" »


Queue up those 3000 jobs with MPE tools

NewsWire Classic

By Shawn Gordon

A powerful feature of MPE is the concept of user-defined job queues. You can use these JOBQ commands to exert granular job control that is tightly coupled with MPE/iX. HP first introduced the commands in the 6.0 release.

For example, suppose you want only one datacomm job to log on at a time, but there are 100 that need to run. At the same time you need to let users run their reports, and you want to allow only two compile jobs at a time. Normally you would set your job limit down to 1, then manually shuffle job priorities and release jobs one at a time. With multiple job queues, you can define a DATACOMM job queue whose limit is 1, an ENDUSER job queue whose limit is 6 (for example), and a COMPILE job queue whose limit is 2. You could also set a total job limit of 20 to accommodate the other jobs that may need to run.

Three commands accommodate the job queue feature:

NEWJOBQ qname [;limit=n]
PURGEJOBQ qname
LISTJOBQ

The commands LIMIT, ALTJOB, JOB and STREAM all include the parameter ;JOBQ=.

As an example, I am going to create a new job queue called SHOWTIME that has a job limit of 1. You will notice the job card of the sample job has a JOBQ parameter at the end to specify which queue it will execute in.
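A sketch of that sequence follows. The job file contents and the MANAGER.SYS logon are placeholders; NEWJOBQ and the JOBQ= keyword are as described above:

```
:newjobq showtime;limit=1

!JOB TIMEJOB,MANAGER.SYS;JOBQ=SHOWTIME
!SHOWTIME
!EOJ

:stream timejob
```

With the queue's limit at 1, a second job streamed into SHOWTIME should wait until the first finishes, while jobs in other queues run unaffected.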

Continue reading "Queue up those 3000 jobs with MPE tools" »


SFTP and the points where transfers may fail

RFC-transfer-card-cover
Earlier in August a 3000 manager who relies on the Stromasys virtualized 3000 was searching for failures. Well, he was asking about the causes of failures. He wanted to know more about failures of SFTP transfers on his MPE/iX system. (We'd call it a 3000, but there's no more HP iron at Ray Legault's shop.) He gave the rundown on the problems with MPE/iX.

We send about 40 files each day, most of these in the early morning. Sometimes we would have zero to five connection failures each morning. I noticed that these failures seem to occur when two SFTP jobs ran at the same minute. I then added a "JOBQ=FINLOG" to the job card of every SFTP job I had and set the job limit to 1. This was two weeks ago and we have not had a failure yet.

Brian Edminster, who still hosts open source software for MPE/iX, checked in to offer an answer to why those SFTP jobs were failing.

I'd be willing to bet that Ray's issue at Boeing with SFTP connect failures is due to the Entropy Generator running dry. Connections take lots of entropy data — and the one that comes 'out of the box' with the SFTP client doesn't generate very much without some modifications.

If you need to make more than one connection a minute (job limit 1), this modification will likely become necessary. Let me know if you'd like some pointers on how to do this. It will require some revisions to the SFTP software. The Entropy Gathering Daemon which Mark Klein's SFTP port uses is written in Perl. It is not terribly difficult to modify to include new data sources to "stir into the pool" that is drawn from by the SFTP client.

Edminster's MPE-Opensource.org website has an SFTP quickstart bundle of all packages required to install OpenSSH on MPE/iX including SFTP, scp, and keygen.


Kept Promises for Open Source on MPE/iX

Opensource
Open source software developed a reputation for keeping HP 3000s online and productive, even in the face of industry requirement changes and new government regulations. Applied Technologies founder Brian Edminster has shared reports of a 3000 installation processing Point of Sale transactions, a customer which faced new PCI compliance demands. He was tasked with finding a solution to the new credit card compliance rules late in one December — with a January deadline.

“What we were struggling with was not that uncommon,” he explained. “The solution of choice was a version of the package OpenSSH, an open source implementation of a secure shell.”

OpenSSH offers publicly exchanged authentication, encrypted communication for secure file transfers, a secure shell command line, and port forwarding. “It’s amazing how much you get," Edminster said, "and it’s available for many operating systems.” He's got a website devoted to the open source tools for the 3000.

Continue reading "Kept Promises for Open Source on MPE/iX" »


Following Job Lines in Emulated 3000 Life

Queueing
The Stromasys Charon software has become a fact of life in the homesteading community this year, after almost six years of field service. Lately the emulator's users have been offering insights on how they're using their servers.

It's a lot like any HP 3000 has been used for the last 44 years, in some ways. Transferring files. Queueing up jobs. A few of the emulators shared their advisories not long ago.

Ray Legault at Boeing talked about his experiences with file transfers, especially an SFTP client and the SFTP "Connection refused" errors. As the Charon developers like to say, if the MPE/iX software behaves the same on the emulator as it does on 3000 hardware, even if MPE registers an error, then Charon is doing its faithful emulation job.

"We are running on a Stromasys Charon A500-200 and an A500-100 virtual machine, which execute on an HP ProLiant DL380 Gen8 3.59 GHz CPU with 6 cores and 64 gig of memory," Legault said.

We send about 40 files each day, most of these in the early morning. Sometimes we would have zero to five connection failures each morning. I noticed that these failures seem to occur when two SFTP jobs ran at the same minute. I then added a "JOBQ=FINLOG" to the job card of every SFTP job I had and set the job limit to 1. This was two weeks ago and we have not had a failure yet.

Another emulator user, Tony Summers of Smith & Williamson in the UK, shared queueing advice and a massive job checker (HOWMANY) that's working well for him.

Continue reading "Following Job Lines in Emulated 3000 Life" »


Nike Arrays 101

Hard-Drive
Just a few weeks ago, a 3000 manager using an A-Class server checked in on how he might connect the SC-10 arrays from Hewlett-Packard to his A500. As a West Coast service provider carried the manager toward that hardware (it can be done), it seems like a good time to review the use of storage arrays with MPE/iX systems.

Our founding net.digest editor John Burke covered this ground in the years after HP announced it was cutting off its 3000 operations. While the HP label is still anathema to some, the hardware prices are sometimes too compelling. Here's Nike Arrays 101, advice still worthy on the day you're moving around arrays connected to a 3000.

By John Burke
NewsWire Classic

Many 3000 homesteaders are picking up used HP Nike Model 20 disk arrays. The interest comes from the fact that there is a glut of these devices on the market — meaning they are inexpensive — and they work with older models of HP 3000s. However, there is a lot of misinformation floating around about how and when to use them. For example, one company posted the following to 3000-L:

We’re upgrading from a Model 10 to a Model 20 Nike array. I’m in the middle of deciding whether to keep it in hardware RAID configuration or to switch to MPE/iX mirroring, since I can now do it on the system volume set. It wasn’t in place when the system was first bought, so we stayed with the Nike hardware RAID. We’re considering the performance issue of keeping it Nike hardware RAID versus the safety of MPE Mirroring. You can use the 2nd Fast-Wide card on the array when using MPE mirroring, but you can’t when using Model 20 hardware RAID.

So, with hardware RAID, you have to consider the single point of failure of the controller card. If we ‘split the bus’ on the array mechanism into two separate groups of drives, and then connect a separate controller to the other half of the bus, you can’t have the hardware mirrored drive on the other controller. It must be on the same path as the ‘master’ drive because MPE sees them as a single device.

Using software mirroring you can do this because both drives are independently configured in MPE. Software mirroring adds overhead to the CPU, but it’s a tradeoff you have to decide to make. We are evaluating the options, looking for the best (in our situation) combination of efficiency, performance, fault tolerance and cost.

First of all, as a number of people pointed out, Mirrored Disk/iX does not support mirroring of the System Volume Set – never did and never will. Secondly, you most certainly can use a second FWSCSI card with a Model 20 attached to an HP 3000.

Continue reading "Nike Arrays 101" »


HPCALENDAR joins 3000 intrinsics hits

NewsWire Classic

Greatest-Hits
Twenty years ago HP took steps forward, into the realm beyond 2028, when it released a set of COBOL-related MPE/iX intrinsics. The community is now looking into the next decade and seeing a possibility of hurdling the Dec. 31, 2027 date handling roadblock. In this Inside COBOL column from the late 1990s, Shawn Gordon took readers on a quick tour of the new intrinsics — new to 1998, at least — that would make the 3000 easier to program for the future. He even wrote a sample program employing the improved data handling.

In 2018 the information might seem more history lesson than operational instruction guide. But when a long-running mission critical app needs repairs, knowing the full set of date capabilities might help. Gordon even mentions that using the official intrinsics will help maintain programs written 20 years earlier. Enough time has passed by now that any new programs at the time of the article would be 20 years old.

3000 managers have always had a sharp focus on coding for long life of applications. 

By Shawn Gordon

Since Year 2000 is rapidly approaching, I'll review the date intrinsics that HP gave us in MPE/iX 5.5 starting with PowerPatch 4.

As I've done a lot of Y2K consulting, it seems everyone has written their own date routines. Most I have seen will break by Y2K. My goal in my consulting was to implement an HP-supplied solution, making it easier to support YYMMDD as well as YYYYMMDD date functions during the conversion process.

My only negative comment about these intrinsics is that I wish they had been created with the introduction of the Spectrum series of HP 3000s (PA-RISC systems). I could have used them then, too.

Six new intrinsics are available. All of the parameters for all new intrinsics are now 32-bit. This means they will work for as long as anyone reading this will ever care. I feel it’s important to standardize on these new HP-supplied intrinsics. They will make it a lot safer than trying to maintain some piece of code that was probably written 20 years ago. With code that old, it’s likely that nobody remembers how it works.

Here’s the lineup of intrinsics:

1. HPDATECONVERT: converts dates from one supported format to another 
2. HPDATEFORMAT: converts a date into a display type (I usually use this instead of HPDATECONVERT)
3. HPDATEDIFF: returns the number of days between two given dates
4. HPDATEOFFSET: returns a date that is plus or minus the number of days from the source date
5. HPDATEVALIDATE: verifies that the date conforms to a supported date format
6. A new 32-bit HPCALENDAR format (HPCALENDAR, HPFMTCALENDAR).

Continue reading "HPCALENDAR joins 3000 intrinsics hits" »


Worst Practices: Shouldn't Happen to a Dog

By Scott Hirsh

Chaplin-A-Dogs-Life
There is a saying in Washington about Washington: “If you want a friend, get a dog.” Ha! We system managers should be so lucky. We can’t even be our own best friend.

It’s sad but true: we system managers won’t cut ourselves any slack. We repeatedly put ourselves in jeopardy, often making the same mistakes time after time. We even break all the rules we impose on others. Don’t believe me? See if you recognize any of these examples.

1. Hand crafted system management

Ah yes, the good old days. Peace, love and tear gas (I never inhaled). But here’s a news flash, sunshine: for system managers, the ’60s are dead. Predictable, repeatable tasks can and should be automated. If you can script it, you can schedule it. And if you can schedule it, you can automate it. So what are you waiting for? Do you like (take your pick): streaming jobs by hand; adjusting fences and priorities by hand; reading $STDLISTs; staring at the console waiting for that one important message? For this you went to college?

And yet, we (or our management) come up with lots of lame excuses for running a stone-age operation. Can’t afford the automation products, don’t trust automation, can’t trap every error, blah blah blah. Those excuses may fly when you’re small, but suddenly you have more systems, bigger systems, and manual management turns your shop into burn-out central. Now there are turnover costs, downtime costs, opportunity costs.

Oh, and by the way, it’s much more expensive to implement automated management in a large, busy environment than it is to grow automated management from a smaller environment. Perhaps some of us are just adrenaline junkies, or we fear not being needed. Get over it and automate already.

2. The disappearing act

A close personal friend of mine — okay, it was me — once made a change to Security/3000’s SECURCON file, then left for an all-day meeting about 40 miles away. Guess what? None of the application users could log on after my change. Way back then, my pager almost vibrated off my belt from that one. And it made for some interesting meetings when I got back.

I have seen lots of cases where a system manager made a configuration change, installed a patch, or fussed with SYSSTART or UDCs, then immediately went home. Big mistake. If you’re lucky, you live near your data center and can zip right back to repair the carnage that was discovered right away. If you’re not lucky, first you don’t discover your mistake until the worst possible moment — say, around the heaviest usage period the next day — and then you’re forced to take the system down to fix the problem. Ouch.

3. A lack of planning on my part does constitute an emergency on your part

A variation on No. 1. We are the eternal optimists. No matter how invasive the procedure, everything will work out perfectly, right? How many PowerPatches must we install before we realize we must leave adequate time for testing the patched system and perhaps back that sucka out? No really, this time HP (or your favorite vendor) has learned from past mistakes and has a bullet-proof update. No need to leave a cushion for collateral damage. Right.

Every decent system administration book offers the same advice: Don’t do anything you can’t undo. Make a backup copy of whatever you’re changing. Keep track of the steps you followed. Be prepared to back out whatever you’re doing. Because that contingency time can inflate your update schedule by hours, it’s unlikely you can safely make a system change at any time other than weekends or holidays. 

4. I’ve got a secret

You make changes but don’t tell anyone about them. Let’s be charitable and say your changes worked as planned. Unfortunately, nobody knew you were going to make the change. I have seen a change as innocuous as modifying the system prompt have unintended consequences (Reflection scripts looked for the old prompt and now wouldn’t work). The term “system” implies interrelationships. Anything we do has a ripple effect. When we don’t tell others that we’re about to make a change — “they wouldn’t let me do it if I told them!” — we don’t do ourselves any favors. I would love to hear other war stories under this category (hint, hint).

Continue reading "Worst Practices: Shouldn't Happen to a Dog" »