
March 2020

MPE file equations and Unix equivalents

HP 3000s running MPE employ a unique tool to define the attributes of a file: the file equation, a 3000 specialty. Robelle describes file equations as "commands that redefine the attributes of a file, including perhaps the actual filename."

In any migration away from HP 3000s (an ill-advised move at the moment, considering the COVID-19 crisis), managers must ensure they don't lose functionality. Unix has no file equations. Customers need to learn how to make Unix's symbolic links report the kind of information that 3000s deliver from a LISTEQ command.
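
For readers who know only one of the two environments, a rough parallel (our illustration, not Robelle's definition) is that an MPE file equation redirects a formal file designator for the life of a job or session, while a Unix symbolic link redirects a path name on disk:

:FILE SALES=SALES20.DATA.PROD

ln -s /prod/data/sales20 sales

LISTEQ reports the first; the find commands below report the second.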

3000 managers are used to checking file equations when something mysterious happens with an MPE file. Dave Oksner of 3000 application vendor Computer And Software Enterprises (CASE) offered the Unix find command as a substitute for file equations. You need to tell find to only process files of type "symbolic link."

Oksner's example of substituting find for LISTEQ:

find /tmp/ -type l -exec ls -l {} \;

which would start from the /tmp directory, look for symbolic links, and execute “ls -l” on the filenames it finds. You could, of course, eliminate the last part if you only wanted to know what the filenames were and get

find /tmp/ -type l

(I believe it’s the same as using ‘-print’ instead of ‘-exec [command]’)

Beware of output to stderr (if you don’t have permission to read a directory, you’ll get errors) getting interspersed.
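
If those permission errors clutter the listing, one common shell remedy (a standard Unix technique, not part of Oksner's post) is to send the error stream somewhere else:

find /tmp/ -type l -exec ls -l {} \; 2>/dev/null

The 2>/dev/null discards stderr, so only the symbolic link listing reaches your screen; use 2>errors.log instead if you want to review the errors later.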

Jeff Vance added that the command interpreter in MPE can also deliver this file information, through its LISTFILE command:

Don't forget the CI where you can do:

:listfile @,2;seleq=[object=symlink]

:help listfile all shows other options.

Our former Inside COBOL columnist and product reviewer Shawn Gordon offers his own MPE vs. Unix paper, and Robelle's experts wrote a NewsWire column contrasting Unix shell scripts with MPE tools.

Image by Gerd Altmann from Pixabay


TE turnoff date for 3000s could be 2021

For a long time, TE Connectivity has been one of the biggest users of MANMAN software running on HP 3000s. That might be on the way to a permanent quieting — sometime next year.

Terry Simpkins at the firm was patching a Series 918 not long ago. It gave him a chance to check in with the remaining 371 subscribers to the 3000 mailing list, once he got the know-how he needed to patch.

"The force is once again calm," he said. Simpkins found a converter that allowed him to replace his old single-ended 2 Gb disk drives with the newer 36 Gb LVD drives. "I now have more disk space than my little 918 will ever need, plus a few spare drives to ensure I’ll never have a disk fail. Now to dust off the old CSL tapes and see what I want to restore."

TE measures its 3000 footprint by the number of databases online. "We’re down to four active MANMAN databases, from a high of 22. Three will convert to SAP at the end of June, so the last five-plus months will be a single MANMAN DB in Germany. I suspect we are going to be extremely bored at that point."

As to the shutdown date of 3000 operations, Simpkins said, "Right now that looks like somewhere between November 2020 and March 2021." 

Photo by twinsfisch on Unsplash


Preparing for a new kind of disaster

A virus that kills in record numbers is circling the globe. Some who are felled by the disease might be 70 or older. That's the same age range as several HP 3000 experts who still support MPE computing. It might be a disaster if there's no one left at a company who knows details about a 3000 that's still running.

Avoiding that full stop is the business of disaster recovery. Add global fatalities to this list from 3000 consultant Paul Edwards's 2004 disaster recovery white paper. Sixteen years ago he ticked off a big list. "The top ten types of disasters, which have caused the most damage in recent years, are power outage, storm damage, flood, hardware error/failure, bombing, hurricane, fire, software error, power surge/spike, and earthquake."

The full paper remains on Edwards' website at Paul Edwards & Associates. Although, as Edwards notes, the paper doesn't go into the details of writing a disaster recovery plan, it discusses the main points to consider. Another Edwards paper, Homesteading: Plan for the Future, details what should be in a good Systems Manager Notebook. 

Every site should have one because it contains critical hardcopy information to back up the information contained in the system. It is part of the Disaster Recovery Plan that should be in place and is used to manually recreate your environment. You can’t have too much information inside it.

Twenty years ago this month, the 3000 community was already experienced at recovery from the disaster of losing a key staffer.

At that time, a 3000 manager read about a Florida site "where the system manager passed away without much notice. It sounds like documentation is pretty important in that kind of crisis. What do you recommend as a minimum?"

Paul Edwards replied:

The contents of a System Manager Notebook include hardware and software information that is vital to recovering your system in any type of disaster. The rest of the company’s business operating procedures must be combined with the IS plan to form a comprehensive corporate disaster recovery contingency plan.

The Notebook contains hardware model and serial numbers; license agreements for all software and hardware; a copy of all current maintenance agreements, equipment warranty information, complete applications documentation of program logic; data file layouts and system interaction, along with system operator run books and any other appropriate documentation. There is a wealth of information contained in each HP 3000 that can be printed and stored offsite that is critical to a recovery effort.

Image by Hans Rohmann from Pixabay


Making CI variables more Unix-like

By John Burke

Newswire Classic

How can you make CI variables behave more Unix-like?

For those of us who grew up on plain old MPE, CI variables were a godsend. We were so caught up in the excitement of what we could do with CI variables and command files, it took most of us a while to realize the inadequacy of the implementation. For those coming to MPE/iX from a Unix perspective, CI variables seem woefully inadequate. There were two separate questions from people with such a Unix perspective that highlighted different “problems” with the implementation of CI variables.

But first, how do CI variables work in MPE/iX? Tom Emerson gave a good, concise explanation.

“SETVAR is the MPE/iX command for setting a job/session (local) variable. I use ‘local’ somewhat loosely here because these variables are ‘global’ to your entire job or session and, by extension, are automatically available to any sub-processes within your process tree. There are some more-or-less ‘global’ variables, better known as SYSTEM variables, such as HPSUSAN, HPCPU, etc.”
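
As a quick illustration of that job/session scope (our example, not Emerson's), a variable set at the CI prompt is visible to everything started later in the same session:

:SETVAR GREETING "hello"

:ECHO !GREETING

hello

Any command file, UDC, or child process launched from this session sees GREETING as well; a separate session or job does not.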

The first questioner was looking for something like user-definable system variables that could be used to pass information among separate jobs/sessions. Unfortunately, no such animal exists. At least not yet, and probably not for some time if ever.

There is, however, a workaround in the form of UDCs created by John Krussel that implement system, account and user-level variables. The UDCs make use of the hierarchical file system (HFS) to create and maintain “variables.”
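
A rough sketch of the underlying idea (our illustration, not Krussel's actual UDC code, and assuming a hypothetical directory such as /SYSVARS has already been created with NEWDIR, plus the availability of the CI's I/O redirection): the value of a "system variable" lives in a file that any job or session can read back into an ordinary CI variable:

:ECHO 42 > /SYSVARS/MAXUSERS

:INPUT MAXUSERS < /SYSVARS/MAXUSERS

Krussel's UDCs build on this same HFS approach to create and maintain system-, account-, and user-level "variables."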

The second questioner was looking for something comparable to shell variables which are not automatically available at all levels. You have to export shell variables for them to be available at lower levels. Thus, there is a certain locality to shell variables.
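
For comparison, the behavior this questioner expected looks like the following in a generic Unix shell (our example):

VAR1="parent only"

export VAR2="parent and children"

sh -c 'echo "VAR1=$VAR1 VAR2=$VAR2"'

The child shell started by sh -c prints an empty VAR1 and the exported VAR2, because only exported variables cross the process boundary.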

It was at this point that Jeff Vance, HP’s principal CI Architect at CSY, noted that he had worked on a project to provide both true system-wide and local CI variables (in fact, the design was complete and some coding was done). Jeff offered a suggestion for achieving locality.

Variable names can be constructed using CI depth, PIN, etc., to try to ensure uniqueness. E.g.,

setvar foo!hppin value

setvar foo!hpcidepth value1

Mark Bixby noted that CI variables are always job/session in scope, while shell variables are local, but inherited by children if the variables have been exported. He suggested that, when working in the CI, some level of locality could be achieved by "making your CI script use unique variable names. If I'm writing a CI script called FOO, all of my variable references will start with FOO also, i.e.

SETVAR FOO_XX "temp value"

SETVAR FOO_YY "another value"

...

DELETEVAR FOO_@

“That way FOO’s variables won’t conflict with any variables in any parent scripts.”

HP has a formally documented recommendation for creating “local-ness.” 

MPE: How to create CI variables with local (command file) scope

Problem Description: I have separate command files that use the same variable names in them. If one of the command files calls the other, then they both affect the same global variable with undesirable results. Is there the concept of a CI variable with its scope local to the command file?

Solution: No. All user-defined CI variables have global (job/session) scope. Some HP-defined CI variables (HPFILE, HPPIN, HPUSERCMDEPTH) return a different value depending on the context within the job/session when they are referenced.

HPFILE returns the fully qualified filename of the command file.

HPPIN returns the PIN of the calling process.

HPUSERCMDEPTH returns the command file call nesting level.

To get the effect of local scope using global variables, you need a naming convention to prevent name collisions. There are several cases to consider.

Case 1: Command file CMDA calls CMDB, both using varname VAR1.

• Use a hardcode prefix in each command file.

In CMDA use: SETVAR CMDA_VAR1 1

In CMDB use: SETVAR CMDB_VAR1 2

• Use HPFILE.

SETVAR ![FINFO(HPFILE,"FNAME")]_VAR1 1

• Use HPUSERCMDEPTH.

SETVAR D!"HPUSERCMDEPTH"_VAR1 1 (Note: the leading non-digit is needed because a variable name cannot begin with a digit)

Case 2: Command file CMDA calls itself, using varname VAR1.

• Same answer as case 1, the third solution: use HPUSERCMDEPTH.

Case 3: There are two son processes. Each one calls CMDA, which calls CMDB at the same nesting level.

• Same answer as case 1, the third solution: use HPUSERCMDEPTH. It is not certain this will work, because it is not clear whether HPUSERCMDEPTH is reset at the JSMAIN, CI, or user-process level.

• Use HPPIN and HPUSERCMDEPTH.

SETVAR P!"HPPIN"_!"HPUSERCMDEPTH"_VAR1 1

• Use HPPIN, HPUSERCMDEPTH, and HPFILE (guaranteed unique, but hard to read).

SETVAR P!"HPPIN"_!"HPUSERCMDEPTH"_![FINFO(HPFILE,"FNAME")]_![FINFO(HPFILE,"GROUP")]_![FINFO(HPFILE,"ACCT")]_VAR1 1

Again, there is no true local scope, only global scope for CI variables within any one session/job. The techniques presented above do provide at least a reasonable workaround for both system-wide and process-local variables.
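
As a closing illustration (a minimal sketch of our own, not from the HP document), here is how the hardcoded-prefix convention from case 1 keeps a caller and its callee from colliding, even though every variable still lives in the single job/session scope. CMDA might contain:

COMMENT CMDA -- sets its own prefixed variable, then calls CMDB

SETVAR CMDA_VAR1 1

XEQ CMDB

ECHO CMDA_VAR1 is still !CMDA_VAR1

DELETEVAR CMDA_@

while CMDB contains:

COMMENT CMDB -- its own prefix means it never touches CMDA_VAR1

SETVAR CMDB_VAR1 2

ECHO CMDB_VAR1 is !CMDB_VAR1

DELETEVAR CMDB_@

Each command file cleans up only its own prefix with DELETEVAR before it exits, so nothing leaks into or clobbers the caller's variables.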


Deep pockets? Maybe not for MPE positions

Even in the earliest days of 2020, consultants and programmers are hunting down chances to earn money servicing the 3000. When Doug Hagy looked into joining the LinkedIn HP 3000 Community, he wanted to see if the group was a source of related work opportunities. "I developed on the HP 3000 continuously from 1981 to 1999," he said. "At its peak of popularity, it was a pretty solid platform. Companies who chose HP 3000 usually had deep pockets."

Hagy isn't altogether wrong. An HP 3000 investment can be traced to Fortune 100 corporations like Boeing, or to a part of the L'Oreal beauty empire. It's far more likely, though, to see an MPE/iX server running at a place like a Texas title insurance company, or a manufacturer of saw blades.

We had to reply that we didn't know of new work opportunities for 3000 experts. Certainly, his 18 years of development experience qualifies Hagy as one of those. From time to time, opportunities surface in places like the 3000-L mailing list. Fresche Legacy has a stable of 3000 people who help in migrations as well as perform some system maintenance for 3000s.

FM Global was looking to hire a Powerhouse developer on a contract basis in January. Pay was $50 an hour for the job in Rhode Island, on a six-month set of contracts "extended for years to come." The company even advised applicants of a $62 a night rate it had for contractors' lodging at Extended Stay America. It didn't look like the lodging was fully compensated, but there was a $23 a day per diem.

Just this week we heard from Birket Foster, whose MB Foster firm is still assisting in migrations of data from 3000s. Some of those are contract-driven, while others are project-specific engagements.

LinkedIn has a 3000 Community with 674 members, which makes it twice the size of today's 3000-L membership list. LinkedIn's groups used to have an attached Jobs feature, but by now the Jobs are spread across all of the site's resources. With that said, a Programmer/Analyst post for the City of Lawton, Oklahoma, was listed in November. MPE/iX was among the requirements.

Hagy was enthusiastic. "If I found someone wishing to migrate an app from a 3000, that could be interesting. IMAGE, V3000, COBOL could be an interesting project. Lots was developed for this platform."

The deep pockets are mostly gone from 3000 enterprises. Migrating IMAGE, VPlus, and COBOL II was a project for the previous decade. Companies are now migrating data for use with cloud computing and alternatives involving Linux. Archival 3000 systems are still running, and others are managing production on a timeframe that gives their companies time to migrate.

Hagy, who operates Twin Lakes Consulting, "a nimble micro business" based in Greensburg, PA, says he last touched MPE around 1999 or so. "I was doing Y2K prep and providing backfill support for businesses moving to new platforms," he said. "I wasn't thinking there'd be any HP 3000 action out there now, until I saw the group you moderate. I'd make time to assist if someone wanted to port a novel app from MPE and needed someone to dissect its inner workings. Before MPE I did RTE work on HP 1000s."

Some companies will need a programmer or consultant whose experience goes back to the days of real-time systems with HP badges on the front of the server. They emerge from the shadows of an era when reliable service on an unhackable server that simply worked could be enough.

Image by ds_30 from Pixabay


3000-L newsgroup heads for new future

IT staff at the University of Tennessee at Chattanooga in 2014, switching off their last HP 3000

The manager of the university datacenter that hosts the 3000-L mailing list and newsgroup has told members the list will be moved in some way during the months to come. Without sharing a timeline for the changes, technical director Greg Jackson of the University of Tennessee at Chattanooga (UTC) Information Technology said, "UTC will stop support of the list in its present form, as we move to a different delivery method."

"In the fall [of 2019]," Jackson said, "I contacted the moderator of the hp3000 list and let them know that at some undetermined point in the future, UTC will stop supporting the listserv in its present form as we move to a different delivery method. Since this list is still active, when the time comes, we will work with the group to ensure a smooth transition."

News of the movement and rehosting of the biggest archive of 3000 community messaging surfaced after UK users couldn't access the archives. Robert Mills said that when he contacted Jackson about being locked out of the archives, he was told "Over the past few months, there have been several attacks on the listserv that originated from IP addresses outside the US. Therefore, we decided to restrict the ListServ to the US only."

The list's membership count stands at 371. Donna Hofmeister, a support engineer at Allegro and one of the list's moderators, said the community should decide now how the message service and 3000 knowledge archive can move forward.

"Due to the looming changes at UTC," she said in a message, "hp3000-l needs to do something." The archives of the list, which date back to 1994, will "somehow be made available for searching."

"As one of the said moderators," she added, "I think it's only appropriate to ask — do we want hp3000-l (in whatever form it might take) to continue? The amount of traffic (which is around five messages/month) makes it a question that should be asked."

Hofmeister said that "having access to the archives has real value. So my plans are to <somehow> make the archives available for searching. So what do you say? Keep HP3000-L active in some form (I'm leaning towards making it a Google Group) or let it go away when UTC takes down the listserv? In either case, the archive will be available."

Fourteen list members responded quickly to vote for keeping the list alive. Two alternatives emerged as options for when the current UTC hosting, which runs on LISTSERV software, ends. Rehosting on groups.io was suggested by Tracy Johnson, while Keven Miller pointed to the free Lite edition of the LISTSERV software.

"The Lsoft Lite free version supports up to 10 lists, 500 members each," Miller said. "There are a few other lists [whose UTC] archives might be nice [to preserve]. HP9000-L, OpenMPE, and maybe a few hidden lists. I would think that Lsoft Lite would be the easiest to move archives to. But I'm sure there are other open source solutions."


Large Disk patch delivers 3000 jumbo limits

As the HP 3000 was winding its way out of the HP product lineup, it gained a greater storage footprint. Storage capabilities grew with the rollout of Large Disk. The effort was undertaken because HP's disk module sizes were doubling approximately every two years: 4 GB to 9, 18, 36, 73, 146, and then 300 GB.

The disk project might never have seen its limited release without OpenMPE. The advocacy group, formed after HP's exit announcement, saw the same disk size trend. OpenMPE drove the initiative of "Support future large disks" in the Interex 2003 Systems Improvement Ballot.

Just two years later, Interex was dead. The directive from the 3000 community to HP's labs lived on, though. HP said its investigation found that more work needed to be done to support large disk configurations.

The MPE/iX 6.5 Large File enhancement allowed bigger files, and 6.5 also permitted more disk space in each MPE group and account. But several CI commands and utilities were limited in their ability to work with the resulting larger groups and accounts. All of these inputs were assessed during the Large Disk investigation, and as many as possible were addressed by the Large Disk patches.

So what does Large Disk deliver? The patches provide the following enhancements for MPE/iX 7.5:

• Large Disk includes the ability to attach, and to configure in SYSGEN, any size of SCSI-2-compliant disk. MPE/iX uses the SCSI-2 protocol to connect to SE, HVD, and LVD SCSI disks, as well as to SCSI over Fibre Channel. The SCSI-2 standard allows for disks of up to 2 terabytes. SCSI-3 disks may be larger, but they will report only up to 2 terabytes of storage in response to SCSI-2 format inquiries.

• Large Disk includes the ability to initialize an MPE/iX disk volume of up to 512 GB on SCSI-2-compliant disks. SCSI-2 disks larger than 512 GB are truncated at the 512 GB limit. No matter how big the disk, HP reported, the space beyond 512 GB will not be usable by MPE/iX or any applications.

There are limits to how much of each large disk is usable. An MPE/iX disk volume includes disk-resident OS data structures that consume some space, so no more than 511 GB of user file space should be expected per volume.

• Large Disk includes a number of opportunistic enhancements to MPE Command Interpreter commands and utility programs to 'smooth' the user experience when dealing with large disks, large groups, and large accounts. These commands and utilities are REPORT, :[ALT|LIST|NEW][GROUP|ACCT], FSCHECK, and DISCFREE.

HP strongly advises installing all of these patches at the same time using Patch/iX. The Large Disk Patches are:

• MPEMXX8(A) -- FSCHECK.MPEXL.TELESUP
• MPEMXU3(B) -- [ALT|LIST|NEW][ACCT|GROUP]
• MPEMXT3 -- SCSI Disk Driver Update
• MPEMXT4 -- SSM Optimization (>87 GB)
• MPEMXT7 -- DISCFREE.PUB.SYS
• MPEMXU3 -- REPORT
• MPEMXV2(A) -- CATALOG.PUB.SYS
• MPEMXW9(A) -- CIERR.PUB.SYS, CICATERR.PUB.SYS

Image by pixel1 from Pixabay