March 17, 2015

Tips to Reinstall Posix, DLT/LTO Tape Drives

What is the patch that installs Posix? I seem to have a corrupt version of Posix.

Donna Garverick of Allegro replies:

These are your instructions for MPE/iX 5.5 and 6.0.

Load the 5.5 or 6.0 FOS tape on a tape drive. For this example, the tape drive on ldev 7 is used. Log on as MANAGER.SYS.


Please note:

  • HP36431 is the master product number of the Posix 2 Shell.
  • I0036431.USL.SYS is the installation file.
  • When launched, the job I0036431 should run for less than 5 minutes. When it is done, the Posix environment is re-installed.
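The restore-and-stream sequence itself isn't shown above. Based on the names given (installation file I0036431.USL.SYS, job I0036431, tape on ldev 7), it would look something like the following sketch — verify the exact commands against your install documentation before relying on them:

```
:FILE T;DEV=7
:RESTORE *T; I0036431.USL.SYS; SHOW
:STREAM I0036431.USL.SYS
```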

[Gilles Schipper notes the process for 7.5 is the same, working from the MPE/iX 7.5 FOS tape.]

I have access to a Tandberg Data Ultrium LTO 3 tape drive. It has a SCSI Ultra160 interface. Would I have any luck hooking one up to an N-Class?

Chad Lester of MPE Support Group replies:

It's worth trying. You might have issues with the dual-port SCSI cards. Also, make sure the firmware is the latest on the single SCSI U160 card.

We recently upgraded our customers to the hot-swappable LTO drives designed for the TA-5300 Array. The array is $350 with a SCSI cable. Two Q1540A LTO 3s are $1,350, for a grand total of $1,700. That includes phone support from us for installation.

I have a DLT4000 that will connect to an HP 3000 on path 32.2.0. How do I set the 12 dip switches on the back of the DLT for this path?

Mark Ranft of Pro3K replies:

Off of Google, I’ve found this:

For HP 7980S emulation, 

     1,2 - OFF

     3,4,5 - ON

     6,7,8,9 - OFF

     10,11,12 - SCSI ID (suggest all OFF)

So 10 is the 4's place
And 11 is the 2's place
And 12 is the 1's place

If 10, 11 and 12 are off (down), the SCSI ID will be zero.
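In other words, switches 10 through 12 form a three-bit binary number (ON = 1), with switch 10 as the most significant bit. A quick sketch of that arithmetic in Python (the function name and boolean switch arguments are illustrative):

```python
def scsi_id(sw10: bool, sw11: bool, sw12: bool) -> int:
    """Compute the SCSI ID from dip switches 10-12 (True = ON).
    Switch 10 is the 4's place, 11 the 2's place, 12 the 1's place."""
    return (int(sw10) << 2) | (int(sw11) << 1) | int(sw12)

print(scsi_id(False, False, False))  # all OFF -> ID 0
print(scsi_id(True, False, True))    # 10 and 12 ON -> ID 5
```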

Gilles Schipper of GSA adds:

I believe there are two different versions of DLT4000. One has a single-ended SCSI interface and the other has a FW-SCSI interface. I think the one Mark described is for the former.

Posted by Ron Seybold at 05:18 PM in Hidden Value, Homesteading | Permalink | Comments (0)


February 16, 2015

Classic MPE tips: Tar, kills, and job advice

How do I use the tar utility to put data onto tape on an HP 3000?

1) Create a tape node

:MKNOD "/dev/tape c 0 7"

2) Enter posix shell

:SH -L

3) Mount a blank tape and enter the tar command

shell/ix>tar -cvf /dev/tape /ACCOUNT/GROUP/FILENAME
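To confirm what went onto the tape, tar's -t (table of contents) flag reads the archive back. A quick sketch, run here against a disk file rather than /dev/tape so it can be tried anywhere (file names are illustrative):

```shell
# Create a small archive, then list its contents with -t.
printf 'hello\n' > /tmp/demo.txt
tar -cf /tmp/demo.tar -C /tmp demo.txt
tar -tvf /tmp/demo.tar    # on the 3000: tar -tvf /dev/tape
```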

How can I determine the validity of an SLT tape?

You can use the CHECKSLT utility (CHECKSLT.MPEXL.TELESUP), which reads the tape and verifies that it is a valid, error-free SLT.

What is the command to abort a hung session? I tried ABORTJOB #s3456. I seem to remember there is a command that will do more.

You can use =SHUTDOWN. But seriously: if it is a network connection, there is a chance that NSCONTROL KILLSESS=#S3456 will work. If it is a serial DTC connection, ABORTIO on the LDEV should work. Finally, depending upon what level of the OS you are on, look into the ABORTPROC command. It might help as a last resort.

I recently had a perfect application for the NEWJOBQ feature. We have two groups of users. One group submits jobs that take about 30 seconds each. Typical jobs for the other group take about 5 minutes each. So I thought I’d give the second group of users their own job queue.


When I submit a long job into the ALTJOBQ queue, and a quick job into the default job queue, the second job goes into the WAIT state. Why?

Your NEWJOBQ statement is correct, but your second statement didn't do what you thought. To put a limit of one on the HPSYSJQ job queue, your statement should read :LIMIT 1;JOBQ=HPSYSJQ.

By saying :LIMIT 1, you are changing the total job limit on the system to one. Since the total limit is one, and the long job in ALTJOBQ is still running, the second job waits even though he is the only job in his queue.
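To make the difference concrete, here is a sketch of the two statements (the queue name ALTJOBQ comes from the question; the limits are illustrative):

```
:NEWJOBQ ALTJOBQ;LIMIT=1
:LIMIT 1;JOBQ=HPSYSJQ
```

By contrast, a bare :LIMIT 1 caps the total job limit for the whole system, which is what caused the WAIT state described above.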

What does HPSWINFO.PUB.SYS show? All software or only installed software? How do I find out what HP software is installed?

Generally speaking, HPSWINFO.PUB.SYS is a record of system software levels and patching activity. If you want information on which HP software is actually installed, run psswinvp.

How can I sync the time on my 3000 with my Windows network? The PC side does regular, automated sync to NIST.

First, ensure your timezone is absolutely correct (:setclock/:showclock) and you have a system logon UDC to setvar TZ to the correct timezone.

Install NTP and use the ‘ntpdate’ function to sync your clock to the PC servers. Do this in a batch job that issues the ntpdate command, and then :STREAMs itself;IN=xx to periodically perform the synchronization.
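A minimal sketch of such a self-streaming job follows. The ntpdate path, the server name, and the 1440-minute (24-hour) interval are all assumptions to adapt to your own NTP installation:

```
!JOB TIMESYNC,MANAGER.SYS
!COMMENT Paths and server name are examples; adjust to your NTP install.
!RUN SH.HPBIN.SYS;INFO='-c "/usr/local/bin/ntpdate timeserver.example.com"'
!STREAM TIMESYNC.JOB.SYS;IN=1440
!EOJ
```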

Posted by Ron Seybold at 11:15 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 10, 2015

Multiple Parallel Tapes on 3000 Backups

Editor's note: When I saw a request this week for a copy of HP patch MPEMX85A (a patch to STORE that enables Store-To-Disk) for older MPE/iX releases, it brought a storage procedure request to mind.

I'm dealing with some MPE storage processes and need assistance. You would think that after storing files on tapes for 10-plus years, we would have found a better way to do this. We use TurboStore with four tape drives and need to find a way to validate the backup. Vstore appears to only have the ability to use one tape drive. Currently I have some empty files scattered through the system and use a separate job to delete them, remount the tapes and restore, trying to access all four drives.

When using vstore:

vstore [vstorefile] [;filesetlist]

It seems that vstorefile is looking for a file equation similar to:

File t; dev=tape
vstore *t;@.@.@; show

This is why it appears that I can't use more than one tape drive, unless they are in serial, while we want to use four drives in parallel. What method or software should I be using?

Mark Ranft of Pro3K replies:

We always found that DLT 8000 tapes worked well in parallel. When the backup got so big that it wouldn't fit on two DLT 8000 tapes, we split the backup, putting the databases on two tapes in parallel and everything else on a third tape. Keep in mind, we didn't have a backup strategy. We had a recovery strategy and backups were a part of that. We found, for us, organizing backups in this manner allowed us to speed recovery — which was far more important than anything else.

You can achieve good times doing Store-to-Disk backups. But then what? Do you back up the STD to tape and send it offsite? FTP it somewhere? The recovery times on getting this back are too slow.

Tracy Johnson adds:

I think you can use VSTORE to read multiple tape drives in parallel or series using the ;RESTORESET parameter.

So you make four file equations.

Drop the single file backreference at the beginning of the command (like we learned in olden times), and put the four new file equations in the ;RESTORESET= parameter instead. It is one of those things that fooled me the first time I saw it, and it took about 10 minutes to get used to seeing it.

The parentheses around the file equations are placed differently:


 ;RESTORESET = (*tape1,*tape2,*tape3,*tape4)



But if the tapes were not also created in parallel, this may not help.
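Putting that together, the four file equations and the VSTORE command would look roughly like this (the ldev numbers are illustrative):

```
:FILE TAPE1;DEV=7
:FILE TAPE2;DEV=8
:FILE TAPE3;DEV=9
:FILE TAPE4;DEV=10
:VSTORE ;@.@.@;RESTORESET=(*TAPE1,*TAPE2,*TAPE3,*TAPE4);SHOW
```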

Ray Legault adds:

I use three DLT8000's and run a Vstore every week.

! setvar _drive "(*p1),(*p2),(*p3)"
!vstore ;@.@.@;restoreset=!_drive;show;progress=5;nodecompress

Posted by Ron Seybold at 09:35 PM in Hidden Value, Homesteading | Permalink | Comments (0)

January 26, 2015

How to Use MPE/iX Byte Stream Files

Back when HP still had a lab for the HP 3000, its engineers helped the community. In those days, system architect and former community liaison Craig Fairchild explained how to use byte stream files on the 3000. Thanks to the memory of the Web, his advice remains long after the lab has gone dark.

These fundamental files are a lot like those used in Windows and Linux and Unix, Fairchild said. HP has engineered "emulation type managers" into MPE/iX, an addition that became important once the 3000 gained an understanding of Posix. In 1994, MPE/XL became MPE/iX when HP added this Unix-style namespace.

Understanding the 3000 at this level can be important to the customer who wants independent support companies to take on uptime responsibility and integration of systems. Fairchild explained the basics of this basic file type.

Byte stream files are the most basic of all file types. They are simply a collection of bytes of data without any structure placed on them by the file system. This is the standard file model that is used in every Unix, Linux and even Windows systems.
MPE's file system has always been a structured file system, which means that the file system maintains a certain organization to the data stored in a file. The MPE file system understands things like logical records, and depending on the file type, performs interesting actions on the data (for example, Circular files, Message files, KSAM files and so on).

Fairchild detailed how HP gave byte stream files the knowledge of "organization of data" for applications.

To bridge the gap between standard byte stream file behavior (only the application knows the organization of data) and traditional MPE file type behavior (the file system knows what data belongs to what records), emulation type managers were created. To an MPE application, a byte stream file looks and behaves like a variable record file, even though the data is stored in a way that would allow any Posix application to also read the same data. (Posix applications also have emulator type managers that allow them to read fixed, variable and spool files in addition to plain byte stream files.)

The way that the byte stream emulator detects record boundaries is through the use of the newline (\n) character, which is used, by convention, to separate data in ASCII text files on Unix-based systems.

The underlying properties of a byte stream file are that each byte is considered its own record. In MPE file system terms, a record is the smallest unit of IO that can be performed on a file. (You can write a partial fixed-length record, but the file system will pad it to a full record.) Since the smallest unit of IO that can be performed on a byte stream file is a single byte, that becomes its MPE record size.
In the MPE file system, the EOF tracks the number of records that are in a file. Since the record size of a byte stream file is one byte, the EOF of a byte stream file is also equal to the number of bytes in the file. This is why one 4-byte variable sized record is equal to 5 byte stream records (4 bytes of data + 1 \n character).
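That arithmetic is easy to check in any POSIX shell, MPE's included: four data bytes plus one newline makes a five-byte file, i.e. five one-byte records (the file name is illustrative):

```shell
# 4 bytes of data + 1 newline = 5 bytes, i.e. 5 one-byte records
printf 'abcd\n' > /tmp/bstream.txt
wc -c < /tmp/bstream.txt
```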

It's also worth noting that any file can be in any directory location and will behave the same way. (Well, almost. CM KSAM files are restricted to the MPE namespace. And of course the special files (that you don't normally see) that make up the file system root, accounts and groups are also restricted: one root, accounts as children of the root, groups as children of accounts. And lockwords aren't allowed outside the MPE namespace. But other than that, the opening sentence is true.) 

The general model that we had in architecting the whole Posix addition was that the behavior of a file does not change regardless of where it is located. This was summed up in the saying, "A file is a file." So there are no such things as "MPE files" and "Posix files". There's just files.

What does change is the way you name that file. Files in the MPE namespace can be named either through the MPE syntax (FILE.GROUP.ACCOUNT), or through the HFS syntax (/ACCOUNT/GROUP/FILE). You can also use symbolic links to create alternate names to the same file. This was summed up as a corollary to the first saying, "But a name is not a name."
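For example, these two commands name the same file, once in each syntax (MYFILE is a hypothetical file used for illustration):

```
:LISTFILE MYFILE.PUB.SYS,2
:LISTFILE /SYS/PUB/MYFILE,2
```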

Posted by Ron Seybold at 09:47 PM in Hidden Value, Homesteading | Permalink | Comments (0)

January 06, 2015

Essential Steps for Volume Reloads

When a 3000 drive goes dead, especially after a power outage, it often has to be reloaded. One example: when LDEV 2 has to be replaced. For a cheat sheet on reloading a volume, we turned to our Homesteading Editor Gilles Schipper.

By Gilles Schipper

Assuming your backup includes the ;directory option, as well as the SLT:

1. Boot from alternate path and choose INSTALL (assuming alternate path is your tape drive) 
2. After INSTALL completes, boot from primary path and perform START NORECOVERY. 
3. Use VOLUTIL to add ldev 2 to MPEXL_SYSTEM_VOLUME_SET. 
4. Restore directory from backup (:restore *t;;directory) 
5. openq lp
6. Perform a full restore with the following commands:
:file t;dev=7(?)
:restore *t;/;keep;show=offline;olddate;create;partdb;progress=5
7. Perform a second START NORECOVERY.


I would suggest setting permanent and transient space each equal to 100 percent on ldev 2. The 75 percent default on ldev 1 is fine as long as you don’t need the space. And if you did, your solution shouldn’t really be trying to squeeze the little extra you’d get by increasing the default maximum limits.

The reason for limiting ldev1 to 75 percent is to minimize the otherwise already heavy traffic on ldev 1, since the system directory must reside there, as well as many other high traffic “system” files.

You won't want to omit the ;CREATE and ;PARTDB options from the restore command. Doing so will certainly get the job done -- but perhaps not to your satisfaction. If any file that exists on your backup was created by a user that no longer exists, that file (or files) will NOT be restored.

Similarly, if you omit the ;PARTDB option, any file that comprises a TurboIMAGE database whose corresponding root file does not exist, will also not be restored.

I suppose it may be a matter of personal preference, but I would rather have all files that existed on my disks prior to disk crash also exist after the post disk-crash RELOAD. I could then easily choose to re-delete the users that created those files -- as well as the files themselves.

The ;SHOW=OFFLINE option is also used so that one can quickly see the users that were re-created as a result of the ;CREATE option. Purging the "orphan" datasets would be slightly more difficult, since they don't stand out so easily on the stdlist.

Finally, it’s critical that a second START NORECOVERY be performed. Otherwise, you cannot successfully start up your network.

Posted by Ron Seybold at 10:37 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 10, 2014

Getting Macro Help With COBOL II

An experienced 3000 developer and manager asked his cohorts about the COBOL II macro preprocessor. There's an alternative to this very-MPE feature: "COPY...REPLACING and REPLACE statements. Which would you choose and why?"

Scott Gates: COPY...REPLACING because I understand it better.  But the Macro preprocessor has its supporters. Personally, I prefer the older "cut and paste" method using a decent programmer's editor to replace the text I need. Makes things more readable.
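As an illustration of the COPY...REPLACING approach (the copybook name RECLAYT and the :PFX: placeholder convention are assumptions, not from the discussion): a copybook might hold a generic record layout,

```cobol
       01  :PFX:-RECORD.
           05  :PFX:-ID        PIC 9(6).
           05  :PFX:-NAME      PIC X(30).
```

and each program stamps its own prefix onto it at compile time:

```cobol
           COPY RECLAYT REPLACING ==:PFX:== BY ==CUST==.
```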

Donna Hofmeister: I'm not sure I'm qualified to comment on this any longer, but it seems to me that macros were very efficient (and as I recall) very flexible (depending on how they were written, of course). It also seems to me that the "power of macros" made porting challenging. So if your hidden agenda involves porting, then I think you'd want to do the copy thing.

There was even porting advice from a developer who no longer works with a 3000, post-migration.

Tony Summers: When we migrated in 2008 we chose Acucobol, partly because of its HP 3000 compatibility, including macro support. However, had we gone down a different route, I had already proved that I could pre-process the raw code myself and expand the macros before calling the compiler.

Robert Mills, who started the discussion on the 3000-L, said in reply to Donna, “I admit that I do have a hidden agenda, but the main reason does not involve porting.”

For many years I have used macros to make my life easier. When I left the e3000, back in 2008, and did some work on other platforms I found I missed them. I'm now in semi-retirement and have been using the free version of Micro Focus COBOL (a couple of years) and GnuCOBOL (this last year) to write software for friends, family and my own use.

A couple of times since 2008 I had thought of writing my own macro preprocessor to emulate the one on the e3000. A few months ago I decided to do it and release it as open source under the GNU GPL. The development of the preprocessor, using GnuCOBOL, is now completed and in final beta testing, and I'm writing the manual. I was hoping that I could gather some additional reasons, from others, as to why you would use macros instead of the COPY...REPLACING and REPLACE statements.

Because a port of GnuCOBOL is available on several platforms, and my preprocessor is written in GnuCOBOL, I see no problem in taking my macros with me nearly everywhere I go. If I end up doing work on a platform that does not support a feature it uses, it shouldn't be too difficult to develop a workaround.

As it turns out, GnuCOBOL is a newer version of OpenCOBOL — a compiler that Donna says bears a close resemblance to COBOL II. (OpenCOBOL has been ported into a commercial product, too, called IT-COBOL.) Adding that she obviously thinks macros are cool, she explained.

Do my mis-firing neurons recall that GnuCOBOL was formerly OpenCobol... which was actually very close to MPE’s COBOL?  (or something like that?)

I inherited an outstanding collection of macros at one job. Many of them were 'toolbox' functions. Want to center a string when the overall length of the string doesn't matter? Got a macro for that. Want to use a 'db' call? Got a macro for that. These went far beyond modifying code at compile time -- and that's what made them so valuable (at least to me).

Posted by Ron Seybold at 06:10 PM in Hidden Value, Homesteading, User Reports | Permalink | Comments (0)

December 08, 2014

IMAGE data schemas get visualized

Is there any program that will show the network of a TurboIMAGE database? I want to output the relationships among sets and items.

In 2011, Connie Sellitto researched the above question, while aiding new programmers who were charged with moving a pet organization's operations to a non-MPE system. Understanding the design of the database was important to this team. Sellitto mentioned a popular tool for PCs, but one not as essential as an IT pro's explanations.

You might try Microsoft's Visio, and you may need to have an ODBC connection to your IMAGE database as well. This produces a graphical view with search paths shown, and so on. However, there is still nothing like a detailed verbal description provided by someone who actually knows the interaction between datasets.

To sum up, we can refer to ScreenJet founder Alan Yeo's testing of that Visio-IMAGE interplay:

Taking a reasonably well-formed database into Visio and reverse engineering, you do get the tables and items. It will show you what the indexes in the tables are, but as far as I can see it doesn't show that a detail is linked to a particular master. Automasters are missing anyway, as they are really only for IMAGE.

My conclusion: if you have done all the work to load the databases in the SQL/DBE and done all the data type mappings, then importing in Visio might be a reasonable start to documenting the databases, as all you would have to do is add the linkages between the sets.

If you don't have everything in the SQL/DBE, then I would say we are back where we started.

ScreenJet knows quite a bit about moving 3000 engineering into new formats. It built the EZ View modernization kit for 3000 user screens that are still in VPlus. Yeo said the ubiquitous Visio might be overkill for explaining relationships.

If you have Adager, Flexibase, or DBGeneral -- or already have a good schema file for the databases -- just generate the schema files and import them into Word or Excel and give them to [your migrators]. If they can't put together the data structure from that, no amount of time you can spend with Visio is going to impart any more information.

Visio has free and open source competition, as HP support veteran Lars Appel pointed out, suggesting that free or open source tools such as dbVisualizer or SquirrelSQL may have similar "database graph" features.

Barry Lake of Allegro pointed out that users "may want to take a look at Allegro's DBHTML product, which creates a browser-viewable HTML file documenting the structure of an IMAGE database." Allegro has an example of DBHTML output on its website, although it doesn't draw pretty pictures.

At a more fundamental OS level, Michael Anderson points out that to understand the structure of a TurboIMAGE database, "you could use QUERY.PUB.SYS, then issue the command FO ALL, or FO SETS."

Posted by Ron Seybold at 10:31 PM in Hidden Value, Homesteading | Permalink | Comments (0)

November 25, 2014

Open source SED manages 3000 streams

Open source resources make it possible to use SED, a stream editor maintained in the open source community. SED has worked on the HP 3000 since 2001, thanks to Lars Appel, a former HP support engineer who ported Samba to the platform in the 1990s.

The MPE port of SED is available from a page of Appel's. SED is an at-your-own-risk download, but support is available through the 3000 community.

Dan Barnes, working on a problem he had to solve in his 3000 environment, asked:

The issue is incoming data from another platform that is being fed into MM 3000. This data occasionally has some unprintable characters, which of course wreaks havoc on the MM application when it is encountered. To address this, the user, working in a cygwin (Unix-like) environment on their Windows PC, developed a SED script. When they test the script in the cygwin environment it works just fine. But when run on the target HP 3000 it gets an undesirable result.

Barnes added that "The user thought that because MPE/iX is Posix-compliant, that this should work." He explained his user created the expression

sed -e 's/[\x7F-\xFE]/*/g' < COMSHD > COMSHD1

But Appel noted that the hex 7F through hex FE portion of the expression isn't supported by the MPE/iX version of SED. It's a limitation of that version, but there's a workaround.

Not sure if the regular expression usage here matches Posix or GNU specs, but my guess is the "\xNN" format, that seems to indicate a char by hex code, doesn't work.

How about something like using the command sed -e 's/[^ -~]/*/g' instead, i.e. map the characters outside the range space through tilde?
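Appel's character class is portable: [^ -~] matches every byte outside the printable range from space through tilde. A quick sketch showing it mapping a DEL byte (hex 7F, written here as octal \177) to an asterisk:

```shell
# Replace every byte outside the printable range (space through tilde)
printf 'ab\177cd\n' | sed -e 's/[^ -~]/*/g'
```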

Posted by Ron Seybold at 10:05 AM in Hidden Value, Homesteading | Permalink | Comments (0)

November 17, 2014

HP's 3000 power supply persists in failure

Amid a migration project, Michael Anderson was facing a failure. Not of his project, but a failure of his HP 3000 to start up on a bad morning. HP's original hardware is in line for replacement at customers still using the 3000 as a server. Some of these computers are more than 15 years old. But the HP grade of components and engineering is still exemplary.

"I was working with a HP 3000 Series 969, and one morning it was down," he reported. "All power was on, but the system was not running; I got no response from the console. So I power-cycled it, and the display panel (above the key switch) reported the following."

Proceeding to turn DC on

On the console it displayed garbage when the power was turned on, but the message on the display remained. I wasn’t sure what to replace. I was thinking the power supply — but all of the power was on. As it turned out, even in the middle of a power supply failure the 3000 was working to get out a message. The back side, the core I/O, FW SCSI, and so on, all appeared to have power. That is why I found it hard to believe that the power supply was the problem.

Anderson said that Charles Johnson of Surety Systems replaced the power supply for the system. Johnson explained that (back in the day) HP engineered some of the best power supplies in the world, with lots of checks and verifications. Even though this power supply had actually supplied DC power to the various components, it was not able to verify it.

So the message "Proceeding to turn on DC power" remained on the front panel display. Meanwhile, the boot process on the console would hang, and if you did a Ctrl-B RS, it would time out with the message:

"FATAL ERROR: System held in reset. POW_ON never came back (APERR 21)"

"Waiting until it's reasserted......"

Bill & Dave's Excellent Machine — even with a power supply failure, it still manages to get a message out (in plain English) attempting to explain the failure.

Posted by Ron Seybold at 08:41 PM in Hidden Value, Homesteading, User Reports | Permalink | Comments (0)

November 13, 2014

Thursday Throwback: IMAGE vs. Relational

As a precocious 18-year-old, Eugene Volokh wrote deep technical papers for HP 3000 users who were two or three times his age. While we pointed to the distinctions between IMAGE master and automatic datasets recently, Eugene's dad Vladimir reminded us about a Eugene paper. It was published in the fall of 1986, a time when debate was raging over the genuine value of relational databases.

While the relational database is as certain in our current firmament as the position of any planet, the concept was pushing aside proven technology 28 years ago. IMAGE, created by Fred White and Jon Bale at HP, was not relational. Or was it? Eugene offered the paper below to explore what all the relative fuss was about. Vladimir pointed us to the page on the fine Adager website where the paper lives in its original formatting.

The relationships between master and automatic and detail datasets pointed the way to how IMAGE would remain viable even during the onslaught of relational databases. Soon enough, even Structured Query Language would enter the toolbox of IMAGE. But even in the year this paper emerged, while the 3000 still didn't have a PA-RISC model or MPE/XL to drive it, there was a correlation between relational DBs and IMAGE. Relational databases rely on indexes, "which is what most relational systems use in the same way that IMAGE uses automatic masters," Eugene wrote in his paper presented at COBO Hall in Detroit. QUERY/3000 was a relational query language, he added, albeit one less easy to use.

Vladimir admits that very few IT professionals are building IMAGE/SQL databases anymore. "But they do look at them, and they should know what they're looking at," he explained.

Relational Databases Vs. IMAGE:
What The Fuss Is All About

By Eugene Volokh, VESOFT

What are "relational databases" anyway? Are they more powerful than IMAGE? Less powerful? Faster? Slower? Slogans abound, but facts are hard to come by. It seems like HP will finally have its own relational system out for Spectrum (or whatever they call it these days). I hope that this paper will clear up some of the confusion that surrounds relational databases, and will point out the substantive advantages and disadvantages that relational databases have over network systems like IMAGE.

What is a relational database? Let's think for a while about a database design problem.

We want to build a parts requisition system. We have many possible suppliers, and many different parts. Each supplier can sell us several kinds of parts, and each part can be bought from one of several suppliers.

Easy, right? We just have a supplier master, a parts master, and a supplier/parts cross-reference detail:

Relational-IMAGE Fig 1

Every supplier has a record in the SUPPLIERS master, every part has a record in the PARTS master, and each (supplier, part-supplied) pair has a record in the SUPPLIER-XREF dataset.

Now, why did we set things up this way? We could have, for instance, made the SUPPLIER-XREF dataset a master, with a key of SUPPLIERS#+PART#.  Or,  we  could have made all three datasets stand-alone details, with no masters at all. The point is that the proof of a database is in the using. The design we showed -- two masters and a detail -- allows us to very efficiently do the following things:

  • Look up supplier information by the unique supplier #.
  • Look up parts information by the unique part #.
  • For each part, look up all its suppliers (by using the cross-reference detail dataset).
  • For each supplier, look up all the parts it sells (by using the cross-reference detail dataset).

This is what IMAGE is good at -- allowing quick retrieval from a master using the master's unique key and allowing quick retrieval from a detail chain using one of the detail's search items. 
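In DBSCHEMA terms, that two-masters-and-a-detail design sketches out roughly like this (the database name, item types, and capacities are illustrative, not from the paper):

```
BEGIN DATA BASE PARTSUP;
PASSWORDS:
ITEMS:
    SUPPLIER#,      X8;
    SUPPLIER-NAME,  X30;
    PART#,          X8;
    DESCRIPTION,    X30;
SETS:
NAME:     SUPPLIERS, MANUAL;
ENTRY:    SUPPLIER# (1),
          SUPPLIER-NAME;
CAPACITY: 211;

NAME:     PARTS, MANUAL;
ENTRY:    PART# (1),
          DESCRIPTION;
CAPACITY: 307;

NAME:     SUPPLIER-XREF, DETAIL;
ENTRY:    SUPPLIER# (SUPPLIERS),
          PART# (PARTS);
CAPACITY: 1009;
END.
```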

However, let’s take a closer look at the parts dataset. It actually looks kind of like this:

PART# <-- unique key item

What if we want to find all the suppliers that can sell us a "framastat"? A "framastat", you see, is not a part number -- it's a part description. We want to be able to look up parts not only by their part number, but also by their descriptions. The functions supported by our design are:

  • Look up PART by PART#.
  • Look up PARTs by SUPPLIERS#.
  • Look up SUPPLIERs by PART#.

What we want is the ability to

  • Look up PART by DESCRIPTION.

The sad thing is that the PARTS dataset is a master, and a master dataset supports lookup by ONLY ONE FIELD (the key). We can't make DESCRIPTION the key item, since we want PART# to be the key item; we can't make DESCRIPTION a search item, since PARTS isn't a detail. By making PARTS a master, we got fast lookup by PART# (on the order of 1 or 2 I/Os to do the DBGET), but we forfeited any power to look things up quickly by any other item.

And so, dispirited and dejected, we get drunk and go to bed. And, deep in the night, a dream comes. "Make it a detail!" the voice shouts. "Make it a detail, and then you can have as many paths as you want to."

We awaken elated! This is it! Make PARTS a detail dataset, and then have two search items, PART# and DESCRIPTION. Each search item can have an automatic master dataset hanging off of it, to wit:

Relational-IMAGE Fig 2

What's more, if we ever, say, want to find all the parts of a certain color or shape, we can easily add a new search item to the PARTS dataset. Sure, it may be a bit slower (to get a part we need to first find it in PART#S and then follow the chain to PARTS, two IOs instead of one), and also the uniqueness of part numbers isn't enforced; still, the flexibility advantages are pretty nice.

So, now we can put any number of search items in PARTS. What about SUPPLIERS? What if we want to find a supplier by his name, or city, or any other field? Again, if we use master datasets, we're locked into having only one key item per dataset. Just like we restructured PARTS, we can restructure SUPPLIERS, and come up with:

Relational-IMAGE Fig 3

Note what we have done in our quest for flexibility. All the real data has been put into detail datasets; every data item which we're likely to retrieve on has an automatic master attached to it.

Believe it or not, this is a relational database.

If this is a relational database, I'm a Hottentot

Surely, you say, there is more to a relational database than just an IMAGE database without any master datasets. Isn't there? Of course, there is. But all the wonderful things you've been hearing about relational databases may have more to do with the features of a specific system that happens to be relational than with the virtues of relational as a whole.

Consider for a moment network databases. IMAGE is one example, in fact an example of a rather restricted kind of network database (having only two levels, master and detail). Let's look at some of the major features of IMAGE:

  • IMAGE supports unique-key MASTERS and non-unique-key DETAILS.
  • IMAGE does HASHING on master dataset records.
  • IMAGE has QUERY, an interactive query language.

Which of these features are actually network database features? In other words, which features would be present in any network database, and which are specific to the IMAGE implementation? Of the three listed above, only the first -- masters and details -- must actually be present in all databases that want to call themselves "network." On the other hand, a network database might very well use B-trees or ISAM as its access method instead of hashing; or, it might not provide an interactive query language. It would still be a network database -- it just wouldn't be IMAGE.

Why is all this relevant? Well, let's say that somebody said "Network databases are bad because they use hashing instead of B-trees." This statement is wrong because the network database model is silent on the question of B-trees vs. hashing. It is incorrect to generalize from the fact that IMAGE happens to use hashing to the theory that all network databases use hashing. If we get into the habit of making such generalizations, we are liable to get very inaccurate ideas about network databases in general or other network implementations in particular.

The same goes for relational databases. The reason that so many people are so keen on relational databases isn't because they have any particularly novel form of data representation (actually, it's much like a bunch of old-fashioned KSAM/ISAM-like files with the possibility of multiple keys); nor is it because of some fancy new access methods (hashing, B-trees, and ISAM are all that relational databases support). Rather, it's because the designers of many of the modern relational databases did a good job in providing people with lots of useful features (ones that might have been just as handy in network databases).

What are relational databases: functionality

The major reason for many of the differences between relational databases and network databases is simple: age. Remember the good old days when people hacked FORTRAN code, spending days or weeks on optimizing out an instruction or two, or saving 1000 bytes of memory (they had only 8K back then)? Well, those are the days in which many of today's network databases were first designed; maximum effort was placed on making slow hardware run as fast as possible and getting the most out of every byte of disk.

Relational databases, children of the late '70s and early '80s, had the benefit of perspective. Their designers saw that much desirable functionality and flexibility was missing in the older systems, and they were willing to include it in relational databases even if it meant some wasted storage and performance slow-down. The bad part of this is that, to some extent, modern relational databases are still hurting from slightly decreased performance; however, this seems to be at most a temporary problem, and the functionality and flexibility advantages are quite great.

For even more IMAGE education, like the advantages of IMAGE over relational databases, and a tour of the flexibility that automatic masters provide, see the remainder of the paper on the Adager website.

Posted by Ron Seybold at 01:48 PM in Hidden Value, History, Homesteading | Permalink | Comments (0)

November 07, 2014

Manual and Automatic Masters, Detailed

A few days ago we included a Hidden Value question about how manual and automatic masters work in TurboIMAGE. Our ally and friend Vladimir Volokh called to note that in part of the question, the system manager had found "one detail data set that has thousands of entries which do not appear to be connected to any master."

It wasn't exactly a question, but in a reply on the 3000-L mailing list and newsgroup, Roy Brown gave a fine tutorial on how these features do their jobs for MPE and the 3000 -- as well as how a detail dataset might have zero key fields.

Manual masters can contain data which you define, like Detail sets can, along with a single Key field. Automatic masters contain only the Key field.

In both cases, there can be only one record for a given key value in a Master dataset.

A Detail dataset contains data fields plus zero, one, or many key fields. There can be as many records as you like for a given key value, and these form a chain accessible from the Master record key value. This chain may be sorted, or it may just be in chronological order of adding the records.

Zero key fields in a Detail dataset would be unusual, but is permissible.

Brown explained that "Where there are keys, referential integrity demands that there are no Detail record entries with a key field that is not found in either a Manual or Automatic master, both Key name and Key value. So a Detail data set with Key fields that are not present in a Master record would be a sign of a seriously corrupted database."

However, I doubt this is the case. When you do a QUERY FORM command, you will see which fields in Detail datasets are Keys, which fields are used to establish Sort orders, and which fields are data pure and simple.

From the Key name, you can determine which Master set links the keys.

As I said above, it is possible to have a Detail dataset with no keys, but these usually contain only a very few records, since direct access to them without keys is cumbersome, and you would otherwise have to trawl right through one to find any given entry.
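The trade-off described above, chained access from a master key versus trawling serially through a keyless detail set, can be sketched as a toy Python model (the data is invented; this is the concept, not IMAGE internals):

```python
# Toy model: chained access via a master key vs. a serial scan of a
# keyless detail set (hypothetical data, not IMAGE internals).

orders = [{"cust": f"C{n % 50:03d}", "amount": n} for n in range(1000)]

# With a key: the automatic master holds one chain per key value.
chains = {}
for recno, rec in enumerate(orders):
    chains.setdefault(rec["cust"], []).append(recno)

def by_chain(cust):
    # Touches only the records on this key's chain.
    return [orders[r] for r in chains.get(cust, [])]

def by_serial_scan(cust):
    # Trawls right through every record in the set.
    return [rec for rec in orders if rec["cust"] == cust]

assert by_chain("C007") == by_serial_scan("C007")  # same answer, very different cost
```

Both routes return the same entries; without a key, every lookup pays the full serial-scan cost, which is why keyless detail sets usually stay small.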

So a Detail dataset with thousands of unconnected entries would be very unlikely.

The FORM output will allow you to check how the Detail dataset that you think might have unconnected entries is actually linked in.

Posted by Ron Seybold at 09:55 PM in Hidden Value, Homesteading | Permalink | Comments (0)

November 04, 2014

Tips for Listing SCHEMAs, and FTP Listings

From my existing TurboIMAGE database, I want to generate a listing of data sets, data item names, and their relationships (master, detail). One detail data set has thousands of entries which do not appear to be connected to any master. 

Oh, and I cannot remember the difference between manual and automatic masters.

Francois Desrochers first replies, "Use Query's FORM command." Once you open the database, QUERY prompts for its password and open mode (mode 5 opens the base for read access):

PASSWORD = >> password
MODE = >> 5

Manual masters: programs have to explicitly add entries before you can add related entries in detail sets. Programs have to explicitly delete entries when there are no related detail entries left. In other words, you have to do master dataset maintenance.

Automatic masters: entries are automatically created when a related detail set entry is created. Entries in the master are automatically removed when the last related detail entry is deleted. IMAGE takes care of the maintenance.
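The difference can be mocked up in a few lines of Python; this is a toy model (invented names, not IMAGE internals) of the bookkeeping an automatic master does for you:

```python
class AutoMasterDB:
    """Toy model of an automatic master: IMAGE-style maintenance for free."""
    def __init__(self):
        self.master = {}    # key value -> count of related detail entries

    def add_detail(self, key):
        # The master entry appears automatically with the first detail entry.
        self.master[key] = self.master.get(key, 0) + 1

    def delete_detail(self, key):
        self.master[key] -= 1
        if self.master[key] == 0:       # last related detail entry gone:
            del self.master[key]        # the master entry is removed too

db = AutoMasterDB()
db.add_detail("P-100")
db.add_detail("P-100")
db.delete_detail("P-100")
assert "P-100" in db.master      # one detail entry still references the key
db.delete_detail("P-100")
assert "P-100" not in db.master  # automatic cleanup
```

With a manual master, both the creation and that final deletion would be the program's job, done explicitly.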

Consultant Ron Horner adds, "If you have a tool like Adager or DBGeneral, it can create a file of the database schema. The other way is by using QUERY to get a listing."

Horner also said, "The difference between a manual master and auto master is the following:

1. You have to add records to the manual master that contain the key data for any detail datasets that are linked to the master.

2. When working with automatic masters, you don't have to write data to them at all. IMAGE takes care of populating the master.

Ray Legault suggested using Allegro's free XSCHEMA, "A stripped down version of our commercial product DBHTML, this utility simply reads an Image root file and produces a DBSCHEMA-compatible text file as output."

Krikor Gullekian also noted that "With QUERY you can check the databases as long as you know the password. [Ed. note: Password advice is true, except when you're the database owner. No password is required then, just a semicolon.] FO SETS will give you a lot of details."

How can I tell the HP 3000's FTP server to use a standard 'ls -l', and not 'LISTFILE ,2' ?

Allegro's Donna Hofmeister said, "The trick is to turn 'Posix' on."

Turning Posix On

Keven Miller of 3K Ranger added, "Check out FTPDOC.ARPA.SYS, mainly the POSIX section. It mentions how to set the default to ON."

Posted by Ron Seybold at 04:19 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 28, 2014

Strategies for Redirecting App Spoolfiles

An HP 3000 manager wrote that a 24x7 application at his shop is stable and never goes offline unless it's required. Even so, everyday management has had to include aborting the app once a week.

We take that application offline to close out the spoolfile that the application generates. Is there a way to keep the application running, and just redirect the output to a new spoolfile? We're using an N-Class server.

Robert Schlosser of Global Business Systems replied: Short of closing and reopening the application after n number of pages, you could have the application read (without wait and checking status codes) a message file. It could then close and open the output file on demand, and possibly even close down the application gracefully (no abort).

Our Homesteading Editor Gilles Schipper replied: I think the only way you could do that would be to actually modify the application program to periodically (say, for example, every 10 pages or every 100 pages) close then re-open the print file.

Olav Kappert of IOMIT International added:

If the program can be slightly modified, then I would suggest creating a message file as a conduit to the application. The program would do a read of the message file with the nowait option every once in a while. If the application encounters a keyword indicating a new spoolfile, then the program would close the spoolfile and reopen it.

An alternate method would involve the application being modified to close and open the file at a particular day and time during the week.
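As a sketch of the message-file technique, here is a Python mock-up: a queue stands in for the MPE message file, a plain disk file for the spoolfile, and the 'NEWSPOOL' keyword is invented for illustration.

```python
import queue

def writer_loop(control, lines, basename="spool"):
    """Toy model of the app's print loop: a non-blocking read of the
    control channel decides when to close and reopen the output file."""
    generation = 1
    out = open(f"{basename}1.txt", "w")
    made = [out.name]
    for line in lines:
        try:
            msg = control.get_nowait()       # the read-with-nowait step
        except queue.Empty:
            msg = None
        if msg == "NEWSPOOL":                # keyword: roll to a new spoolfile
            out.close()
            generation += 1
            out = open(f"{basename}{generation}.txt", "w")
            made.append(out.name)
        out.write(line + "\n")
    out.close()
    return made

ctl = queue.Queue()
ctl.put("NEWSPOOL")                          # operator requests a fresh file
files = writer_loop(ctl, ["page 1", "page 2"])
print(files)
```

The application never stops; an operator (or a job) drops the keyword into the message file, and the next write lands in a fresh file that can be closed out and printed.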

Posted by Ron Seybold at 05:34 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 23, 2014

Pre-Migration Cleanup Techniques

Migrations are inevitable. The Yolo County Office of Education is on its way to a Windows-based system, after many years of HP 3000 reliance. Ernie Newton of the Information and Technology Services arm of the organization is moving his 3000 data. He's doing a clean-up, a great practice even if you're not heading off of MPE.

I am cleaning up our IMAGE databases for the inevitable move to Microsoft’s SQL Server. One thing I've encountered is that Suprtool does not like null characters where there should be numbers.

I know that I have invalid (non-numeric) characters in a field called ITEM-NUMBER. But when I try to find those records, Suprtool chokes and abruptly stops the search. Here's what I get...


Error:  Illegal ascii digit encountered. Please check all data sources
Input record number: 1

Is there a way to run Suprtool to help it find these records? Query finds them just fine, but Query doesn't have the ability to do what I want to do.

After Olav Kappert reminded him that "nulls are not numbers," and suggested he "try to use a byte string to compare (like < "a" or > "z") or something like that," Robelle's Neil Armstrong weighed in.

You can find any character you want by using the Clean command and its $findclean and $clean functions. The first issue to deal with is to re-define the item-number as a byte type in order to use the function.

Armstrong explained, "The following shows how to find the fields with 'invalid' zoned-decimal characters. Remember that zoned fields use characters to indicate the sign. It's likely that you don't have negative item numbers, but they are valid."

get somedataset
def item-x,item-number,byte
{ Valid zoned-decimal characters are 0-9, JKLMNOPQRST, ABCDEFGHI, and the curly braces }
{ so the Clean characters are everything outside those ranges }
clean "^0:^47","^58:^64","^85:^122","^126:^255"
if $findclean(item-x)
ext item-x
list st

"Note the clean command defines the 'decimal' characters to look for."
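The same screening can be prototyped off-host. This Python sketch (with invented sample data) applies the exact byte ranges from the Clean command above:

```python
# Mirrors the Clean ranges above: anything in these spans counts as an
# invalid character for a zoned-decimal field (sample data is invented).
CLEAN = [(0, 47), (58, 64), (85, 122), (126, 255)]

def find_clean(value: str):
    """Return positions of 'dirty' bytes, in the spirit of $findclean."""
    return [i for i, ch in enumerate(value)
            if any(lo <= ord(ch) <= hi for lo, hi in CLEAN)]

records = ["0012345J", "00\x0045", "12-345"]   # NUL and '-' are not zoned digits
for item in records:
    bad = find_clean(item)
    if bad:
        print(repr(item), "dirty at positions", bad)
```

Note that "0012345J" passes: J is a valid sign-overpunch character, just as Armstrong warned.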

Posted by Ron Seybold at 05:50 PM in Hidden Value, Migration | Permalink | Comments (0)

September 22, 2014

Ways to Create PDFs from 3000 Output

Years ago -- okay, seven -- we reported the abilities of the Sanface Software solution to create PDF files out of HP 3000 output. But there are other ways and tools to do this, a task that's essential to sharing data reports between HP 3000s and the rest of the world's computers.

On the HP 3000 newsgroup, a veteran 3000 developer has asked,

Has anyone got any experience involving taking a file in an output queue and creating a PDF version of it?

"We use text2pdf v1.1 and have not had any problems since we installed it in October 2001," said Robert Mills of Pinnacle Entertainment. "I have e-mailed a copy of this utility and our command file to 27 people. Never knew that so many sites wanted to generate PDFs from their 3000s."

The program is a good example of 3000 source code solutions. This one was created as far back as the days of MPE/iX 6.0, a system release which HP has not supported since 2005.

Lars Appel, the former HP support engineer who built such things on his own time while working at HP Support in Germany -- and now works with Marxmeier on its Eloquence product -- has source code and a compiled copy of the utility.

Such solutions, and many more, are hosted on the Web server at 3K Associates. Check the Applications Ported to MPE/iX section of the Public Domain Software area at 3K's Web site.

You'll also find a link to GhostPCL up at the site, another Appel port, one which he describes as

A program that reads PCL input files and converts them to a variety of output formats, including PDF or JPEG, for example. Combined with my little FakeLP Java program, you might even use it to capture MPE/iX network spooler output and generate PDF or JPEG from an MPE/iX spoolfile.

Open source solutions like these have been an HP 3000 community tradition. Way back in 2000, we reported in the print 3000 NewsWire about that FakeLP Java program, helpful in getting text2pdf to do its PDF magic.

A roadblock to using the text2pdf program: the spoolfiles had to be in text file format to work with it. But Lars Appel offered a free solution to make 3000 spoolfiles that don't rely on CCTLs ready for their PDF closeups:

"I have a small Java program that listens to a given port, for example 9100, and 'pretends to be a network printer' i.e. gets all the data sent and writes it to a flat file. This might be a start, as OUTSPTJ.PUB.SYS should have converted CCTL to plain PCL when sending to a JetDirect printer. However, this little Java program is just a quick and dirty experiment. Use at your own risk; it worked on my 3000, but your mileage may vary."

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ cut here _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

import java.io.*;
import java.net.*;

// FakeLP pretends to be a network printer, to capture spooler PCL output

class FakeLP {

    public static void main( String args[] ) throws Exception {

        int port = 9100;   // JetDirect-style listen port
        int next = 1;      // sequence number for capture file names

        if (args.length > 0) port = Integer.parseInt(args[0]);
        if (args.length > 1) next = Integer.parseInt(args[1]);

        ServerSocket serv = new ServerSocket( port );

        while (true) {

            System.out.println("FakeLP listener ready");

            Socket sock = serv.accept();
            byte[] buf = new byte[4096];
            String name = "F" + (next++);

            System.out.println("Capturing spoolfile to " + name);

            InputStream si = sock.getInputStream();
            OutputStream fo = new FileOutputStream(name);

            // copy the incoming print data until the client disconnects
            for (;;) {
                int got = si.read(buf);
                if (got == -1) break;
                fo.write(buf, 0, got);
            }

            fo.close();
            sock.close();
        }
    }
}

Posted by Ron Seybold at 06:43 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 19, 2014

Passing FTP Capabilities to MPE

HP 3000s do lots of duty with data from outside the server. The 3000's FTP services sit ready to handle transfers from the world of Windows, as well as other systems, and PCs far outnumber the non-Windows computers networked to 3000s. Several good, low-cost FTP clients on Windows communicate with the 3000, even though MPE/iX still has some unique "features" in its FTP server.

Our former columnist John Burke once reported that his HP 3000 emitted a second line of text during an FTP session that could confuse the open source FTP client FileZilla:

FileZilla issues the PWD command to get the working directory information. On every other system I've tried, the result is something like:

257 "home/openmpe" is the current working directory

However, MPE responds with something like:

257-"/SYSADMIN/PUB" is the current directory.
257 "MGR.SYSADMIN,PUB" is the current session.

The second line appears to be confusing FileZilla, because it reports the current directory as /MGR.SYSADMIN,PUB/, which of course does not work.

Back when it was freeware, Craig Lalley took note of a worthy solution, WS-FTP from Ipswitch. The product is now for sale, but its client is not costly. And an MPE setting can remove the problems that can choke up FileZilla.

Lalley, who runs the 3000 consultancy Echo Tech, once offered this advice about WS-FTP. "I have used it for several years, without any problems. I also have used Bullet FTP and CuteFTP." About the built-in FTP in browsers, as far back as Internet Explorer, he added, "Don't go there."

Chris Thompson of The Internet Agency, another 3000-friendly vendor, echoed the praise of WS-FTP. Thompson also sells MPE software, the MPE/iX Enterprise Client. Alas, he noted that the much-praised Whisper Technology, now defunct, also had a laudable FTP product:

WS-FTP is a really good product. Also, try FTP Surfer, which is freeware from Whisper Technology Limited. Usually we use this product to FTP to our 937. It's always worked well.

But as might be expected, there's a way to make HP's FTP behave in a less unique, more compliant way. Lars Appel, who ported Samba to the HP 3000 before he left HP's support team, delivered the answer that makes FileZilla work with the 3000:

Try the "SITE POSIX ON" command in your FTP session (or the respective POSIX=ON setting in the SETPARMS.ARPA.SYS config file to change the default, in case the FileZilla session cannot issue "site" commands).

Burke once reported that "POSIX = ON in the SETPARMS file did the trick, eliminating the message that confused FileZilla. I've been using FileZilla for all my ad hoc FTP needs for some time now — works great to all manner of Unix, Windows and Linux systems."

HP's James Hofmeister, who's led the effort to keep FTP up to date on the 3000, took issue with claims that the 3000 doesn't play well with Web-based FTP clients.

Lots of work went into an implementation of the FTPSRVR to support web access to the 3000... The "SITE POSIX ON" command can be sent by an FTP client, and the 3000 FTPSRVR will emit Posix "standard" FTP output and will react like a Posix host (including file naming conventions).

It also is possible, as documented, to specify "POSIX=ON" mode in the file and achieve this functionality system-wide for all non-3000 client connections to the 3000 FTPSRVR; again, the FTPSRVR will emit Posix "standard" FTP output and will react like a Posix host (including file naming conventions).

Warning: Before you specify "POSIX=ON" mode in the file, make sure you read the FTPDOC file closely, as you are warned that MPE file syntax will "no longer" work; the 3000 FTPSRVR is acting in Posix mode.

Posted by Ron Seybold at 04:05 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 16, 2014

Advice on uptime, net gateways and sockets

Is there a way to use the 3000's networking to check how long your system has been up?

James Hofmeister replies:

If you have SNMP running, a query to check system uptime is:

: snmpget public system.sysUpTime.0
Name: system.sysUpTime.0
Timeticks: (418638300) 48 days, 10:53:03

I get no awards for 48 days uptime, but I use my machines to duplicate, beta test and verify repair of customer network problems.

Is there a way to scan all the ports on my HP 3000 Series 996: How many are being used, and how many are available?

Mark Bixby replies:

SOCKINFO.NET.SYS can tell you which programs have opened which sockets.

NETTOOL.NET.SYS STATUS,TCPSTAT and STATUS,UDPSTAT can also give you useful information about sockets, particularly STATUS,TCPSTAT and CONNTABLE.

Or you can run an external tool and do a port scan against your 3000. This is not recommended during production hours, since such port scans can sometimes confuse network applications.
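As a sketch of that external approach, a minimal TCP connect scan needs nothing but Python's standard socket module (heed the same warning about production hours):

```python
import socket

def scan(host, ports, timeout=0.25):
    """Try a TCP connect to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect worked
                open_ports.append(port)
    return open_ports

# Example: probe a few well-known ports on this machine.
print(scan("127.0.0.1", [21, 23, 80]))
```

Pointed at the 3000's address, a scan like this shows which services are listening; SOCKINFO and NETTOOL then tell you which programs own them.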

When I try to configure a gateway on our MPE/iX 7.5 system, I get the following error while validating my new NMMGR gateway configuration.

Searching for subsystem validation routine VALIDATENETXPOR

There are no other gateways configured, so the CONFIGURED GATEWAYS (1) value looks okay to me — so how can I increase the IPU MAX GATES value?

James Hofmeister replies:

In MPE/iX 5.5 and 6.0 (unpatched) the limit was 14 gateways. This was increased to 255 gateways with patches, and was included in base 6.5 and 7.x.

The fact that validate says “IPU MAX GATES (0)” would indicate to me that you have corruption of your configuration file in “at least” the field that holds this value.

I would suggest you first keep a copy of this config file, then purge NMCONFIG.PUB.SYS and rebuild your configuration with guided config.

Note:  You could do a copy subtree of the NETXPORT.PROT.IPU field from NMAUX1.PUB.SYS to NMCONFIG.PUB.SYS to update this field — but at this point I would expect problems in this config file with more than just this one field.

Posted by Ron Seybold at 06:27 PM in Hidden Value | Permalink | Comments (0)

August 26, 2014

See how perl's strings still swing for MPE

The HP 3000 has a healthy range of open source tools in its ecosystem. One of the best ways to begin looking at open source software opportunity is to visit the MPE Open Source website operated by Applied Technologies. If you're keeping a 3000 in vital service during the post-HP era, you might find perl a useful tool for interfacing with data via web access.

The 3000 community has chronicled and documented the use of this programming language, with the advice coming from some of the best pedigreed sources. Allegro Consultants has a tar-ball of the compiler, available as a 38MB download from Allegro's website. (You'll find many other useful papers and tools at that Allegro Papers and Books webpage, too.)

Bob Green of Robelle wrote a great primer on the use of perl in the MPE/iX environment. We were fortunate to be the first to publish Bob's paper, which ran in the 3000 NewsWire when the long-running Robelle Tech column was a hit on our printed pages.

You could grab a little love for your 3000, too. Cast a string of perls starting with the downloads and advice. One of HP's best and brightest -- well, a former HP wizard -- has a detailed slide set on perl, too.

The official website has great instructions on Perl for MPE/iX installation and an update on the last revision to the language for the 3000. First ported by Ken Hirsch in 2000, the language was brought to the 5.9.3 release in 2006.

An extensive PowerPoint presentation on perl by the legendary porter Mark Bixby will deliver detailed insights on how to introduce perl to your programming mix. Bixby, who left HP to work for the 3000 software vendor QSS, brings the spirit of open source advocacy to his advice on how to use this foundational web tool.

As an example, Bixby notes that "it's now possible to write MPE applications that look like web browsers, to perform simple HTTP GET requests, or even complicated HTTP POST requests to fill out remote web forms." It's no box of Godiva, or even the classic blue box from Tiffany's, but perl might be something you love to use, to show that 3000 isn't a tired old minicomputer -- just a great sweetheart of a partner in your mission-critical work.

Posted by Ron Seybold at 10:35 PM in Hidden Value, Homesteading | Permalink | Comments (0)

August 15, 2014

The 3000's got network printing, so use it

Ten years ago this summer, HP's 3000 lab engineers were told that 3000 users wanted networked printing. By 2005 it was ready for beta testing. This was one of the last enhancements demanded as Number 1 by a wide swath of the 3000 community, and then delivered by HP. The venerable Systems Improvement Ballot of 2004 ranked networked printing No. 1 among users' needs.

MPEMXU1A is the patch that enables networked printing, pushed into General Release in Fall, 2005. In releasing this patch's functionality, HP gave the community a rather generic, OS-level substitute for much better third party software from RAC Consulting (ESPUL). It might have been the last time that an independent software tool got nudged by HP development.

The HP 3000 has the ability to send jobs to non-HP printers over a standard network as a result of the enhancement. The RAC third party package ties printers to the 3000 with fewer blind spots than the MPEMXU1A patch. HP's offering won't let Windows-hosted printers participate in the 3000 network printing enhancement. There's a Windows-only, server-based net printing driver by now, of course, downloadable from the Web. The HP Universal Print Driver Series for Windows embraces Windows Server 2012, 2008, and 2003.

Networked printing for MPE/iX had the last classic lifespan that we can recall for a 3000 enhancement. The engineering was ready to test less than a year after the request. This software moved out of beta test by November, a relatively brief five-month jaunt to general release. If you're homesteading on 3000s, and you don't need PCL sequences at the beginning and end of a spool file, you should use it. Commemorate the era when the system's creator was at least building best-effort improvements.

MPE/iX 6.5 was still being patched when networked printing rolled out. That's a release still in some use at homesteading shops. In contrast, plenty of later patches were only created and tested for the 7.0 and 7.5 PowerPatch kits.

Deep inside the Web is a white paper that former HP staffer Jeff Vance wrote, a guide he called "Communicator-like" after the classic HP technical documents. HP's taken down its Jazz repository of tech papers where NWPrinting.html once was available. But our open source software expert Brian Edminster tracked down that gem at the Client Systems website -- the company which was one of two to license HP's tech papers. You could check in with your independent support provider, to see if they've got the paper.

Networked printing was never as comprehensive as the indie solutions for the 3000, but at least it was delivered on the OS level via patches. The vendor still warned that adding new printers was going to be an uneven process.

HP will support this enhancement on a "best-effort" basis, meaning we will attempt to duplicate and resolve specific spooler problems -- but we cannot guarantee that all ASCII based printers are supported by this enhancement.

Of course, HP's support is long gone. But while best-effort might sound like a show-stopper so many years later, you'd be surprised how many printers of that 6.5 era are still attached at homesteading 3000 sites.

Where do you get the patch? That's where HP's still doing its work. These MPE/iX patches were given special dispensation from the pay-for-patches edict of 2010. They're still free by calling HP. That non-Windows printer and MPE might seem like old technology. But HP's still using telephones to enable the delivery of patches, so there's that Throwback -- and one you can find on days which are not Thursdays, too.

Posted by Ron Seybold at 03:12 PM in Hidden Value, Homesteading, Web Resources | Permalink | Comments (0)

August 08, 2014

Classic Advice: Adding a DLT to an HP 3000

I'm trying to add a DLT to my HP 3000 939KS and it keeps reporting media as bad. I can FCOPY but not run an Orbit or MPE store. It does mount the tape normally. The MPE store gives the following error:


The server which this drive is being added to has DDS-3s on it, but we are adding another disk array, so we are going to outgrow what we have very quickly.

DLT8000s have not been manufactured in perhaps 10 years. Even five-year-old drives are SDLTII or DLT VS160, or some form of LTO. Also, using HVD-SCSI is so last century. At any rate, the heads on the DLT drives do get used up depending on the media used. Try another DLT drive, if possible.

Unfortunately, this is the exact issue facing homesteaders and others who are delaying their migration off the HP 3000, especially if they have pre-PCI machines like the 939. The hardware to run with it can be difficult to find, but it's out there, although it can be at varying levels of readiness. You have many options open to you, but as time goes by they will be more difficult to implement.

1. Look for another DLT8000 or a DLT7000. Either of these tape drives will work, and you will not get any performance benefit from either one over the DDS-3, but you will get more storage on one tape. You might make sure it has HP-branded firmware; there have been a painful set of System Aborts, due to semi-random walks through driver state machines — initiated by non-certified firmware.

2. Instead of a DLT, consider getting more DDS-3 drives. One medium size N-Class can have 12 DAT24 drives -- they do either a 4x3 or 3x4 parallel storeset. No messing with "reel" switches.

3. Consider getting an HVD to SE/LVD SCSI converter and then trying a DDS-4 device. DAT40 with DAT24 media has worked well for some sites, but DAT40 with DAT40 media is only supported on A/N-Class. To get technical on you here, you may only configure the DLT (scsi_tape2_dm) driver "under" the NIO F/W SCSI HBA (fwscsi_dam).

4. Move to a PCI HP 3000 (the A-Class or a small N-Class), then use the newer LVD devices. PCI systems will at least enable the usage of much newer used equipment — and even some new stuff, if you want to buy an XP10000/12000.

Posted by Ron Seybold at 03:31 PM in Hidden Value, Homesteading | Permalink | Comments (0)

July 30, 2014

Find :HELP for what you don't know exists

Last week we presented a reprise of advice about using the VSTORE command while making backups. It's good practice; you can read about the details of why and a little bit of how-to in articles here, and also here.

But since VSTORE is an MPE command, our article elicited a friendly call from Vesoft's Vladimir Volokh. He was able to make me see that a great deal of what drives MPE/iX and MPE's powers can remain hidden -- the attribute we ascribed to VSTORE. "Hidden, to some managers running HP 3000s, is the VSTORE command of MPE/iX to employ in system backup verification." We even have a category here on the blog called Hidden Value. It's been one of our features since our first issue, almost 19 years ago.

Finding help for commands is a straightforward search, if those commands are related to the commands you know. But how deep are the relationships that are charted by the MPE help system? To put it another way, it's not easy to go looking for something that you don't know is there. Take VSTORE, for example. HP's HELP files include a VSTORE command entry. But you'll only find that command if you know it's there in the operating environment. The "related commands" part of the entry of STORE, identifying the existence of VSTORE, is at the very bottom of the file.

Vladimir said, "Yes, at the bottom. And nobody reads to the bottom." He's also of the belief that fewer people than ever are reading anything today. I agree, but I'd add we're failing in our habits to read in the long form, all the way beyond a few paragraphs. The Millennial Generation even has an acronym for this poor habit: TLDR, for Too Long, Didn't Read. It's a byproduct of life in the Web era.

But finding help on VSTORE is also a matter of a search across the Web, where you'll find archived manuals for MPE/iX 5.0, the release where the command was last documented. That's where the Web connects us better than ever. What's more, the power of the Internet now gives us the means to ask Vladimir about MPE's commands and the MPEX improvements. Vladimir reads and uses email from his personal email address. It's not a new outlet, but it's a place to ask for help that you don't know exists. That's because, like his product MPEX, Vladimir's help can be conceptual.

Hold down the right-most or left-most mouse button and you'll see contextual help in plenty of applications. MPE commands don't have this feature, and while they don't seem to need it, conceptual help is missing, too. There's :HELP for many subjects, but conceptual help involves skipping over those TLDR habits.

Our original article about VSTORE used the command in context with a primer on when to create a System Load Tape. Do a VSTORE when you make an SLT, said Vladimir as well as our ally Brian Edminster. Creating context is high-order programming, something we can do more easily with our wetware than with software. It's about seeing relationships, connecting the dots.

"You can't ask for help for something you don't know exists," is how Vladimir posed the problem of contextual help in the MPE interface. Go to the %HELP of MPEX and you'll get related commands right away. For example, typing %HELP STORE allows you to choose from the following topics:

1. %MPEXSTORE, MPEX command
2. MPE's :RESTORE help text
3. MPE's :STORE help text
4. MPE's :VSTORE help text
5. STORED, a file attribute variable

In comparison, you might not be aware of VSTORE's relationship to backups by using HP's :HELP files.

How did we learn about those %HELP options? The Internet led us to a 19-year-old technical paper written by Paul Taffel while he was in the Vesoft stables. The paper, hosted at Gainsborough Software, details the improvements to MPEX as a result of integrating the (then-new) Posix interface of MPE. Two-thirds of the way through an article of 2,800 lines, there's that %HELP information. (There's even a little joke about typing %HELP SENTENCE, and another about %HELP DELI in MPEX.)

It's all out there, somewhere -- these opportunities to learn what you don't even know exists, but need to know. And why would you want to learn about efficient and effective use of MPE? Well, because an HP 3000 might be a key part of your datacenter longer than expected -- and your best expert has already typed his final :BYE. In that 19-year-old article, Taffel expressed Vesoft's ideal about questions from the community.

We at VESOFT really encourage you to contact us with your favorite "I'd like to do this but I can't" problem.  MPEX has evolved largely as a result of the continued suggestions of our many thousands of users, and we hope to continue this process as long as you continue to come up with new problems.

After that message, there's a contact phone number for Vesoft, the one that still reaches the company's offices, unchanged after decades. But these days there's also email for contextual help: drop a note into Vladimir's inbox. Your reply might include a call, or a sample of MPE help so well hidden you didn't know you needed it.

Posted by Ron Seybold at 06:58 PM in Hidden Value, History, Homesteading | Permalink | Comments (1)

July 24, 2014

Using VSTORE to Verify 3000 Backups

Hidden, to some managers running HP 3000s, is the VSTORE command of MPE/iX, used for system backup verification. It's good standard practice to include VSTORE in every backup job's command process. If your MPE references come from Google searches instead of reading your NewsWire, you might find it a bit harder to locate HP's documentation for VSTORE. You won't find what you'd expect inside an MPE/iX 7.5 manual. HP introduced VSTORE in MPE/iX 5.0, so that edition of the manual is where its details reside.

For your illumination, here are some tips from Brian Edminster, HP 3000 and MPE consultant at Applied Technologies and the curator of the MPE Open Source repository:

If possible, do your VSTOREs on a different (but compatible model) of tape drive than the one the tape was created on. Why? DDS tape drives (especially DDS-2 and DDS-3 models) slowly go out of alignment as they wear.

In other words, it's possible to write a backup tape, and have it successfully VSTORE on the same drive. But if you have to take that same tape to a different server with a new and in-alignment drive, you could have it not be readable! Trust me on this -- I've had it happen.

If you'll only ever need to read tapes on the same drive as you wrote them, you're still not safe. What happens if you write a tape on a worn drive, have the drive fail at some later date -- and that replacement drive cannot read old backup tapes? Yikes!

Using the 'two-drive' method to validate backup (and even SLT) tapes is a very prudent choice, if you have access to that array of hardware. It can also often help identify a drive that's going out of alignment -- before it's too late! 

Unfortunately, SLTs have to be written to tape (at least, for non-emulated HP 3000s). However, your drive will last years longer if you only write to it a few times a year.
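As a sketch of that practice -- VSTORE in the backup job itself -- a minimal jobstream might look like the following. The job wrapper, device class, and file set here are assumptions for illustration, not a site's actual stream:

```
!JOB BACKUP,MANAGER.SYS
!FILE T;DEV=TAPE
!STORE @.@.@;*T;SHOW
!COMMENT Remount the tape (ideally on a second, compatible drive),
!COMMENT then verify what was just written:
!VSTORE *T;@.@.@;SHOW
!EOJ
```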

You can find HP's VSTORE documentation page from that 5.0 command manual on the Web (thanks to MM Support for keeping all those pages online).

Posted by Ron Seybold at 07:30 PM in Hidden Value, Homesteading | Permalink | Comments (0)

July 21, 2014

Maximum Disc Replacement for Series 9x7s

Software vendors, as well as in-house developers, keep Series 9x7 servers available for startup to test software revisions. There are not very many revisions to MPE software anymore, but we continue to see some of these oldest PA-RISC servers churning along in work environments.

9x7s, you may ask -- weren't they retired long ago? Less than one year ago, one reseller was offering a trio priced between $1,800 (a Series 947) and $3,200. Five years ago this week, tech experts were examining how to modernize the drives in these venerable beasts. One developer figured in 2009 they'd need their 9x7s for at least five more years. For the record, the 9x7s date from the early 1990s, so figure that some of them are beyond 20 years old now.

"They are great for testing how things actually work," one developer reported, "as opposed to what the documentation says, a detail we very much need to know when writing migration software. Also, to this day, if you write and compile software on 6.0, you can just about guarantee that it will run on 6.0, 6.5, 7.0 and 7.5 MPE/iX."

Some of the most vulnerable elements of machines from that epoch are their disk drives. 4GB units are installed inside most of them. Could something else replace these internal drives? It's a valid question for any 3000 that runs with these wee disks, but it becomes even more of an issue with the 9x7s. MPE/iX 7.0 and 7.5 are not operational on that segment of 3000 hardware.

Even though the LDEV1 drive will only support 4GB of space visible to MPE/iX 6.0 and 6.5, there's always LDEV2. You can use virtually any SCSI (SE SCSI or FW SCSI) drive, as long as you have the right interface and connector.

There's a Seagate disk drive that will stand in for something much older that's bearing an HP model number. The ST318416N 18GB Barracuda model -- once reported at $75, but now available for about $200 or so -- is in the 9x7's IOFDATA list of recognized devices, so it should configure straight in. Even though that Seagate device is only available as refurbished equipment, it will still arrive with a one-year warranty -- a lot longer than the warranty on any HP-original 9x7 disk still working in the community.

One developer quipped to the community, five years ago this week, "On the disc front at least that Seagate drive should keep those 3000s running, probably longer than HP remains a Computer Manufacturer."

But much like the 9x7 being offered for sale this year, five years later HP is still manufacturing computers, including its Unix and Linux replacement systems for any 3000 migrating users. 

So to refresh drives on the 9x7s, configure these Barracuda replacement drives in LDEV1 as the ST318416N -- it will automatically use 4GB (its max visible capacity) on reboot.

As for the LDEV2 drives, there's no practical logical size limit to worry about: anything under 300GB will work fine. 300GB was the limit for MPE/iX drives until HP released its "Large Disk" patches for MPE/iX, MPEMXT2/T3. But that patch wasn't written for the 9x7s, as they don't run 7.5.

Larger drives were not tested for these servers because of a power and heat dissipation issue. Some advice from the community indicates you'd do better to not greatly increase the power draw above what those original equipment drives require. The specs for those HP internal drives may be a part of your in-house equipment documentation. Seagate offers a technical manual for the 18GB Barracuda drive at its website, for power comparisons.

Posted by Ron Seybold at 07:51 PM in Hidden Value, Homesteading, Migration, User Reports | Permalink | Comments (2)

July 09, 2014

How to Employ SFTP on Today's MPE

Is anyone using SFTP on the HP 3000?

Gavin Scott, a developer and a veteran of decades on MPE/iX, says he got it to work reliably at one customer a year or so ago. "We exchanged SSL keys with the partner company," Scott said, "and so I don't think we had to provide a password as part of the SFTP connection initiation."

At least in my environment, the trick to not having it fail randomly around 300KB in transfers (in batch) was to explicitly disable progress reporting -- which was compiled into the 3000 SFTP client as defaulting to "on" for some reason. I forget the exact command that needed to be included in the SFTP command stream (probably "progress <mumble>" or something like that), but without that, it would try to display the SFTP progress bar. This caused it to whomp its stack or something similarly bad when done in a batch job, due to the lack of any terminal to talk to.

As SFTP is a pure Posix program, I ended up making Posix-named byte-stream files for stdin and stdout, and generally did all the SFTP stuff from the Posix shell. The MPE job ended up being a bunch of invocations of SH -c to execute an echo command to make the stdin file, and then another SH -c to run SFTP with a ;callci setvar varname -- $? or something like that -- on the end to capture the Posix process exit code back into the CI.

I also parsed/grepped the stdout file after the SFTP completed/exited, in order to look for the actual file-transfer message. I also wanted to make sure that all of the stdin content had been processed, so I could detect unexpected early termination or other problems that might not show up in $?.

That's all from memory, as I don't have access to the scripts any longer. In the end, SFTP was completely reliable, after working through all of its little issues.
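Translated out of MPE terms, the shape of the job logic Scott describes -- feed commands from a file, capture the exit code, then scan the captured stdout -- might look roughly like this Posix shell sketch. The actual sftp invocation is stood in by a placeholder command here, since the host, keys, and file names are all site-specific:

```shell
#!/bin/sh
# Build the stdin command file, run the transfer, capture the exit
# code, then scan the captured stdout for the expected message.
echo "put MYFILE" > sftpin             # commands the client will read
# The real job would run: sftp -b sftpin user@host > sftpout
sh -c 'cat sftpin' > sftpout           # placeholder standing in for sftp
rc=$?                                  # exit code, like $? back in the CI
if [ "$rc" -eq 0 ] && grep -q "put MYFILE" sftpout; then
  echo "transfer verified"
fi
```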

Posted by Ron Seybold at 08:50 PM in Hidden Value, Homesteading | Permalink | Comments (0)

June 18, 2014

The Long and Short of Copying Tape

Is there a way in MPE to copy a tape from one drive to another drive?

Stan Sieler, co-founder of Allegro Consultants, gives both long and short answers to this fundamental question. (Turns out one of the answers is to look to Allegro for its TapeDisk product, which includes a program called TapeTape.)

Short answer: It’s easy to copy a tape, for free, if you don’t care about accuracy/completeness.

Longer answer: There are two “gotchas” in copying tapes ... on any platform.

Gotcha #1: Long tape records

You have to tell a tape drive how long a record you wish to read. If the record is larger, you will silently lose the extra data.

Thus, for any computer platform, one always wants to ask for at least one byte more than the expected maximum record — and if you get that extra byte, strongly warn the user that they may be losing data.  (The application should then have the internal buffer increased, and the attempted read size increased, and the copy tried again.)

One factor complicates this on MPE: the file system limits the size of a tape record you can read.  STORE, on the other hand, generally bypasses the file system when writing to tape and it is willing to write larger records (particularly if you specify the MAXTAPEBUF option).

In short, STORE is capable of writing tapes with records too long to read via file system access. Free programs such as TAPECOPY use the file system; thus, there are tapes they cannot correctly copy.
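Sieler's defensive read -- ask for one byte more than the expected maximum, and warn if you actually get it -- can be sketched in shell terms against an ordinary file. The 80-byte maximum here is an assumption chosen for the demo:

```shell
#!/bin/sh
# Sketch of the long-record gotcha: read one byte more than the
# expected maximum record. Getting that extra byte means the record
# is longer than the buffer, and data would be silently lost.
printf '%081d' 0 > rec                       # an 81-byte "record" for the demo
max=80
got=$(( $(head -c $((max + 1)) rec | wc -c) ))
if [ "$got" -gt "$max" ]; then
  echo "warning: record exceeds $max bytes -- enlarge buffer and re-read"
fi
```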

Gotcha #2: Setmarks on DDS tapes

Some software creates DDS tapes and writes “setmarks” (think of them as super-EOFs). Normal file system access on the 3000 will not see setmarks, nor be able to write them.

Our TapeDisk product for MPE/iX (which includes TapeTape) solves both of the above problems. As far as I know, it’s the only program that can safely and correctly copy arbitrary tapes on an HP 3000.

Posted by Ron Seybold at 04:53 PM in Hidden Value, Migration | Permalink | Comments (0)

June 03, 2014

Paper clips play a role in 3000's guardian

The HP 3000 was designed for satisfactory remote access, but there are times when the system hardware needs to be in front of you. Such was the case for a system analyst who was adding a disk drive to an A-Class HP 3000.

Central to this process is the 3000's Guardian Service Processor (GSP). This portion of the A-Class and N-Class Multifunction IO card gives system managers basic console operations to control the hardware before MPE/iX is booted, as well as providing connectivity to manage the system. Functions supported by the GSP include displaying self-test chassis codes, executing boot commands, and determining installed hardware. (You can also read it as a speedometer for how fast your system is executing.)

The GSP was the answer to the following question.

I need to configure some additional disk drives, and I believe I'll need to reboot the server. The GSP is connected to an IP switch and I have the IP address for it, but it is not responding. I believe I need to enable it from the console. Can this be done from the soft console, using a PC as the console with a console # command?

A paper clip can reset the GSP and enable access, says EchoTech's Craig Lalley.

Lalley added that a GSP reset is an annual maintenance step for him.
Look on the back of the CPU and you will see a small hole labeled GSP RESET.  You need your favorite techie paper clip. Just insert the paper clip, and you will feel it depress. It takes about a minute to reset. Don't worry, it only reboots the GSP, and will not affect the HP 3000.

I find it is necessary to reset the GSP about once a year.  It seems to correlate to when you really need to get access, and you can't get physical access to the box. Good old Murphy's law.

Lalley calls the GSP, which HP introduced with its final generation of 3000s, one of the most useful things in the A-Class and N-Class boxes.

The GSP is a small computer that is always powered on whenever the plug has power. With it, it is possible to telnet in and act as the console. While multiple admins can telnet in and watch, only one has the keyboard.

It is possible to reboot, do memory dumps and even fully power down the HP 3000 from the GSP.  Use the command PC OFF to power down. The GSP is probably the best feature of the N-Class and A-Class boxes.

Allegro's Stan Sieler has a fine white paper online about MPE/iX system failure and hang recovery that includes GSP tips.

Posted by Ron Seybold at 11:03 PM in Hidden Value | Permalink | Comments (0)

May 30, 2014

Deleting 3000 System Disks That Go Bad

As Hewlett-Packard's 3000s age, their disks go bad. It's the fate of any component with moving parts, but it's especially notable now that an emulated 3000 is a reality. The newest HP-built 3000 is at least 11 years old by now. Disks that boot these servers might be newer, but most of them are as old as the computer itself.

A CHARON-based 3000 will have newer drives in it, because it's a modern Intel server with current-day storage devices. However, for the vast majority of 3000 system managers without a CHARON HPA/3000, the drives in their 3000s are spinning -- ever-quicker -- toward that day when they fail to answer the bell.

Even after replacing a faulty 3000 drive — which is not expensive at today's prices — there are a few software steps to perform. And thus, our tale of the failed system (bootup) disk.

Our disk was a MEMBER in MPEXL_SYSTEM_VOLUME_SET. I am trying to delete the disk off the system. Upon startup, the machine says that LDEV 4 is not available. When going into SYSGEN, then IO, then DDEV 4, it gives me a warning that it is part of the system volume set — cannot be deleted. I have done an INSTALL from tape (because some of the system files were on that device), which worked successfully. How do I get rid of this disk?

Gilles Schipper of GSA said that the INSTALL is something to watch while resetting 3000 system disks.

Sounds like the install did not leave you with only a single MPEXL_SYSTEM_VOLUME_SET disk. Could it be that you have more than one system volume after INSTALL because other, non-LDEV 1 volumes were added with the AVOL command of SYSGEN -- instead of the more traditional way of adding system volumes via the VOLUTIL utility?

You can check as follows:


If the resulting output shows more than one volume, that's the answer.

Schipper offered a repair solution, as well. 

Schipper's solution would use these steps:

1. Reboot with:


2. With SYSGEN, perform a DVOL for all non-LDEV1 volumes


4. Create a new System Load Tape (SLT)

5. Perform an INSTALL from the newly-created SLT

6. Add any non-LDEV1 system volumes with VOLUTIL. This will avoid such problems in the future.

Those SLTs are also a crucial component to making serious backups of HP 3000s. VeSoft's Vladimir Volokh told us he saw a commonplace habit at one shop: Neglecting to read the advice they'd received.

"I don't know exactly what to do about my SLT," the manager told him. "HP built my first one using a CD. Do I need that CD?"

His answer was no, because HP was only using the most stable media to build that 3000's first SLT. But Vladimir had a question in reply. Do you read the NewsWire? "Yes, I get it in my email, and my mailbox," she said. But just like other tech resources, ours hadn't been consulted to advise on such procedures, even though we'd run an article about 10 days earlier that explained how to make CSLTs. That tape's rules are the same as SLT rules. Create one each time something changes in your configuration for your 3000.

Other managers figure they'd better be creating an SLT with every backup. Not needed, but there's one step that gets skipped in the process.

"I always say, 'Do and Check,' " Vladimir reports. The checking of your SLT for an error-free tape can be done with the 3000's included utilities. The venerable TELESUP account, which HP deployed to help its support engineers, has CHECKSLT for you to run and do the checking.

There's also the VSTORE command of MPE/iX to employ in 3000 checking. If your MPE references come from Google searches instead of reading your Newswire, you might find it a bit harder to locate HP's documentation for VSTORE. You won't find what you'd expect in a 7.5 manual. HP introduced VSTORE in MPE/iX 5.0, so that edition of the manual is where its details reside.  (Thanks to Digital Innovations' HP MM Support website for its enduring MPE/iX manual archives).

It's also standard practice to include VSTORE in every backup job's command process.

There's another kind of manager who won't be doing SLTs. That's the one who knows how, but doesn't do the maintenance. You can't make this kind of administrator do their job, not any more than you can make a subscriber read an article. There's lots to be gained by learning skills that keep that 3000 stable and available, even in the event of a disk crash.

Posted by Ron Seybold at 12:40 PM in Hidden Value, Homesteading | Permalink | Comments (0)

May 15, 2014

Techniques for file copying, compressions

I need to submit a file from an HP 3000 to my credit card processor, a file with 80-byte records. Before I submit it, I need to zip the file. I’m using the Posix shell and its zip program. I SFTP’d the file, but my vendor is not processing it because it is supposedly 96 bytes long. If I unzip the file that I zipped, it becomes a bytestream file. I then check — by doing an FCOPY FROM=MYFILE;TO=;HEX;CHAR — and I see that no record exceeds 80 bytes. Why do they think it is a 96-byte file?

Barry Lake of Allegro replies:

I would convert it to a bytestream file before zipping it:

:tobyte.hpbin.sys "-at /SG2VER/PUB/LCAUTHOT /SOME/NEW/FILE"

Mark Ranft adds:

I would try copying the file to an intermediate server. Zip it. And SFTP it. See if that provides better results.

Tony Summers suggests there is good background instruction for understanding how MPE/iX files differ from those in Unix, in Robelle's MPE for Unix Users article.

I thought there was an option to FCOPY part of a record. If the record contains TODAY IS MONDAY and you want only columns 10-12, I thought there was an FCOPY subset option -- one that would result in just the characters in those positions (MON). Am I hallucinating?

Francois Desrochiers replies:

The SUBSET option is used to select records by record numbers or strings in certain columns, but you cannot select parts of records. It always works on complete records. You have to use other tools, such as Suprtool, Qedit, Editor, Quad, or the Posix "cut" command, to extract columns.
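For instance, the Posix cut tool handles the TODAY IS MONDAY case from the question directly, run from any Posix shell (including the 3000's):

```shell
# Extract columns 10-12 of each record -- something FCOPY's SUBSET
# option cannot do, since it only selects whole records.
echo "TODAY IS MONDAY" | cut -c10-12   # prints MON
```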

Olav Kappert adds:

You can also pipe the record into a variable and then parse out whatever you want.

Posted by Ron Seybold at 12:53 PM in Hidden Value, Homesteading | Permalink | Comments (0)

May 07, 2014

MPE automates (some) password security

It only took a matter of weeks to create an unpatched security threat to the world's most-installed vendor operating system, Windows XP. At about a 30 percent penetration of all PCs, XP is still running on hundreds of millions of systems. A zero-day Internet Explorer bug did get patched this month, reluctantly, by Microsoft. But once Microsoft cut its software loose -- just like HP stopped all MPE patches at the end of 2008 -- XP became vulnerable in just 20 days.

MPE, on the other hand, makes a backup file of its account structure that will defy an attempt to steal its critical contents. HP 3000 users can count on the work of an anonymous developer of MPE, even more than five years after patch creation ceased.

The automated protection of MPE's passwords comes through jobstreams from a key backup program. These files, created by using the BULDACCT program, are jobstreams that can only be read by 3000 users with CR (the jobstream's CReator, who might be an operator) or SM (System Manager) privileges, according to Jon Diercks' MPE/iX System Administration Handbook. Diercks advises his readers, "Even if your backup software stores the system directory, you may want to use BULDACCT as an extra precaution, in case any problems interfere with your ability to restore the directory data normally." However, he adds, the BULDJOB files are powerful enough to warrant extra care. After all, they contain "every password for every user, group and account, and lockwords for UDC files where necessary."

Note: the jobstream files you build on your own -- not these BULDJOBs -- can be secured on your own. But you must do that explicitly. These user-created streams' protection is not automatic.

In any case, you should use BULDACCT every day, according to Vesoft's Vladimir Volokh, not just as an optional extra precaution. "Do it before -- well, before it happens," he says. What can happen is a messy manure of a failure of an LDEV, one that scrambles the system directory. 

Put the BULDACCT option into your backup's stream file, so its jobstreams are created before your backups. Daily backups, of course. You're doing daily backups, right? And then storing that tape someplace other than the top of the HP 3000. You'd be surprised, said Volokh, how many 3000 sites use that storage location for a backup tape.

The BULDACCT option includes the jobstreams in the backup tape. After your backup is complete, you should PURGE these two streams from your 3000's disk.
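A sketch of that ordering in a backup stream -- run BULDACCT first so its jobstreams ride along on the tape, then purge them from disk. The job wrapper, device class, and file locations here are assumptions, not a site's actual job:

```
!JOB FULLBACK,MANAGER.SYS
!COMMENT Build the directory-rebuild streams first...
!RUN BULDACCT.PUB.SYS
!COMMENT ...so BULDJOB1 and BULDJOB2 ride along on the tape:
!FILE T;DEV=TAPE
!STORE @.@.@;*T;SHOW
!COMMENT Then remove the password-laden streams from disk:
!PURGE BULDJOB1.PUB.SYS
!PURGE BULDJOB2.PUB.SYS
!EOJ
```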

Those BULDACCT jobstreams (BULDJOB1 and BULDJOB2) are automatically secured at the file level. This protects the BULDACCT streams from hackers' pry-bars -- a very good thing, because these streams contain all system information, including passwords.

You can then RESTORE these streams if you still have a disk error that leaves files intact but ruins the directory structure. BULDJOB1 contains the instructions to rebuild the directory structure, a job that runs before you RESTORE files. BULDJOB2 contains the SETCATALOG commands needed to reassign all user, account and system UDCs, according to Diercks' fine book -- still available, by the way, online via O'Reilly's Safari e-book service.

Volokh says that if any of the above still seems unclear, 3000 managers can call him at Vesoft and he'll walk managers through the process. "For details, just call us. Don't chase the horse after the barn door has been opened."

Posted by Ron Seybold at 09:41 PM in Hidden Value, Homesteading | Permalink | Comments (0)

May 05, 2014

File ID errors mean a reach for BULDACCT

My system crashed. Now when I bring it back up, it starts to behave strangely, indicating several system files cannot be accessed. I can sign on as MANAGER.SYS, but most of the accounts that used to be on the system cannot be found. When I do a LISTF of PUB.SYS, most of the files have a message associated with them that reads as follows.


I believe the system disk experienced some “difficulties” at some point, and I’m not sure what happened or if it’s repairable. Of course I have a SYSGEN tape. But never having had to use one, I need to know if it contains the SYS account files necessary for me to begin reconstruction and reloading of accounts.

Paul Courry replies:

Bad UFID is a bad Universal File IDentifier. In other words, your file system is corrupted. You can try running FSCHECK.MPEXL.TELESUP (run with extreme care, reading the FSCHECK manual first). But considering the extent of the damage you probably will not be able to recover everything.

John Clogg replies:

Files, groups, and accounts on private volume sets are still there, but you will need to recreate the system directory entries for those accounts and groups. If you have BULDACCT output, that will make the job easier. It’s always a good idea to run BULDACCT periodically and store the result for just this eventuality.

Since you have missing accounts as well as the UFID problem, it seems your system directory is damaged. I think it’s a safe bet that your system volume set is clobbered. You need to do an INSTALL from your SLT. This will re-install your operating system and give you a brand new directory.

You will also need to restore the contents of your system volume set. Make sure you use the KEEP option so you won’t lose any files created by the INSTALL. You might want to purge or rename COMMAND.PUB.SYS before the restore, so you get your SETCATALOG definitions restored along with the files.

Larry Barnes notes:

Your SYSGEN tape may or may not have the SYS account on it. It depends on how the tape was created. You can generate a SYSGEN tape and have it include certain accounts. I usually include SYS and TELESUP on the tape.


Posted by Ron Seybold at 11:42 PM in Hidden Value | Permalink | Comments (0)

April 29, 2014

Foolproof Purges on the HP 3000

The software vendors most likely to sell products for a flat rate -- with no license upgrade fees -- have been the system utility and administration providers. Products such as VEsoft's MPEX, Robelle's Suprtool, and Adager's product of the same name came in one, or at most two, versions. The software was sold as the start of a relationship, and so it focused on the understanding the product provided for the people responsible for HP 3000s.

That kind of understanding might reveal a Lewis Carroll Cheshire Cat's smile inside many an HP 3000. The smile is possible if the 3000 uses UDC files, and the manager uses only MPE to do a file PURGE. There is a more complete way to remove things from a 3000's storage devices. And you should take care about this, because eliminating UDCs with only MPE can leave a user unable to use the server. That grin is the UDC's filename.

To begin, we assume your users have User Defined Commands. UDCs are a powerful timesaver for 3000 users, but they carry administrative overhead that the right tools can make foolproof. These UDCs need to be maintained, and as users drop off and come onto the 3000, their UDCs come and go. There's even a chance that a UDC file could be deleted while that file's name remains in the system's UDC master catalog. When that happens, any other UDCs associated with the user will fail, too. That might include some crucial commands; you can put a wide range of operations into a UDC.

When you add a third party tool to your administrator's box, you can make a purge of such files foolproof. You can erase the Cheshire Cat's grin as well as the cat. It's important because that grin of a filename, noted above, can keep valid users from getting work done on the server with UDCs. This is not the reputation anybody expects from a 3000.

First you have to find all of your UDCs on a system, and MPE doesn't make that as straightforward as you might think. SHOWCATALOG is the standard, included tool for this. But it has its limitations. It can display the system-level UDC files of all users in all accounts. But that's not all the UDCs on a 3000.

MPE, after all, cannot select and show a complete set of files by attributes such as program capability -- or, for that matter, by last-accessed time, file size, or file security. It's a long list of things MPE makes an administrator do on their own. Missing something might be the path to looking foolish.

Employing a couple of third party tools from VEsoft, VEAudit and MPEX, lets you root out UDCs and do a foolproof purge, including file names. VEAudit will list all of the UDCs on a server, regardless of user -- not just the ones associated with the user who's logged in and looking for UDCs. The list VEAudit creates can be inverted so the filename is the first item on each line. Then MPEX will go to work to do a PURGE. Not MPE's, but a user-defined purge that looks for attributes, then warns you about which ones you want to delete, or would rather not.

By using MPEX -- the X stands for extended functionality -- you can groom your own PURGE command to look out for files that have been recently used, not just recently created. MPE doesn't check if a purged file is a UDC file. 

Such 3000 utilities provided the server and its managers with abilities that went far beyond what HP had built into MPE and its IMAGE database. Now that MPE is moving on, beyond HP's hardware, knowing these third party tools will transfer without extra upgrade fees is like ensuring that a foolproof MPE will be running on any virtualized HP 3000.

They're an extra-cost item, but how much they're worth depends on a manager's desire to maintain a good reputation.

In the earliest days of the sale of these tools, vendors were known for selling them for the price of the support contract alone. That's usually about 20 percent annually of the purchase price. If a $4,000 package got sold that way, the vendor billed for just $800 at first. It made the purchases easier to pass through a budget, since support at the manager-tool level was an easier sell. Think about it. Such third parties passed up $3,200 per sale in revenues in the earliest days. They also established relationships that were ongoing and growing. They were selling understanding of MPE, not just software.

As we wrote yesterday, this kind of practice would be useful for the community's remaining software vendors. This is not the time to be raising prices to sustain MPE computing, simply because there's a way to extend the life of the hardware that runs MPE. As the number of MPE experts declines, the vendors will be expected to fill in the gaps in understanding. Those who can do this via support fees stand the best chance of moving into the virtualized future of 3000 computing.

Posted by Ron Seybold at 04:04 PM in Hidden Value, History, Homesteading | Permalink | Comments (0)

April 16, 2014

How to tell which failed drive is which LDEV

I have someone at a remote site that may need a drive replaced.  How can I tell which drive is a certain LDEV?

Keven Miller, who describes himself as "a software guy with a screwdriver," answers the question -- for those who don't have the benefit of seeing an amber light on a failed drive.

Well, for me, I run SYSINFO.PRVXL.TELESUP first. That gives you a map of LDEV numbers to SCSI paths. Next, you have to follow each SCSI path out to the physical drive.


On my 928, for example, 56/52 is the built-in SCSI path. Each disk has a hardware selection via jumpers to set an address from 0 to 6 (7 is the controller). You would have to inspect each drive, which could be one of the two internal ones, or any external ones.

On an A-Class, you have the two internal drives

0/0/1/1.15 (intscsia.15) (I think top drive)
0/0/2/1.15 (intscsib.15) (I think bottom drive)

Plus an external, Ultra2 wide on 0/0/1/0
Narrow single ended on 0/0/2/0
slot-1 on 0/2/0
slot-2 on 0/4/0
slot-3 on 0/6/2
slot-4 on 0/6/0

Then, depending on how the externals are housed, it could be just an address switch on the back of the housing case. I'm not sure about an N-Class, or a 9x7, or a 9x9, but the process is the same. If you're running anything more complex, like RAID, a hardware guy will help.

Hardware guy Jack Connor of Abtech adds

There's the 12H, NIKE, VA family, and XP disc frames that are the common arrays.

Or, if it's not an array, but something like a Jamaica disc enclosure, you can look at SYSGEN>IO>LD to determine which discs should be present, then do a :DSTAT ALL to see who's missing and record that path, including the SCSI address.

You then would go to the card that has the major path, such as 0/2/0/0, and follow that cable to the Jamaica enclosure. Look at the back to determine from the dip switch settings what each slot's SCSI address is. That would identify the failed drive.

Also, oftentimes with a Jamaica enclosure the failed drive will have either a solid green light on or, alternatively, be totally dark while all the other drives show activity (with flashing green lights).

Posted by Ron Seybold at 06:03 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 28, 2014

MPE's dates stay at home on their range

2028 is considered the afterlife for MPE/iX, and MPE in general, based on a misunderstanding of the CALENDAR intrinsic. The operating system was created in 1971, and its builders used 16 bits for dates, a state-of-the-art design at the time. Vladimir Volokh of VESOFT called to remind us that the choice of the number of bits for date representation probably seemed more than generous to a '71 programmer.

"What could anyone want with a computer from today, more than 50 years from now?" he imagined the designers saying in a meeting. "Everything will only last five years anyway." The same kind of choices led everybody in the computer industry to represent the year in applications with only two digits. And so the entire industry worked to overcome that limitation before Y2K appeared on calendars.

This is the same kind of thinking that added eight games to the Major League Baseball schedule more than 50 years ago. Now these games can be played on snowy baseball fields, because March 29th weather can be nothing like the weather of, say, April 8 in northern ballparks.

Testing the MPE/iX system (whether on HP's iron, or an emulator like CHARON) will be a quick failure if you simply SETCLOCK to January 1, 2028. MPE replies, "OUT OF RANGE" and won't set your 3000 into that afterlife. However, you can still experience and experiment with the afterlife by coming just to the brink of 2028. Vladimir says you can SETCLOCK to 11:59 PM on December 31, 2027, then just watch the system roll into that afterlife.

It goes on living, and MPE doesn't say that it's out of range, out of dates, or anything else. It rolls itself back to 1900, the base-year those '71 designers chose for the system's calendar. And while 1900 isn't an accurate date to use in 2028, 1900 has something in common with Y2K -- the last year that computers and their users pushed through a date barrier.
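The arithmetic behind that 2027 ceiling is simple, given the commonly described CALENDAR layout: a 16-bit value that packs a 9-bit day-of-year with a 7-bit year offset from that 1900 base year. A hedged sketch of the limit (the bit layout here is as commonly documented, not HP's code):

```shell
# 7 bits for the year offset can count 0..127 from the 1900 base year,
# so the last full year CALENDAR can represent is:
echo $(( 1900 + 127 ))    # prints 2027
# The 9-bit day-of-year field (0..511) comfortably covers 366 days.
```

Which is why December 31, 2027 is the brink, and the next tick wraps the year field back to its 1900 base.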

The days of the week are exactly the same for dates in 1900 as for the year 2000, Vladimir says. "It's ironic that we'll be back to Y2K, no?" he asked. VESOFT's MPEX has a calendar function to check such similarities, he added.

The MPE/iX system will continue to run in 2028, but reports which rely on dates will print incorrectly. That's probably a euphemism, that printing, 14 years from now. But it's hard to say what will survive, and for how long. Or as Vladimir reminded us, using a quote from Yankee baseball great Yogi Berra, "It's tough to make predictions, especially about the future."

The year 2028 was 67 years into the future when the initial MPE designers chose the number of bits to represent CALENDAR dates. Who'd believe it might matter to anyone? "Will Stromasys continue to run after 2028?" asked one ERP expert a few years back during a demo. "Just as well as MPE will run," came the reply, because CHARON is just a hardware virtualization. The operating system remains the same, regardless of its hosting.

And as we pointed out yesterday, one key element of futuristic computing will be having its own date crisis 10 years after MPE's. Linux has a 2038 deadline (about mid-January) for its dates to stop being accurate. Linux-based systems, such as the Intel servers that cradle CHARON, will continue to run past that afterlife deadline. And like the Y2K days of the week that'll seem familiar in MPE's 2028, an extension for Linux date-handling is likely to appear in time to push the afterlife forward.

Perhaps in time we can say about that push-it-forward moment, "You could look it up." Another quote often misunderstood, like the 2028 MPE date, because people think Berra said that one, too. It's not him, or the other famous king of malapropisms Casey Stengel. You Could Look It Up was a James Thurber short story, about a midget who batted in a major league game. Fiction that became fact years later, when a team owner used the stunt in a St. Louis Browns ballgame by batting Eddie Gaedel. You never know what part of a fantasy could come true, given enough time. Thurber's story only preceded the real major-league stunt by 10 years. We've still got more than 13 years left before MPE's CALENDAR tries to go Out of Range.

Posted by Ron Seybold at 02:47 PM in Hidden Value, Homesteading, Newsmakers | Permalink | Comments (0)

March 27, 2014

Beyond 3000's summit, will it keep running?

Guy Paul (left) and Craig Lalley atop Mt. Adams, with their next peak to ascend (Mt. Hood) on the horizon.

If you consider the last 40 years and counting to be a steady rise in reputation elevation for the HP 3000 and MPE -- what computer's been serving business longer, after all? -- then 2027 might be the 3000's summit. A couple of 3000 experts have climbed a summit together, as the photo of Guy Paul and Craig Lalley above proves. What a 3000 might do up there in 20 years prompted some talk about 2027 and what it means.

The two 3000 veterans were climbing Washington state's second-highest mountain, Mt. Adams, whose summit is at 12,280 feet. On their way up, Paul and his 14-year-old grandson had just made the summit and ran into Lalley and his 14-year-old son on their way to the top.

The trek was announced on the 3000 newsgroup last year. At the time, some of the group's members joked that a 3000 could climb to that elevation if somebody could haul one up there. "Guy is a hiking stud," said his fellow hiker Lalley. "Rumor has it that Guy had a small Series 989 in his back pack. I wasn't impressed until I heard about the UPS."

After some discussion about solar-powered computing, someone else said that if it was started up there on Mt. Adams with solar power, the 3000 would still be running 20 years later.

Then a 3000 veteran asked, "But won't it stop running in 2027?" That's an important year for the MPE/iX operating system, but not really a date of demise. Such a 3000 -- any MPE/iX system -- can be running in 20 years, but it will use the wrong dates. Unless someone rethinks date handling before then.

Jeff Kell, whose HP 3000s stopped running at the University of Tennessee at Chattanooga in December because of a post-migration shutdown, added some wisdom to this future of date-handling.

"Well, by 2027, we may be used to employing mm/dd/yy with a 27 on the end, and you could always go back to 1927. And the programs that only did "two-digit" years would be all set. Did you convert all of 'em for Y2K? Did you keep the old source?"

Kell added that "Our major Y2K issue was dealing with a "semester" which was YY01 for fall, YY02 for spring, and so forth. We converted that over to go from 9901 (Fall 1999) to A001 (Fall 2000), so we were good for another 259 years on that part. Real calendar dates used 4-digit years (32-bit integers, yyyymmdd)."
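Kell's semester scheme stretches a two-character year by letting the first character run A through Z: A0 is 2000, A1 is 2001, up through Z9 in 2259, which is where the "another 259 years" comes from. A hedged sketch of the decoding, with the code layout inferred from his description:

```shell
# Decode a semester code like A001 (letter+digit year, two-digit term).
# Layout inferred from Kell's description; 01 = fall term.
awk 'BEGIN {
  letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
  code    = "A001"                                   # Fall 2000
  decade  = index(letters, substr(code, 1, 1)) - 1   # A -> 0
  year    = 2000 + decade * 10 + substr(code, 2, 1)
  term    = substr(code, 3, 2)
  printf "year=%d term=%s\n", year, term             # year=2000 term=01
}'
```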

At that summit, Paul said, the two climbers "talked for a few minutes [and] we made tentative plans to climb Oregon's tallest mountain, Mt. Hood, pictured in the background. We have since set a date of May 16th."

We've written before on the effects of 2027's final month on the suitability of the 3000 for business practice. Kell's ideas have merit. I believe there's still enough wizardry in the community to take the operating system even further upward. The HP iron, perhaps not so much: by the year 2028, even the newest servers will be 25 years old. Try to imagine a 3000 that was built in 1989, running today.

Better yet, please report to us if you have such a machine, hooked up in your shop.

Why do people climb mountains? The legend is that the climber George Mallory replied, "because it is there." 2028 is still there, waiting for MPE to arrive. Probably on the back of some Intel-based server, bearing Linux -- unless neither of those survives another 14 years. For Intel, this year marks 15 years of service for the Xeon processor, currently on the Haswell generation. Another 25 years, and Xeon will have done as much service as MPE has today.

There is no betting line on the odds of survival for Xeon into the year 2039. By that date, even Unix will have had its own date-handling issue. The feeling in the Linux community is that a date solution will arrive in time.

Posted by Ron Seybold at 04:44 PM in Hidden Value, Homesteading, User Reports | Permalink | Comments (0)

March 25, 2014

How to Delete All But the Last 5 Files

On our Series 937 I need a routine that will delete all but the last five files in a group, where the files begin with certain values and have a certain pattern to the file names.

Example: We keep old copies of our PowerHouse dictionaries, but only need the last five. I cannot do it by date like other groups of files, since a dictionary does not get changed every day. Sometimes we'll go weeks, even months, before we make a change.

I have a routine for other groups of files (interface files) that get created every day and keep only the last 31 days. This is done very easily with VESOFT’s MPEX by simply checking the create date. I was wondering if anyone has a routine either in JCL or MPEX that will keep the last 5 instances of these files. The two file-naming conventions are PT###### and PL######. The ###### represent MMDDHH (month, day, hour).

A wide range of solutions emerged from HP 3000 experts, veterans and consultants.

Francois Desrochers replies

How about doing a LISTF and using PRINT to select all but the last 5 into another file (PTPURGE)? Something along these lines (a negative END counts back from the end of the file):

:LISTF PT######,6;PTLIST
:PRINT PTLIST;END=-6;OUT=PTPURGE

You could massage PTPURGE and turn each line into a PURGE. It has been a while since I used MPEX, but maybe it has an indirect file function, e.g. %PURGE ^PTPURGE.

Of course MPEX has such a function. Vladimir Volokh of VESOFT supplied an elegant solution involving a circular file, a feature added to MPE/iX more than 15 years back.

First, build an MPE circular file that holds just five records (look at HELP BUILD ALL; the nearly 1,000 lines that follow include an explanation of the CIR parameter). We use logfiles in the example below:

:BUILD V;REC=-80,,F,ASCII;DISC=5;CIR
:LISTF LOG####,6;*V

(MPE's asterisk, by the way, can be used in about 19 different ways, Vladimir adds.)

Because the file is circular, older names scroll off the top, so V ends up holding only the last five file names. Now you purge, using MPEX -- because purging one fileset minus another is an MPEX-only function. The caret sign signals all the files named in file V:

%PURGE LOG####-^V
There are other solutions available that don't require a third-party gem like MPEX. 

Olav Kappert replied

This is easy enough to do. Here are the steps:

Do a listf into a file 'foo'

Set 'count' = end-of-file count
Set 'index' to 1
Set 'maxindex' to 'count' - 5
Read 'foo'
Increment 'index' by 1
If 'index' < 'maxindex' then
Purge file
Loop to read 'foo'

The exact syntax is up to you and MPE.
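Olav's steps translate almost line for line into the Posix shell. A runnable sketch, using hypothetical PT file names (on MPE itself you'd drive the same logic from LISTF output and CI variables):

```shell
# Build a scratch directory of sample PT files, oldest names first.
cd "$(mktemp -d)"
for f in PT010101 PT020202 PT030303 PT040404 PT050505 PT060606 PT070707
do
    : > "$f"
done

ls PT* > foo                    # step 1: listf into a file 'foo'
count=$(wc -l < foo)            # step 2: 'count' = end-of-file count
maxindex=$((count - 5))         # everything past this index is kept
index=0
while read -r name; do          # read 'foo', purging the early entries
    index=$((index + 1))
    if [ "$index" -le "$maxindex" ]; then
        rm "$name"
    fi
done < foo

ls PT*                          # the five newest names survive
```

The PT######/PL###### naming (MMDDHH) means an ascending name sort is also oldest-to-newest, which is what makes a simple counting loop safe here.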

Barry Lake adds

Very simple if you're willing to use the Posix shell. If this needs to be done with CI scripting, it's certainly possible, but way more complicated. Someone else may chime in with an "entry point" command file to do this in "pure" MPE. But here's the shell method:

[Screenshot: Posix shell commands to delete all but the last 5 files]

So... move the last 5 out of the way, delete whatever's left, then move the 5 back into place.
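Barry's move-aside method can be sketched as a runnable script. The sample names and the keep directory are hypothetical; again, the naming convention makes an ascending name sort chronological:

```shell
cd "$(mktemp -d)"
for f in PT010101 PT020202 PT030303 PT040404 PT050505 PT060606; do
    : > "$f"
done

mkdir keep
# Move the last 5 (the highest-sorting, newest names) out of the way...
ls PT* | tail -5 | while read -r f; do mv "$f" keep/; done
# ...delete whatever's left...
rm -f PT*
# ...then move the 5 back into place.
mv keep/* .
rmdir keep

ls PT*    # only the five newest remain
```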

Posted by Ron Seybold at 05:39 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 11, 2014

FIXCLOCK alters software, hardware clocks

My HP 3000 system was still on EST, so I wanted to change it during startup. I answered "N" to the date/time setting at the end of startup, and it refused my entry of 03/09/14; it returned a question mark. After several quick carriage returns, it set the clock back to 1 Jan 85, which is where it is now waiting.

Gilles Schipper of GSA responds:

While the system is up and running, you could try:

:setclock ;date=mm/dd/yyyy;time=hh:mm
:setclock  ;cancel 
:setclock ;timezone=w5:00 (for example)
:setclock ;cancel (again)

Brian Edminster of Applied Technologies notes:

I'd been quite surprised by how many small 'single machine' shops don't properly set the hardware clock to GMT with the software clock offset by 'timezone'. Instead, they have their hardware and software clocks set to the same time, use 'setclock correction=' and give either a +3600 or -3600, for the spring or fall time changes.

Allegro's got a simple command file called FIXCLOCK, on their Free Allegro Software page, that allows fixing the hardware clock AND properly setting the time-offset for the software clock -- all without having to take the system down.

Here's the jobstream code for both the spring and fall time changes. You can use this and modify it for your specific needs. Note that it's set up for the US Eastern time zone. (That's the TIMEZONE = W5:00 line -- the number of hours west of GMT -- and the TIMEZONE = W4:00 line.) Modify these lines as necessary for your time zone.

!JOB TIMECHG,MANAGER/user-passwd.SYS/acct-passwd;hipri;PRI=CS;OUTCLASS=,1
!setvar Sunday,    1
!setvar March,     3
!setvar November, 11
!if hpday = Sunday and &
!   hpmonth = November and &
!   hpdate < 8 then
!   comment (first Sunday of November)
!   TELLOP ********************************************
!   TELLOP Changing the system clock to STANDARD TIME.
!   TELLOP The clock will S L O W   D O W N  until
!   TELLOP we have fallen back one hour.
!   TELLOP ********************************************
!   SETCLOCK ;TIMEZONE=W5:00
!elseif hpday = Sunday and &
!       hpmonth = March and &
!       hpdate > 7 and hpdate < 15 then
!   comment (second Sunday of March)
!   TELLOP *********************************************
!   TELLOP Changing the system clock to DAYLIGHT SAVINGS
!   TELLOP TIME.  The clock jumped ahead one hour.
!   TELLOP *********************************************
!   SETCLOCK ;TIMEZONE=W4:00
!else
!   comment (no changes today!)
!   TELLOP *********************************************
!   TELLOP No Standard/Daylight Savings Time Chgs Req'd
!   TELLOP *********************************************
!endif
!comment - to avoid 'looping' on fast CPU's pause long enough for
!comment - local clock time to be > 2:00a, even in fall...
!while hphour = 2 and hpminute = 0
!   TELLOP Pausing 1 minute... waiting to pass 2am
!   TELLOP Current Date/Time: !HPDATEF - !HPTIMEF
!   showtime
!   pause 60
!endwhile
!stream timechg.jcl.sys;day=sunday;at=02:00
!EOJ

Do a showclock to confirm results. Careful, though, of any existing running jobs or sessions that may be clock-dependent.

Posted by Ron Seybold at 10:00 PM in Hidden Value | Permalink | Comments (0)

March 10, 2014

Getting 3000 clocks up to speed, always

The US rolled its clocks forward by one hour this past weekend. There are usually questions in this season about keeping 3000 clocks in sync, for anyone who hasn't figured this out over the last several years. US law has altered our clock-changing weekends during that time, but the process to do so is proven.

Donna Hofmeister, whose firm Allegro Consultants hosts the free nettime utility, explains how time checks on a regular basis keep your clocks, well, regular.

This past Sunday, when using SETCLOCK to set the time ahead one hour, should the timezone be advanced one hour as well?

The cure is to run a clock setting job every Sunday and not go running about twice a year. You'll gain the benefit of regular scheduling and a mostly time-sync'd system.

In step a-1 of the job supplied below you'll find the following line:

    !/NTP/CURRENT/bin/ntpdate "-B"

Clearly, this path needs to be changed to match your own NTP installation (and its time server).

If for some dreadful reason you're not running NTP, you might want to check out 'nettime'. And while you're there, pick up a copy of 'bigdirs' and run it -- please!

Also, this job depends on the variable TZ being set -- which is easily done in your system logon UDC with, for example:

SETVAR TZ "EST5EDT"
Adapt as needed. And don't forget -- if your tztab file is out of date, just grab a copy from another system. It's just a file.

This job below was adapted from logic developed by Paul Christidis:

!# from the help text for setclock....
!# Results of the Time Zone Form
!#   If the change in time zone is to a later time (a change to Daylight
!#   Savings Time or an "Eastern" geographic movement), both local time
!#   and the time zone offset are changed immediately.
!#   The effect is that users of local system time will see an immediate
!#   jump forward to the new time zone, while users of Universal Time
!#   will see no change.
!#   If the change in time zone is to an earlier time (a change from
!#   Daylight Savings to Standard Time or a "Western" geographic
!#   movement), the time zone offset is changed immediately.  Then the
!#   local time slows down until the system time corresponds to the
!#   time in the new time zone.
!#   The effect is that users of local system time will see a gradual
!#   slowdown to match the new time zone, while users of Universal Time
!#   will see an immediate forward jump, then a slowdown until the
!#   system time again matches "real" Universal Time.
!#   This method of changing time zones ensures that no out-of-sequence
!#   time stamps will occur either in local time or in Universal Time.
!showjob job=@j
!TELLOP =====================================  SETTIME   A-1
!/NTP/CURRENT/bin/ntpdate "-B"
!if hpcierr <> 0
!  echo hpcierr !hpcierr (!hpcierrmsg)
!  showvar
!  tellop NTPDATE problem
!endif
!tellop SETTIME -- Pausing for time adjustment to complete....
!pause 60
!TELLOP =====================================  SETTIME   B-1
!setvar FallPoint &
!   (hpyyyy<=2006 AND (hpmonth = 10 AND hpdate > 24)) OR &
!   (hpyyyy>=2007 AND (hpmonth = 11 AND hpdate < 8))
!setvar SpringPoint &
!   (hpyyyy<=2006 AND (hpmonth =  4 AND hpdate< 8)) OR &
!   (hpyyyy>=2007 AND (hpmonth =  3 AND (hpdate > 7 AND hpdate < 15)))
!# TZ should always be found
! if hpday = 1
!    if SpringPoint
!# switch to daylight savings time
!      setvar _tz_offset   ![rht(lft(TZ,4),1)]-1
!      setclock timezone=w![_tz_offset]:00
!    elseif FallPoint
!# switch to standard time
!      setvar _tz_offset   ![rht(lft(TZ,4),1)]
!      setclock timezone=w![_tz_offset]:00
!    endif
!  endif
!TELLOP =====================================  SETTIME   C-1
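The offset arithmetic in step B-1 can be checked outside MPE. The CI expression ![rht(lft(TZ,4),1)] takes the first four characters of TZ and then the last one of those, i.e. the standard-time hour digit. A shell equivalent, with a hypothetical TZ value:

```shell
# Mimic the CI's rht(lft(TZ,4),1): fourth character of the TZ string.
TZNAME="EST5EDT"                        # hypothetical TZ setting
std=$(echo "$TZNAME" | cut -c4)         # standard-time hours west of GMT
dst=$((std - 1))                        # daylight time is one hour less
echo "standard: w${std}:00  daylight: w${dst}:00"
```

For EST5EDT this yields w5:00 in standard time and w4:00 in daylight time, matching the SETCLOCK values the job computes. (It assumes a single-digit offset, just as the CI expression does.)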

Mark Ranft of 3k Pro added some experience with international clocks on the 3000.

If international time conversion is important to you, there are two additional things to do.

1) Set a system-wide UDC to set the TZ variable. (And perhaps account UDCs if accounts are for different locations)

:showvar tz

2) There is also a tztab.lib.sys that needs to be updated when countries change when, or whether, they observe DST.

:l tztab.lib.sys
ACCOUNT=  SYS         GROUP=  LIB     

FILENAME  CODE  ------------LOGICAL RECORD-----------  ----SPACE----
                 SIZE  TYP        EOF      LIMIT R/B  SECTORS #X MX

TZTAB            1276B  VA         681        681   1       96  1  8

:print tztab.lib
# @(#) HP C/iX Library A.75.03  2008-02-26

# Mitteleuropaeische Zeit, Mitteleuropaeische Sommerzeit
0 3 25-31 3  1983-2038 0   MESZ-2
0 2 24-30 9  1983-1995 0   MEZ-1
0 2 25-31 10 1996-2038 0   MEZ-1

# Middle European Time, Middle European Time Daylight Savings Time 
<< snipped >>

Posted by Ron Seybold at 10:17 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 05, 2014

What does a performance index represent?

I know this may be a tough question to answer, but thought I'd at least give it a try.

I'm doing an analysis to possibly upgrade our production 959KS/100 system to a 979KS/200, and I see the Hewlett-Packard performance metric chart that tells me we go from a 4.6 to 14.6. What does that increase represent? For instance, does each whole number (like 4.0 to 5.0) represent a general percentage increase in performance? I know it varies from one shop to another, so I'm just looking for a general guideline or personal experience -- like a job that used to take 10 hours to run now only takes 7 hours. The "personal experience" part of this may not even be appropriate, in that the upgrades may not be close to the metrics I am looking at.

Peter Eggers offers this reply, still worthy after several years

Those performance numbers are multiples of a popular system way back when, based on an average application mix as determined by HP after monitoring some systems and probably some system logs of loads on customer systems. No information here as to where you are on the many performance bell curves. The idea is to balance your system resources to match your application load, with enough of a margin to get you through to the next hardware upgrade.
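As a rough rule of thumb only: if the indexes scaled linearly with CPU-bound throughput (the big "if" Eggers is warning about), the 4.6-to-14.6 jump would cut a 10-hour job to a bit over 3 hours. The ideal-case arithmetic, using the question's numbers:

```shell
# Ideal-case scaling: elapsed_new = elapsed_old * (old_index / new_index).
awk 'BEGIN { printf "%.1f hours\n", 10 * 4.6 / 14.6 }'    # prints 3.2 hours
```

Real workloads land somewhere short of that, since I/O, memory, and locking don't speed up with the CPU rating.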

People mention system and application tuning. You have to weigh time spent tuning and expected resource savings against the cost of an upgrade with the system and applications as is. Sometimes you can gain amazing savings with minor changes and little time spent. Don't forget to add in time to test, QA, and admin time for change management.

There are many things to consider: CPU speed and any on-chip caching; memory cache(s) size and speed; main memory size and speed; number of I/O channels and bandwidth; online communication topography, bandwidth, and strategy; online vs. batch priorities, and respective time slices; database and file design, access, locking, and cache hit strategies; application efficiency, tightening loops to fit memory caches, and compiler optimizations; and system load leveling.

Since you didn't understand the performance numbers, you might hire a good performance consultant who knows the HP 3000. Of course, look for the "low-hanging fruit" first for the biggest bang for the buck, and continue up the tree until you lose a net positive return on time invested.

You'll also hear it mentioned that adding memory won't help if the system is IO-bound. That is typically not the case, as more memory means more caching which can help eliminate IOs by retrieving data from cache, sometimes with dramatic improvements. This highlights the need for a good performance guru -- as it is easy to get lost in the details, or not be able to see "the big picture" and how it all fits together.

Aside from Eggers' advice, we take note of the last time HP rated its 3000 line.

At HP World in 2002, HP announced the final new 3000 systems, all based upon the PA-8700 processors. At the high end, HP announced a new N-Class system based upon the 750 MHz PA-8700 processor. The new N4000-400-750 was the first HP e3000 to achieve an MPE/iX Relative Performance Units (MRPU) rating of 100; the Series 918 has an MRPU of 1.

HP contends that the MRPU is the only valid way to measure the relative performance of MPE systems. In particular, they maintain that the MHz rating is not a valid measure of relative performance, though they continue to use virtual MHz numbers for systems with software-crippled processors. For example, there are no 380 MHz or 500 MHz PA-RISC processors. Unfortunately, the MRPU does not allow for the comparison of the HP e3000 with other systems, even the HP 9000.

HP has changed the way it rates systems three times over the life of the HP 3000. During the middle years, the Series 918 was the standard with a rating of 1. In 1998, HP devised a new measurement standard for the systems it was introducing that no longer had the Series 918 at 1. It is under this new system that the N4000-400-750 is rated at 100. Applying a correction factor, AICS Research has rated the N4000-400-750 at 76.8 relative to the Series 918’s rating of 1.

Posted by Ron Seybold at 08:36 PM in Hidden Value, Homesteading, News Outta HP | Permalink | Comments (0)

March 04, 2014

Experts show how to use shell from MPE

I am attempting to convert a string into a number for use in timing computations inside an MPE/iX job stream. In the Posix shell I can do this:

/SYS/PUB $ echo "21 + 21" | bc
42

But from the MPE command line this returns blank:

run sh.hpbin.sys;info='-c echo "21 + 21" | bc'

But why? I would like to calculate a formula containing factors of arbitrary decimal precision and assign the integer result to a variable. Inside the shell I can do this:

shell/iX> x=$(echo "31.1 * 4.7" | bc)
shell/iX> echo $x
146.1
shell/iX> x=$(echo "31.1 * 4.7 + 2" | bc)
shell/iX> echo $x
148.1
shell/iX> x=$(echo "31.1 * 4.70 + 2" | bc)
shell/iX> echo $x
148.17

What I would like to do is the same thing albeit at the MPE : prompt instead, and assign the result to an MPE variable.

Donna Hofmeister of Allegro replies

CI numeric variables only handle integers (whole numbers). If your answer needs to be expressed with a decimal value (like the 148.17 shown above), you might be able to express it as a string to the CI (setvar string_x "!x").

This is really sounding like something that's best handled by another solution -- like a compiled program or maybe a perl script.

For what it's worth, the perl bundle that's available from Allegro has the MPE extensions included. This means you could take advantage of perl's 'getoptions' as well as 'hpcicmds' (if you really need to get your result available at the CI level).

Barry Lake of Allegro adds

The answer to your question of why, for the record, is that the first token is what's passed to the shell as the command to execute. In this case, the first token is simply "echo", and the rest of the command is either eaten or ignored.

To fix it, the entire command needs to be a single string passed to the shell, as in:

:run sh.hpbin.sys; info='-c "echo 21 + 21 | bc"'

And if you want to clean that up a bit you can use XEQ instead of RUN:

 :xeq sh.hpbin.sys '-c "echo 21 + 21 | bc"'

Or, you can do it with, for example, Vesoft's MPEX:

 : mpex

 MPEX/3000  34N60120  (c) VESOFT Inc, 1980  7.5  04:07407  For help type 'HELP'

 % setvar pi 3.14159
 % setvar r  4.5
 % calc !pi * !r * !r
 % setvar Area !pi * !r * !r
 % showvar Area
 AREA =            63.617191
 % exit


But the only thing I want is to be able to use a compiled program which handles arbitrary-precision variables from inside a job stream — such that I can return the integer part of the result to an MPE/iX variable.

Barry Lake replies

If you're happy with truncating your arithmetic result — that is, lopping off everything to the right of the decimal point, including the decimal point — then here's one way to do it:

/SYS/PUB $ echo "31.1 * 4.7" | bc
146.1
/SYS/PUB $ echo "31.1 * 4.7" | bc | cut -f1 -d.
146
/SYS/PUB $ callci setvar result $(echo "31.1 * 4.7" | bc | cut -f1 -d.)
/SYS/PUB $ callci showvar result
RESULT = 146
/SYS/PUB $ exit

 : showvar result
 RESULT = 146

Perfect! Thank you. And this construct accepts CI VAR values as I require.

:SETVAR V1 "31.1"
:SETVAR v2 "4.7"
:XEQ SH.HPBIN.SYS;INFO='-c "callci setvar result $(echo ""!V1 * !V2"" | bc |
cut -f1 -d.)"'
RESULT = 146

Posted by Ron Seybold at 05:55 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 28, 2014

How MPE Balances New Disk Space

If we have a system (volume set) with mostly full disks, and I add a new big empty disk to it, how will MPE/iX handle new allocation — will it put everything on that disk, or wait until it fills up to the same relative fullness as the existing drives?

See, we have a system with a Nike Model 20 and a bunch of RAID 1 LUNS, and we’ve added five new drives in RAID 5 to the system volume set. But that sounds like we’re on the cusp of a disaster, because while the read performance is measurably better, all the system is going to be doing is writes to this drive for every new extract and scratch file. And as everybody knows, the write performance is like 2.8 times slower to the RAID 5 LUN than the RAID 1 LUN.

[Corrected, to identify the BALANCE command as a part of DeFrag/X.]

Craig Lalley noted, "There is a command you will want to use if you have Defrag/X [created by Lund, sold by Allegro]. The command is BALANCE VS.

"There's online help for this command in Defrag at HELP BALANCE. Without that, I would use system logging to determine the most heavily accessed files and store/restore them to spread the extents."

And there's also help to manage this kind of balancing and defragmentation from VEsoft, as well as that Lund tool.

HP designed the 3000's storage management so that the MPE/iX algorithm will be picking which "most empty" disk to write to next based on percentage full, not sector counts.

MPE watches the percentage full that will cause a switch to another disk, and the watching has gotten more precise. An older algorithm would wait until there was a 1 percent difference in fullness before switching -- and as an example, that would mean 3 GB of data on a 300 GB disk. Now MPE waits until there's just a .01 percent difference.
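A toy model of that percent-full rule (purely illustrative, not HP's code) shows why the questioner's fear is well founded: the big empty LUN soaks up every new write until its fullness percentage catches up with the rest of the set.

```shell
# Always write to the disk with the lowest percentage-full figure.
awk 'BEGIN {
  used[1] = 290; cap[1] = 300    # old RAID 1 LUN, about 97 percent full
  used[2] =   0; cap[2] = 600    # new RAID 5 LUN, empty
  for (w = 1; w <= 100; w++) {   # 100 equal-size writes
    pick = (used[1]/cap[1] <= used[2]/cap[2]) ? 1 : 2
    used[pick]++
  }
  printf "old LUN: %d/%dGB  new LUN: %d/%dGB\n", used[1], cap[1], used[2], cap[2]
}'
```

All 100 writes land on the new LUN, since even at 100 GB it is still far emptier, percentage-wise, than the old one.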

MPEX from VEsoft can be an aid for manual load balancing after disk installation, before production use. It has commands that can be used to build a DEFRAG process. There's also that DeFrag/X software from Lund to manage storage assignment.

As a last resort, one HP MPE veteran suggests that if nothing else helps, and no MPEX or DEFRAG are at hand, "managers could try to limit the penalty by moving large but less "interesting" files/accounts to the new LDEV (freeing space on the old LDEVs) and then use the VOLUTIL ALTERVOL command to limit the remaining PERM space on the new LDEV afterwards. Yes, a somewhat insane approach, I know."

Posted by Ron Seybold at 08:14 PM in Hidden Value | Permalink | Comments (0)

February 24, 2014

Expanding that Posix Shell on the 3000

Way back in the middle 1990s, HP added the Posix shell to the HP 3000, so customers who had Unix and MPE running in the same shop could train operators and managers with a single set of commands. Posix was a plus, making the 3000 appear more Unix-like (which seemed important at the time).

It's been said that Posix was a promise only partly fulfilled for the 3000. There was a move to make the system more inclusive, to make it possible to port Unix software onto MPE/iX. Alas, a tech roadblock called the Fork of Death stood in the way of more widespread porting.

Over the years, however, Posix has been a feature to be discovered for most 3000 managers and operators. HP intended it to be essential; the computer's operating system was renamed from MPE/XL to MPE/iX just to call attention to these added Posix, Unix-like capabilities.

MPE failed in the Posix world primarily because of the Unix "fork()" concept, so critical to the very nature of all that is Unix. It is a totally alien concept to MPE. MPE was designed to easily add new users to an executing process, and maintain the security and integrity of each individual user. It was not designed to duplicate a current process's environment, including the local data and state, because there was no point.

As one sage developer said of the deathly fork, "Yes, MPE would fork(), but very reluctantly, and very slowly. So nothing that depended on it worked very well."

But enough history; Posix is still on the 3000 and remains a powerful interface tool, an alternative to the CI interface that HP created for the system. You can even call Posix commands from the CI, a nifty piece of engineering when it can be done. That's not always possible, though. A customer wanted to know how to "expand wildcard shells" using Posix. He tried from the CI and had this story to relate.

ls: File or directory “/BACKUPS/HARTLYNE/S*” is not found

So how do I do this? I need to be able to tell tar to archive all of the reels of a STD STORE set via a regexp. It does not work in tar, and it apparently does not in ls, so I speculate that there is something special about the invocation of Posix utilities from the CI that I am not aware of. What is it?

Jeff Vance, the 3000 CI guru while at HP, who's gone on to work in open system and open source development, said this in reply:

Wildcards on most (all) Unix systems, including Posix implementations, are done by the shell, not the individual programs or in-lined shell commands, like ls in your example. A solution is to run the shell and execute ll from within.

Greg Stigers then supplied the magic Posix shell command to do the expansion:
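Greg's exact command didn't survive in this archive, but the usual approach follows directly from Jeff's point: since the shell, not the utility, does the expansion, run the command from inside the shell rather than from the CI. A hypothetical sketch, mirroring the run-from-CI style used elsewhere on the 3000 (the path is the customer's own example):

```
:run sh.hpbin.sys;info="-c 'ls /BACKUPS/HARTLYNE/S*'"
```

Invoked this way, the shell expands the wildcard before ls ever sees it, and the same trick works for handing an expanded file list to tar.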


In a note of thanks, the customer said that getting the answer by working with the HP 3000 community's newsgroup "is like having an entire IT department right outside my door."

An interesting footnote if you've read this far: The Posix shell for the 3000 is one part of the operating system not built by HP. The shell was licensed by HP from MKS, and Hewlett-Packard pays royalties to MKS so Posix can work inside of MPE/iX.

For now, enjoy using Posix as a way to get familiar with the commands in Unix systems. In the great majority of instances, these commands are the same.

Posted by Ron Seybold at 08:29 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 17, 2014

Durable advice speeds up HP 3000s

Our editor Gilles Schipper posted a fine article on improving CPU performance on 3000s "in a heartbeat." One of our readers asked a question which prompted Gilles to clarify part of the process to speed up a 3000, for free.

Gilles, who offers HP 3000 and HP 9000 support through his firm GSA, Inc., has also replied to a recent question about how to make a DLT backup device return to its usual speed, after it slowed to about a third of its former performance.

The Heartbeat article focused on needless CPU overhead that could be caused by a networking heartbeat on 3000s. Gilles points out:

Fortunately, there is a very simple way to recognize whether the problem exists, and also a simple cure. If your DTCs are connected without transceivers, you will not be subject to this problem. Otherwise, to determine if you have the problem, simply type the command
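The command itself was lost from this archive. Gilles' description -- an EOF report in which open files carry a trailing asterisk -- matches a LISTF level-2 listing, so a hypothetical reconstruction (the fileset and its group.account location are assumptions, based on the file naming he describes below) would be:

```
:listf h@.pub.sys,2
```

Check the EOF column of that listing for the pairs of DTC files described next.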


In the report that is produced, you will notice OPEN files (ones with an associated asterisk ending the file name); these are 1W in size.

There are two such files associated with each configured DTC, with names starting with the letter H, followed by six characters that represent the last six characters of the DTC MAC address, followed by the letter A or B. The EOF for these files should be 0 and 5 for the "A" and "B" files respectively.

Otherwise, your CPU is being subjected to high-volume, unnecessary IO, all of it requiring CPU attention. The solution is simply to enable SQE heartbeat for each transceiver attached to each DTC. This is done via a small white jumper switch that you should see at the side of each transceiver. Voila, you've just achieved a significant no-cost CPU upgrade.

Complete details are in Gilles' original article. On speeding up backup time, he pointed out that adding an option to the STORE command will help you track IO retries.

We have a DLT tape drive. Lately it wants to take 6-7 hours to do backup instead of its usual two or less.  But not every night,  and not on the same night every week.  I have been putting in new tapes now, but it still occurs randomly. I have cleaned it. I can restore from the tapes no problem. It doesn’t appear to be fighting some nightly process for CPU cycles. Any ideas on what gives?

Something that may be causing extended backup time is excessive IO retries, as the result of deteriorating tapes or tape drive.

One way to know is to add the ;STATISTICS option to your STORE command. This will show you the number of IO retries as well as the actual IO rate and actual volume of data output.
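As a sketch of what that looks like (the tape ldev and the full-system fileset here are hypothetical):

```
:file t;dev=tape
:store @.@.@;*t;statistics
```

The statistics summary printed at the end of the STORE listing includes the IO retry count; a number that climbs from one backup to the next points at deteriorating media or a failing drive.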

Another possibility is that your machine is experiencing other physical problems resulting in excessive logging activity and abnormal CPU interrupt activity — which deplete your system resources and extend backup times.

Check out the files in the following Posix directories:


If they are very large, you indeed may have a hardware problem — one that is not "breaking" your machine, but simply "bending" it.

Posted by Ron Seybold at 10:34 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 11, 2014

Making a few more comparisons of code

It's always a good thing for the community to read about a tool they need and use, because it usually brings up some notes about allied solutions. When we wrote about replacing code comparison tools for developers who work on the 3000, we got several notes about other solutions. One can't be purchased any longer. Come to think of it, neither can the other -- but both of these tools can still be obtained and used in a development environment for HP 3000s.

The first is the much-beloved Whisper Programmer Studio. Bruce Hobbs left us a comment to say that this PC-based dev environment, one built to talk to the HP 3000 and files on the server, "offers a Compare Files item from their Tools menu. It does a fine job in a GUI environment."

Whisper came up in a note that our contributing editor Brian Edminster sent after the story emerged. "I still use it daily at my primary client," Edminster said, while giving us a heads-up he's still looking into how to make Notepad ++ a better player in the MPE development world. 3000 access is a problem to be solved, but Edminster specializes in open source solutions, so we'll stay in touch to see what he discovers.

In the meantime, you can enjoy his rundown on Programmer Studio versus Qedit for Windows.

The other solution for comparing files lies inside MPE/iX itself. That OS is also a product that, like the beloved Whisper, is no longer being sold. (It's being re-sold, however, each time a used 3000 changes hands.) Vesoft's Vladimir Volokh called to remind us of the hidden value inside MPE.

The HP 3000's File Copier, FCOPY, includes a COMPARE option. Vladimir called to remind us (after he mentioned celebrating his 75th birthday last week) that FCOPY COMPARE will only work on a single file at a time. "But with MPEX, you can use it on a file set," he said.

If you're able to log onto a 3000 you can find FCOPY and COMPARE with it. MPEX is for sale, so that makes a complete solution set. Alas for Whisper, it dropped out of the market. The company that built the Studio ended an 18-year run in 2009, according to company founder Graham Wooley. The UK's Whisper built and promoted the Programmer Studio PC-based toolset, selling it as a development environment which engineered exchanges with the 3000 but could be used to create programs under Windows. Robelle responded promptly with a Windows version of Qedit, and the 3000 ecosystem had a lively competition for programming tools for more than five years.

Programmer Studio seems to be available as 1. A free download, or 2. A $299 product, also downloadable; sources include Download A to Z, among other download sites. But with commercial products on hand, we'd urge some caution about downloading free versions of formerly commercial software. Heaven only knows what might come down into a Windows hard drive while looking for something with so much value -- but now being offered for nothing.

Posted by Ron Seybold at 09:33 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 07, 2014

Code-cutter Comparing Solutions for 3000s

When a 3000 utility goes dark — because its creator has dropped MPE/iX operations, or the trail to the support business for the tool has grown faint — the 3000 community can serve up alternatives quickly. A mature operating system and experienced users offer options that are hard to beat.

One such example was Aldon Computing's SCOMPARE development tool, once a staple for 3000-based developers. It compared source files for more than 15 years in the HP 3000 world. Eventually Aldon left the MPE business. But there are a fistful of alternatives. Allegro Consultants offers a free MPE/iX solution in SCOM, available from the downloads page of its website.

At that Web page, scroll down to SCOM. Other candidates included a compare UDC from Robelle, GNU Diff, diff in the HP 3000's Posix environment, and more. If you're willing to go off the MPE reservation -- and a lot of developers work on PCs by now -- there's even a free plug-in for Notepad++, the freeware source code editor that replaces Notepad in Windows. The plug-in is available online as an open source tool.

When the subject first surfaced, Bruce Collins of Softvoyage offered details on using diff in the HP 3000's Posix.

run diff.hpbin.sys;info="FILE1 FILE2"

The file names use HFS syntax so they should be entered in upper case. If the files aren't in the current account or group, they should be entered as /ACCOUNT/GROUP/FILE

Donna Hofmeister offered a tip on using Robelle's compare UDC:

Regarding Robelle's compare.  Being a scripting advocate, I strongly recommend adapting their UDC into a script.... and take a few seconds to add a wee bit of help text to the script, to make life more enjoyable for all (which is the reason for scripting, yes?)

Other environments that might be operating in the 3000 datacenter provide alternatives. Former HP engineer Lars Appel brought up a Linux option in the KDE development environment:

If using KDE, you might also find Kompare handy... (see screenshot)

On MPE, as others mentioned, there is still the Posix diff in two flavours: the HP-supplied in /bin and the GNU version that lives in /usr/local/bin. The former allows two output formats (diff and diff -c); the latter also allows “diff -u”.

Oh, regarding /bin/diff on MPE... I sometimes got “strange” errors (like “file too big”) from it when trying to compare MPE record oriented files. A workaround was to use tobyte (with -at options) to create bytestream files for diff’ing.

Appel has noted the problem of comparing numbered files, like COBOL source files, when one or both files have been renumbered.

With Posix tools, one might use cut(1) with -c option to “peel off” the line number columns before using diff(1) for comparing the “meat”. Something in the line of ... /bin/cut -c7-72 SourceFile1 > BodyText1.
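Appel's cut-then-diff approach can be sketched end to end. The file names and the two-line COBOL fragment below are made up for illustration; the point is that once columns 1-6 are peeled off, two differently renumbered copies of the same source compare as equal:

```shell
# Two copies of the same COBOL paragraph, renumbered differently
# (columns 1-6 hold the sequence numbers):
printf '000100 MOVE A TO B.\n000200 ADD 1 TO B.\n' > SOURCE1
printf '001000 MOVE A TO B.\n002000 ADD 1 TO B.\n' > SOURCE2

# Drop the sequence columns (1-6), keep the code body (7-72), then diff:
cut -c7-72 SOURCE1 > body1
cut -c7-72 SOURCE2 > body2
diff body1 body2 && echo "bodies match"
```

Running this prints "bodies match", since only the line numbers differ.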

Posted by Ron Seybold at 11:25 PM in Hidden Value, Homesteading, User Reports, Web Resources | Permalink | Comments (1)

January 20, 2014

How to convert 3000 packed decimal data?

Independent consultant Dan Miller wrote us to hunt down the details on converting between data types on the HP 3000. He's written a utility to integrate VPlus, IMAGE/SQL and Query for updating and modifying records. We'll let Miller explain. He wants to expand his utility that he's written in SPL -- the root language of MPE -- to include packed decimal data.

Can you tell me how to transfer a packed decimal to ASCII for display, then convert ASCII characters to the corresponding packed decimal data item?

I wrote a utility that integrates VPlus, IMAGE/SQL and Query, one that I used in a Federal services contract for data entry and word processing. Basically, VIQ lets me design a VPlus screen with field names the same as IMAGE data items. From the formatted screen a function key drops you into Query. You select the records to be maintained, specify "LP" as output, and execute the "NUMBERS" command (a file equation for QSLIST is necessary before this). From there, you can scroll thru the records, modify any field, and update. I never marketed it commercially, but I have used it at consulting customer sites.

I recently had occasion to use it at a new customer's site and realized that I never programmed it to handle packed decimal format numbers; the customer has a few defined in their database. Typically, database designers use INTEGER or DOUBLE INTEGER formats for numeric data (which occupy even less space -- the goal of using packed decimal), employing the ASCII/DASCII or BINARY/DBINARY intrinsics for conversion.

I need to discover the proper intrinsics to transfer the packed decimal numbers to ASCII characters and back. I'm sure there's a way, as QUERY does it. In COBOL, I think the "MOVE" converts it automatically, but my utility is written in SPL.

HP's documentation on data types conversion includes some help on this challenge. But Miller hopes that the readers of the Newswire can offer some other suggestions, too. Email me with your suggestions and we'll share them with the readers.

In the Data Types Conversion Programmer's Guide (tip of the hat to HP MM Support), we read about techniques to convert to real data types when working outside of the COBOL library and compiler. From HP's documentation:

To Packed Decimal  

The compiler procedure HPACCVBD converts a signed binary integer to a packed decimal.  The input number is considered to be in twos complement form, from 2 to 12 bytes long. 

Packed-decimal procedures must be declared as intrinsics to be called from within high-level NM languages. In languages other than COBOL and RPG, follow these steps to convert from an input real to a packed decimal:

   1.  Multiply or divide the real number by an appropriate power of 10. 

   2.  Convert the resulting value to an integer. 

   3.  Convert that integer to a packed decimal.

The MOVE command is used to change one decimal to another within COBOL or RPG. But outside of COBOL or RPG, use the compiler library functions HPPACSRD and HPPACSLD to perform right and left shifts on packed decimals. You specify the amount of offset (the number of digits to be shifted).

To convert a packed decimal to a BASIC decimal, you should convert first to a twos complement integer or type ASCII, and then convert to decimal within BASIC with an assignment.  For example, assign an integer value to a decimal with decval = intval * n, where n is the appropriate power of 10.  To convert between ASCII and decimal, use the VAL or VAL$ internal functions.
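For readers who haven't worked with the format: packed decimal (COBOL's COMP-3) stores two decimal digits per byte, with the final nibble holding the sign — conventionally C for positive and D for negative. This little shell sketch illustrates the nibble layout only; it is not an MPE intrinsic, just a demonstration of the encoding, printing one hex character per nibble:

```shell
# Encode a decimal integer as packed-decimal nibbles (one hex char each).
# The last nibble is the sign: C positive, D negative. The digit count is
# padded to odd so digits + sign fill whole bytes (e.g. 01 23 4C).
to_packed() {
  n=$1
  sign=C; case $n in -*) sign=D; n=${n#-};; esac
  digits=$n
  case $(( ${#digits} % 2 )) in 0) digits="0$digits";; esac
  echo "${digits}${sign}"
}

to_packed 1234    # -> 01234C  (three bytes: 01 23 4C)
to_packed -567    # -> 567D    (two bytes: 56 7D)
```

Transferring to ASCII for display is the reverse walk: peel off the sign nibble, then emit each remaining nibble as a digit character.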

Posted by Ron Seybold at 01:55 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 27, 2013

Expert's restore job INSTALLS, RELOADS

Mark Ranft, the IT manager who's been stewarding a farm of 3000s at Navitaire/Accenture for many years, recently sent what he calls a geek gift for the holidays. Ranft, who's also done service in the community under his Pro3k service company, offered a restore job for the 3000 console. The job's extra value is preserving error messages.

Here is an HP 3000 geek present for you! I used to do the first system restore interactively on the console, but would occasionally lose some important error messages as they scrolled by and I wasn't able to look back. So I came up with the following expert tip.

After I boot a system, set up disks, tapes and console access and set up the volumes for the MPEXL_SYSTEM_VOLUME_SET, I copy and paste the file below from a PC text file into the console. Once it's complete, the tellop commands simply spell DONE. I wanted it to show so I would notice it more than a single LOGOFF.

Note: I intentionally added the pause to ensure the tape in LDEV 7 reloads before the job starts.

Editor's Note: For those who might not know, the ">" indicates a redirect to a file; two in a row (">>") indicates an append to the file. (Thanks to Vladimir Volokh for pointing this MPE fundamental out to us.) You can see a version of the job which you can cut and paste online at the 3000 newsgroup archive.
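The job listing itself didn't survive this archive, but the shape of the technique is easy to sketch. Everything below is a hypothetical reconstruction based on Ranft's description — the pause for the tape reload in LDEV 7, the restore with its listing redirected to a disk file so errors can be reviewed afterward, and TELLOP lines that spell out DONE on the console; consult the newsgroup archive version for the real options:

```
:pause 60
:file t;dev=7
:restore *t;/;keep;olddate;partdb;show >restlist
:tellop DDD   OOO  N   N EEEE
:tellop D  D O   O NN  N E
:tellop D  D O   O N N N EEE
:tellop D  D O   O N  NN E
:tellop DDD   OOO  N   N EEEE
```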


Posted by Ron Seybold at 02:26 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 18, 2013

Store to Disk preserves backups' attributes

By Brian Edminster

Second of two parts

Yesterday I outlined some of the powers of the Posix program pax, as well as tar, to move MPE/iX backup files offsite. Here's a warning: there are some file types that cannot be backed up by tar/pax while also storing their attributes — ;CIR (circular) and ;MSG (message) files, and possibly others (I haven't tested all possible file types yet). Also, there is a fairly well-known issue with tar that has been discussed on the 3000 newsgroup: occasionally it does not un-tar correctly. It is unclear if and when this was fixed, but I'd love to hear from anybody who might be in the know, or who knows which specific situations to avoid.

Regardless of these limitations, I’ve found a simple way around this. Use store-to-disk to make your backup, then tar to wrap it, so as to preserve the store-to-disk files’ characteristics, before shipping the files off-system. Later, when you retrieve your tar backups and un-tar them, you’ll get your original store-to-disk files back without having to specify the proper ‘;REC= , CODE= , and DISC=’ options on an FTP ‘GET’. I’ve been doing this for several months now on several systems, and I have not had any failures.
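A hypothetical sketch of that sequence — the file and directory names here are mine, not from a real site, and the store-to-disk file equation will vary with your MPE/iX release:

```
:file std=/BACKUPS/FULLSTD;dev=disc
:store @.@.@;*std;show
:run sh.hpbin.sys;info="-c 'tar -cf /BACKUPS/fullstd.tar /BACKUPS/FULLSTD'"
```

The tar wrapper records the store-to-disk file's MPE attributes, so after the round trip off-system and back, un-tarring yields a store-to-disk file that RESTORE will accept, with no ;REC=, CODE=, or DISC= gymnastics on the FTP GET.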

If you have a version of STORE that has compression, use it to reduce the size of the backup. If not, use the ‘z’ option when creating the tar/pax archive from your store-to-disk backup. Do not use both: they don’t play well together, and you may end up with a larger tar file.

But what about the tar archive size limit of 2GB? There’s an easy way around this as well, since the limit is common on early Unix and Linux systems: just pipe the output through ‘split’ to create chunks of whatever size you want, in both directions.
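The original post's example figure didn't survive this archive, but the pattern is straightforward. The names and the deliberately tiny chunk size below are hypothetical stand-ins; in practice the input would be a store-to-disk file and the chunk size something like 1-2GB:

```shell
# Build a small directory to archive (stand-in for a store-to-disk file).
mkdir -p stdset && printf 'backup payload\n' > stdset/CS1STD1

# Outbound: tar to stdout, split into fixed-size chunks (.aa, .ab, ...).
tar -cf - stdset | split -b 512 - stdset.tar.

# Inbound: cat the chunks back together in glob order and un-tar.
rm -rf stdset
cat stdset.tar.* | tar -xf -
cat stdset/CS1STD1    # -> backup payload
```

Because split names the chunks in ascending suffix order, the shell glob reassembles them in the right sequence for the inbound pipe.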

Figure 1, just below, is an example of the ‘cksum’ file produced.


Below, Figure 2 is an example of a ‘cksum’ created of the files as they’re stored on the NAS. 

As both the hashes and byte counts shown in each file are the same as on the MPE/iX server, we know the backups transferred correctly. The same technique can be used ‘in reverse’ to verify that, when FTP’d back, the files are still intact.

When un-taring this backup, ‘cat’ the pieces together and pipe them through tar. At least, that’s the way it’s supposed to work. Yes, there is a known issue with the MPE/iX Posix shell’s built-in cat command — but so far I’ve been unable to cat the chunks with the external cat command either. Here’s how this should work for a 2-chunk tar backup:

sh>/bin/cat ./CS1STD1.ustar.aa ./CS1STD1.ustar.ab | tar -xfv - *

Unfortunately, for me at least, it always throws an error indicating bad format for the tar files. There is a work-around, however. Note that while ‘cat’ing the tar ‘chunks’ didn’t work using the internal or external cat command, un-tarring with the multi-file option does work. Even though it gives a minor error message, the files were returned to proper store-to-disk format, and the recovered store-to-disk backup is intact and has been used to recover the desired files. To do this, use tar like this:

sh>tar -xfv ./CS1STD1.ustar.aa *  

Also note that when using tar in this way, it will ask for the name of the 2nd through nth component tar files as it finishes reading each prior piece. You must give the filename and press return to continue for each. I believe it should be possible to script this so that tar is fed the filenames, but I haven’t gotten around to doing that yet.

Brian Edminster is president of Applied Technologies.

Posted by Ron Seybold at 10:06 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 17, 2013

HP 3000 Backup Files, On the Move

By Brian Edminster 

First of two parts

Once store-to-disk backups are regularly being processed, it’s highly desirable to move them offsite — for the same reasons that it’s desirable to rotate tape media to offsite storage. You want to protect against site-wide catastrophic failures. It could be something as simple as fire, flood, or a disgruntled employee, or as unusual as earthquake or act of war.

Regardless of the most pressing reason, it really is important to keep at least some of your backups offsite, so as to facilitate rebuilding / recovering from scratch, either at your own facility, or at a backup/recovery site.

The problem is that the MPE/iX file system is far more structured than Unix, Windows, or any other non-MPE/iX storage mechanism. While transferring a file off MPE/iX is easy via FTP, sftp/scp, or rsync, retrieving it is problematic — at least if you wish the retrieved files and the original store-to-disk files to be identical (i.e., with the same file characteristics: filecode, recsize, blockfactor, type, and so forth).

What would be optimal is automatic preservation of these attributes, so that a file could be moved to any offsite storage that could communicate with the MPE/iX system. Posix on MPE/iX comes to the rescue.

For FTP transfers between late-model MPE/iX systems this retrieval is automatic, because the FTP client and server recognize each other as MPE/iX systems.  For retrieving files from other systems, HP has made things somewhat easier by making its FTP client able to specify ‘;REC= , CODE= , and ;DISC=’ on a ‘GET’:
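The figure showing the syntax didn't survive this archive. A hypothetical example, with made-up file characteristics, would look something like the following — the record size, blocking factor, file code, and file limit shown are illustrations, not values from any real backup:

```
ftp> get FULLSTD FULLSTD;rec=128,16,f,binary;code=2501;disc=500000
```

The buildparms after the semicolon re-create the retrieved file with the record size, blocking factor, record format, file code, and file limit it had on the source system.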

If you do not specify the ‘buildparms’ for a file being retrieved, it will default to the file type implied by the FTP transfer mode: ASCII (the default), binary, or byte-stream (often called ‘tenex’ on Unix systems).  The respective defaults used are shown below:

For everything else, this is yet again where Posix comes to the rescue, via the venerable ‘tar’ (Tape ARchiver) and ‘pax’ archiving utilities.

‘pax’ is a newer backup tool, designed to read and write both classic tar-format archives and the newer ‘ustar’ format (which includes Extended Attributes of files). It also has a more ‘normal/consistent’ command syntax (as Unix/Posix tools go, anyway), plus a number of other improvements. Think of it as tar’s younger (and supposedly more handsome) brother.

A little-known feature of most ‘late-model’ tar commands, and of all pax commands, is the ability to recognize and use Extended Attributes.  These vary with the target platform, but for the tar and pax commands included with MPE/iX releases after 5.5 this capability is not only present — contrary to the man command’s output and HP’s Posix Command Line manual, it’s the default. You use the -A switch to turn it off, returning tar to a bytestream-only tool.
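As a sketch (the archive and directory names are hypothetical), writing and reading an attribute-preserving archive from the Posix shell needs nothing special, since the Extended Attribute behavior is the default; only suppressing it takes a switch:

```
sh> tar -cf /BACKUPS/fullstd.tar /BACKUPS/FULLSTD
sh> tar -xf /BACKUPS/fullstd.tar
sh> tar -Acf /BACKUPS/plain.tar /BACKUPS/FULLSTD
```

The first two commands carry the MPE attributes along; the third, with -A, produces a bytestream-only archive.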

While not externally documented, via a little experimentation I’ve determined that the following Extended Attribute value-pairs appear in the MPE/iX Posix implementation of a tar or pax ‘file header’ for each non-Posix file archived:

MPE.RECORDSIZE= value in bytes
MPE.BLOCKFACTOR= integer value
MPE.RECORDFORMAT= integer value (0=unstructured?)
MPE.CCTL= integer value (0=nocctl)
MPE.ASCII= integer value (0=binary, 1=ascii)
MPE.FILECODE= integer value, absent for ‘0’
MPE.FILELIMIT= value in bytes
MPE.NUMEXTENTS= integer value, may be absent
MPE.NUMUSERLABELS= integer value (0=no user labels), and
MPE.USERLABELS=[binary content of user labels]

Posted by Ron Seybold at 04:48 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 04, 2013

A-Class servers bid to retain some value

When HP released the A-Class HP 3000 models, the computers represented a new entry point for MPE servers. This lowest-end machine, including an MPE/iX license and the IMAGE/SQL database, sold at retail for $15,900. It ran about 70 percent faster than the 3000's previous low end unit, the Series 918. The customer base was hungry for something this small. HP product manager Dave Snow walked the first one down the aisle at the SIGPROF user meeting.

That was more than 12 years ago. The A-Class was built upon PA-RISC processors, chips that are several generations behind HP's latest Itanium-class CPUs. You might expect that the A-Class boxes could be worth less than one tenth of what they sold for during the year that HP curtailed its 3000 plans.

Cypress Technology has three of these A-Class servers available via eBay, selling them for $3,400 each. They've been out on the auction website for a while now -- more than 10 days -- but the Buy It Now price hasn't come down. So far, the sellers are still arranging a transferable license for these boxes. That's something that runs up the price of a used 3000. But then, so can the extras.

Let's pause here for a moment and consider the value retention of this piece of IT equipment. A robust PC, tricked out at the top end of 2001 technology, couldn't even manage the price of a doorstop in today's marketplace.

Take HP's fastest laptop of 2001, the Omnibook 6000. Listed at a minimum of $1,799 on its release, the computer

...combines the power of Intel's fastest mobile processor with HP's tradition of providing reliable, manageable, stable, secure and expandable products. Its sleek styling with magnesium alloy cover, rubberized corners and grips, and spill-resistant keyboard, help make this a durable machine that holds up well for people on the go. 

Today on the same eBay website, that $1,799 computer is selling for $95. You can get one as cheap as $40.

HP's computers, whether laptop or rack-mounted, were built to last with above-the-norm components. No, you won't mistake the drives and memory in that Omnibook with those that have the quality of an A400-100-110 HP 3000. But after a dozen years, without a license that would satisfy an auditor, the 3000 sells for more than 20 percent of its list. The Windows-based laptop, portable in a way that only the 3000 user could dream about, is selling for about 5 percent.

These A-Class systems each have a 9GB boot disk (yeah, smaller than a thumb drive's capacity) and a 300GB main storage disk, along with a whopping 2GB of RAM. The sellers report that they're working on getting an auditor-happy license for the pre-installed MPE/iX 7.5 on the A-Class, too.

These came from HP as part of the e3000 trade in program. I am still in the process of getting all the license transferred on all these A400 and A500 boxes that we got. So to answer your question about a licensed copy of MPE/iX, not yet but yes soon, hopefully.

HP took the value protection of its 3000 line a little too seriously. The horsepower of these A-Class boxes was hobbled by MPE/iX, so a chip that ran at 440 MHz was made to perform at 110 MHz. But with MPE/iX as its core value, and given that these were the ultimate generation of HP-crafted 3000s, several thousand dollars for trade-in servers more than a decade old proves a point about value protection.

When you can find someone offering an Omnibook for $195, running the latest Linux and PostgreSQL installed, you'll have something to compare.     

Posted by Ron Seybold at 08:23 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 03, 2013

When MPE's Experts Vied at Trivial Pursuit

As the range of expertise on MPE and the 3000 continues to wane, it's fun to revisit a time when knowing commands could make you a leader in a community. The archives of the Newswire run rich into an era before MPE's RISC version, when MPE V was the common coin of data commerce. In those times, regional user group members gathered in person once a year. One such group, the Southern California Regional User Group (SCRUG) mounted a conference so elaborate that it hosted its own Trivial Pursuit version for MPE. Six months before anybody could boot up a PA-RISC 3000, I reported on a showdown between the leading lights in March, 1987 -- a contest moderated by Eugene Volokh in his heartland of SoCal.

PASADENA, Calif. -- It took 10 of the sharpest wits in the HP world to provide it, but entertainment at the SCRUG conference here became a trivial matter for an hour. The prizes were limited to bragging rights, laughter from insiders, and a useless bit of plastic which everybody had and nobody needed.

Vesoft's Eugene Volokh moderated the first all-star HP Trivial Pursuit at the conference, as nine top programmers matched wits with each other and Volokh's list of questions. Correct answers drew a small, round reward: mag tape write rings. "Because," said Volokh, "there is no other use for them."

Competing on four different teams were some of the better-known names from HP's history. Adager's Fred White and Robelle's Bob Green were on hand; local developer Bruce Toback of OPT and Bradmark's David Merit represented the Southern California contingent; Fastran's Nick Demos was on hand from the East Coast, along with Vesoft's Vladimir Volokh adding his Russian wit; and SPLash savants Stan Sieler, Steve Cooper and Jason Goertz made a prominent showing from Allegro and Software Research Northwest.

The questions, like all good trivia, covered HP's most arcane and obscure knowledge of the 3000's OS. Several stumped the teams. For example, "What's the highest alphabetical MPE command, with A as the lowest and Z as the highest?" Green offered VINIT as an answer, but he was told WELCOME was correct.

"No fair," Green said in protest. "They didn't have that one when I started on the 3000."

There were others more obscure, but less difficult for the panel: the product number of MPE (HP 32002), the distinguishing feature of the 2641 terminals (an APL command set), and the product which preceded V/3000 (DEL/3000, for Data Entry Language).

Non-technical trivia was also included. One that had to be answered by the audience was "What does the HP stand for in HP Steak Sauce?" (House of Parliament). And on one question, Eugene himself was humbled by an overlooked answer. He'd asked what four MPE commands can only be executed by the file's creator. The panel found RELEASE, RENAME, ALTSEC and SECURE. But a crowd member said, "There's one more."

"One more?" said Volokh.

"Think about it -- BUILD," came the reply from the crowd.

HP's history offered some political wit in one question. After asking what post David Packard held in the US government (Assistant Secretary of Defense) Volokh added, "and what years did he serve?"

Green, a Canadian, quipped back "In the Nixon administration, which was too long, to be sure."

As the laughs subsided, the Soviet-born US citizen-moderator chided back, "Now, we'll not have foreigners commenting on our government."

But it was an exchange including father and son of that Volokh family that showed the beneficial byproduct of the contest -- expanding the knowledge of HP's engineering roots. Eugene asked the panel, "What is the earliest date of the century the DATELINE intrinsic works with?" A first answer came from the panel, and then Vladimir answered with March 1, 1900.

His son then gave the correct answer: Feb. 29, 1900. "It incorrectly assumes that 1900 was a leap year," Eugene said. "I should know, since Feb. 29 is my birthday."

Posted by Ron Seybold at 04:06 PM in Hidden Value, History | Permalink | Comments (0)