September 03, 2019

ERP Tips: Using work orders to backflush

Photo by Samuel Sianipar on Unsplash

MANMAN still runs operations at companies around the world. Not a lot of companies, of course. It's 2019 and everything is smaller in size, not just your hearing aids. MANMAN managers are still looking for tips, though. Here's one, sparked by a question from James English, senior systems analyst at Altra Industrial Motion Corp.

We are on MANMAN version 9.1 on an HP 3000. We have all MANMAN modules, including MANMAN/Repetitive. Is it possible to backflush work orders without using Repetitive? Our one manufacturing location is looking at simplifying work order transactions. They are manually transacting each operation on their work orders, even though they don’t collect actual hours.

Short question: How can they use work orders instead of using Repetitive?

When a work order has been received into stock, it comes to the scheduler-planner to push the times through each sequence, since the operation no longer does time cards. Once that time-pushing is done, the work orders are closed for material and labor. Once a work order is received into FG, instead of pushing the time through each operation, could we just backflush?

Alice West of Aware Consulting says

You can set all the components on your bill as “consumable” and then when you complete the WO the system will consume all the materials.  We always called this feature “poor man’s Repetitive.” 

However, it sounds like you are trying to simplify the labor portion of the transaction.  For that, you can look at your COMIN variable settings. Here is a chart I put together to show how 3 different variables work together.

[Chart: how the three COMIN variables work together for backflushing]

So to combine the operation movements with the movement to stores, you probably want to set your COMIN variables as follows:

  • 93 to 0
  • 121 to 1
  • 177 to 1

You can also put all the hours on the routing onto the last operation and then only transact that one sequence.  In order to earn the correct labor cost you only need to transact the sequences that have standard labor hours.  One reason users transact every step is to get an operation status, but since you’re transacting the movements after the fact you clearly don’t need that.

Terry Floyd of The Support Group also says

I wrote a mod years ago that is/was installed at several MANMAN/MPE sites.  I called it TR 3-eightly-thru (because it was originally from TR302 and does a “move-thru”).  It’s a bunch of code down inside subroutines under GENTRS; when it prompts Next Sequence and you enter the last Operation Sequence (or any Operation Sequence out of order) it prompts “Move Thru?” and if you answer YES, it performs all of the moves through the subsequent sequences as if you had done them one at a time.

So, instead of changing all of your routings (that might be a lot of work), you could use this version of TR302, do a lot less data entry, and still earn standard labor dollars.  I still have this code for Releases 9 and 11, Jim, but it’s not a Standalone and requires re-linking all of SYSMAN.  I could probably turn it into a Standalone and make it a lot easier to install.

Posted by Ron Seybold at 10:52 AM in Hidden Value, Homesteading | Permalink | Comments (0)


July 25, 2019

Using VSTORE to verify backups

The VSTORE command of MPE/iX has a role in system backup verification. It's good standard practice to include VSTORE in every backup job's command stream. VSTORE is documented in the manuals for the OS release that introduced it: MPE/iX 5.0.

If possible, do your VSTOREs on a different (but compatible) model of tape drive than the one the tape was created on. Why? DDS tape drives slowly go out of alignment as they wear.

In other words, it's possible to write a backup tape and have it VSTORE successfully on the same drive. But if you have to take that same tape to a different server with a new, in-alignment drive, it might not be readable.

If you'll only ever need to read tapes on the same drive as you wrote them, you're still not safe. What happens if you write a tape on a worn drive, have the drive fail at some later date -- and that replacement drive cannot read old backup tapes?

Using the 'two-drive' method to validate backup (and even SLT) tapes is a very prudent choice, if you have access to that array of hardware. It can also often help identify a drive that's going out of alignment -- before it's too late! 

Unfortunately, SLTs have to be written to tape (at least, for non-emulated HP 3000s). However, your drive will last years longer if you only write to it a few times a year.
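
As a minimal sketch of that standard practice, a backup job that verifies the tape it just wrote might look like this (the job name and the full-system fileset are assumptions; adjust for your own backup):

!JOB FULLBACK,MANAGER.SYS
!FILE T;DEV=TAPE
!COMMENT Store everything, then re-read the same tape to verify it.
!STORE /;*T;SHOW;DIRECTORY
!VSTORE *T;/;SHOW
!EOJ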

You can find HP's VSTORE documentation page from the HP STORE command manual on the Web (thanks to 3K Ranger for keeping all those pages online).

Posted by Ron Seybold at 07:01 PM in Hidden Value | Permalink | Comments (0)

July 16, 2019

Wayback: Security boosts as enhancements

They weren't called enhancements at the time, but 13 years ago this month some security patches to MPE represented internal improvements that no company except HP could deliver to 3000s. Not at that time, anyway. This was the era when the 3000 community knew it needed lab-level work, but its independent support providers had no access to source code.

Just bringing FTP capability up to speed was a little evidence the vendor would continue to work on MPE/iX — for the next few years, at least. HP had stalled OpenMPE's dream of staffing a source code lab by delaying its end of support until 2008, announcing a couple more years of its support for 3000 customers.

In doing that, though, HP made an assignment for itself with the support extension, the first of two given to the 3000 before the MPE lab went dark in 2010. That assignment was just like the one facing today's remaining HP 3000 customers: figure out how to extend the lifespan of MPE expertise in a company.

FTP subsequently worked better in 2006 than it had in the years leading up to it. It's not an arbitrary subject. FTP was the focus of a wide-ranging online chat in May. Did you know, for example, that FTP has a timeout command on MPE/iX?

The connection time-out value indicates how long to wait for a message from the remote FTP server before giving up. The allowable range is 0 to 3000. A value from 1 to 3000 indicates a time-out value in seconds. A value of 0 means no time-out (i.e., wait forever). If num-secs is not specified, the current time-out value will be displayed. Otherwise, this command sets the connection time-out to num-secs seconds.

When an FTP job gets stuck, using timeout can help.
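
Here's a quick sketch of the command in an MPE/iX FTP client session (the 30-second value is only an illustration):

:FTP.ARPA.SYS
ftp> timeout 30
ftp> timeout

The second timeout, entered with no value, displays the current setting, per the documentation above.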

MPE/iX engineers and systems managers were working more often in 2006 than they do today. When anybody who uses MPE/iX finds a 3000 expert still available, they need to get in line for available work time. It remains one good reason to have a support resource on contract. A company relying on a 3000 shouldn't be thinking a mailing list or a Slack channel represents a genuine support asset. Even if that FTP tip did arrive via the 3000-L.

The resource of good answers for crucial questions gets ever more rare. The 3000-L mailing list has rarely been so quiet. There are information points out there, but gathering them and starting a discussion is more challenging than ever. File Transfer Protocol is pretty antique technology for data exchange. It turns out to be one of the most current standards the 3000 supports.

Before the boosts, FTP services had lagged behind the rest of the world's FTP for some time. The final patches for MPE/iX 7.5, 7.0 and 6.5 improved FTP in several areas. HP said in 2006 it made SOX compliance easier. It was an era when people cared about SOX and HP still wanted 3000 customers to be able to meet the Sarbanes-Oxley protocols.

The final security enhancements to FTP/iX are in patches FTPHDG9 (7.5), FTPHDH0 (7.0), and FTPHDH1 (6.5). The patches include:

  • deny del, overwrite, rename
  • chroot limiting cd, dir, put, get, mput, mget
  • new enhanced semantics for 'noretrieve'

Patch IDs are:

FTPHDH1(A) for 6.5
FTPHDF5(A) for 7.0 ** note, this patch is for some general FTP fixes
FTPHDH0(A) for 7.0
FTPHDG9(A) for 7.5

Posted by Ron Seybold at 08:26 PM in Hidden Value, History, News Outta HP | Permalink | Comments (0)

June 27, 2019

Make that 3000 release a printer grip

A printer connected to our HP 3000 received a "non-character" input and stopped printing. The spooler was told to stop in order for the queue to be closed and restarted. When we do a show command on that spooler, it reports "*STOP .......CLOSING CONN". How do I force a close on the connection? The HP 3000 is used so much it can't really be shut down any time soon.

Tracy Johnson says

If it is a network printer, just "create" another LDEV with the same IP. The 3000 doesn't care if you have more than one LDEV to the same IP (or DNS). Raise the outfence on the original LDEV. Once created, do a SPOOLF of any old spool files on that LDEV to the new LDEV. You can do it in a job that reschedules itself if it persists. The first spool file still in a print state will probably be stuck, but this technique should fix subsequent spool files. The situation probably won't go away until the next reboot.
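
A hedged sketch of that workaround, assuming the stuck printer is LDEV 6 and the newly created twin is LDEV 106 (both numbers are placeholders, and HELP SPOOLF will confirm the selection syntax on your release):

:OUTFENCE 14;LDEV=6
:SPOOLF O@;SELEQ=[DEV=6];ALTER;DEV=106

The OUTFENCE keeps new output from trying to print on the dead device; the SPOOLF moves the waiting spool files over to its replacement.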

We've had our full backup on Friday nights abort several times and are not really able to discern why; sometimes it works while other times it doesn't. As a test/fix, we're swapping out the “not very old DLT tape” for a brand new DLT tape to see if that makes a difference. Our daily, partial backups work just fine—each day has its own tape.

Mark Ranft says

Let's talk tapes. How old are these unused new tapes? From my experience, new tapes and old tapes both have issues. I would not call a tape that was manufactured years ago, but hasn't been used, "New." It is still an old tape. But an unused tape will have microscopic debris from the manufacturing process. It may work just fine, but be prepared for more frequent cleaning if you are using unused tapes.

Old tapes are tried and true. That is, until they start stretching and wearing from overuse. If it was my STORE that failed, I would start by cleaning the drive. And cleaning cartridges can only be used a specific number of times. That is why they come with the check-off label. After the allowed number of cleanings, you can put them in the drive but they don't do anything.

I was told by a trusted CE friend that cleaning a drive three times is sometimes necessary to get it working again. I don't know the science behind it, but that process did seem to save my behind more than once. After cleaning, do a small test backup and a VSTORE. Try to read (VSTORE) an old tape.

Posted by Ron Seybold at 03:33 PM in Hidden Value, Homesteading | Permalink | Comments (0)

April 12, 2019

Fine-Tune: Creating Store to Disc from tape

NewsWire Classic

I still have some 3000 information on a tape. I’d like to create a Store to Disc file with it — how do I do that?

Jack Connor replies:

There are several solutions. The first and easiest is to simply restore the info to a system (RESTORE *T;/;SHOW;CREATE;ACCOUNT=WORKSTOR) where WORKSTOR is an account you create to pull the data in.

Then a simple FILE D=REGSFILE;DEV=DISC and STORE /WORKSTOR/;*D; with whatever else should create the disc store.
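
Put together, that first-and-easiest method is a short sequence (WORKSTOR and REGSFILE are the example names from above, and the tape file equate is assumed):

:FILE T;DEV=TAPE
:RESTORE *T;/;SHOW;CREATE;ACCOUNT=WORKSTOR
:FILE D=REGSFILE;DEV=DISC
:STORE /WORKSTOR/;*D;SHOW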

The second method is to use FCOPY. You'll have to research the STORE format, but I believe it's FILE TAPEIN;DEV=TAPE;REC=8192,,U,BINARY.

The third (also easy, but you need the software) is to use Allegro's tool TAPECOPY, which moves from tape store to disc store and back.

John Pitman adds:

Do you mean copy it off tape to a disk store file? I’m not sure if that can be done, as in my experience of tapes, there is a file mark between files, and EOT is signified by multiple file marks in a row... but anything may be possible. If you do a file equate and FCOPY as shown below, you should be able to look at the raw data, and it should show separate files, after a file list at the front.

FILE TX;DEV=TAPE;REC=32767
FCOPY
FROM=*TX;TO=;CHAR;FILES=ALL

Here is our current store command. MAXTAPEBUF speeds it up somewhat:

STORE  !INSTOREX.NEW.STOCK2K;*DDS777;
FILES=100000;DIRECTORY;MAXTAPEBUF

Posted by Ron Seybold at 04:52 PM in Hidden Value, Newswire Classics | Permalink | Comments (0)

March 25, 2019

Making Directories Do Up To Date Duty

Last week we covered the details of making a good meal out of LDAP on an MPE system. Along the way we referred to an OpenLDAP port that made that directory service software useful to 3000 sites. The port was developed by Lars Appel, the engineer based in Germany whose work lifted many a 3000 system to new levels.

Appel is still working on 3000s from time to time. We checked in with him to learn about the good health of LDAP under MPE/iX.

Is this port still out in the world for 3000 fans and developers to use?

Well, I don't recall if anyone ever used it (and I must admit that I don't recall off the top of my head what drove me to build it for MPE/iX at that time... maybe just curiosity). However, the old 1.1 and 2.0.7 versions are still available at the website maintained by Michael Gueterman, who is still hosting my old pages there.

The versions are — of course — outdated compared to the current 2.4.x versions at openldap.org. But anyone with too much spare time on their hands could probably update the port.

But it's still useful?

Funny coincidence, though. Just yesterday, I had to use a few ldapsearch, ldapadd, and ldapmodify commands against our Linux mail server. If I had seen your mail two days ago, I could probably have looked up examples in my own help web pages, instead of digging up syntax in some old notes and man pages.

And you're still working in MPE?

I am still involved with Marxmeier and Eloquence, so it is more with former HP 3000 users than with current ones.

Posted by Ron Seybold at 09:12 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 22, 2019

Making LDAP Do Directory Duty

Explore a 3000 feature to see how a little LDAP’ll do ya

NewsWire Classic

By Curtis Larsen

When you think of LDAP, what do you think of? You’ve probably heard about it — something to do with directories, right? — but you’re not quite sure. You’ve heard some industry buzz about it here and there, read a paper or two, but perhaps you still don’t quite know what it can do for you, or how it could work with an HP 3000. Hopefully this article will de-mystify it a bit for you, and spark some ways you could use it in your own organization.

MPE currently has limited support for LDAP, but the support is growing. Aside from the OpenLDAP source ported by Lars Appel, HP offers an LDAP “C” Software Development Kit for writing MPE/iX code to access directories, er, directly.

LDAP stands for “Lightweight Directory Access Protocol.” In a nutshell, it allows you to create directories of information similar to what you would see in a telephone book. Any information you want to store for later quick retrieval: names, telephone numbers, conference room capacities, addresses, directions — even picture or sound files. Using directories such as these is an incredible time-saver (can’t you think of company applications for one already?), but LDAP can do so much more. The directories you create are wholly up to you, so the sky’s the limit.

At this point you might be saying “Great, but why not use a database for this stuff?” That’s an excellent question, and in truth, there is some overlap in what you might want stored in a database versus being stored in a directory. The first and foremost difference between them is that a directory is designed for high-speed reading (and searching) — not writing.

The idea is that, generally speaking, a directory doesn’t change much, but quickly reading its information is a must. Understand that this doesn’t mean that directory writes are at all bad — they’re just not structurally designed to be as fast as reads are.

Databases also require more in the way of overhead: high-powered servers and disks, (usually) high-priced Database Management Systems — which one will be best for you? — and highly-skilled, highly-paid DBAs to keep it all happy. (Our DBA said I had to mention that part.)

LDAP directories are generally simpler and faster to set up and manage. LDAP is (also) a common client-server access standard across many different systems. You don’t have to deal with the outrageous slings of one DBMS, or the delightful syntax variations in SQL or ODBC implementations. LDAP directories can even be replicated. Copies of directories, or just sections of larger directories, can be stored on different servers and updated (or cross-updated) periodically. This can be done for security (“mirrored directories” — one here, one elsewhere), performance (all queries against local entries on a local server), or both.

Let’s dig more into what LDAP directories can do. I won’t get into any real technical details about syntax, history, specifications, etc. Better explanations of these things have already been written by folks far more knowledgeable than I am. You’ll find some links at the end of this article that you can use to understand more.

Practical LDAP ideas

For most, adding all the different network and systems logons can be a tedious task. Add to that the different options on each system (like VESoft security on e3000s) and you have a necessary, but time-consuming chore. With an LDAP directory maintained by your Human Resources folks, regular batch jobs can query that directory and perform the add/change/delete maintenance automatically. The jobs can even update the directory with the new logon information.

Wouldn’t it be nice to have different batch jobs send various e-mail (or faxes, or pages, or…) to different folks about something? Using that same HR LDAP directory, any job can look up someone’s current contact info and send them that message, page, fax, etc. Your Admins, Ops, and Help Desk folks are now on to bigger things, and no more need for maintaining separate KSAM or Image tables on the HP system.
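
For instance, such a job might pull someone's current contact info with the ldapsearch client from the OpenLDAP port. This is only a sketch; the host, base DN, and attribute names are invented for illustration:

ldapsearch -h hrldap.example.com -b "ou=People,o=MyCorp" "uid=jsmith" cn mail telephoneNumber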

Here’s a bit of a mind-bender: How about “personalized printing”? Same as the last example, but what if an LDAP directory stored the list of printers your new hire will (generally) use as well? Now all your systems — 3000 included — can direct printed output based upon a person — not a destination. If they change locations, just change the directory entry. We can get even more esoteric and say “logical entity” instead of “person.”

Suddenly, you have the ability to address output to groups of people (“Accounting’s printer,” fax or e-mail addresses), other computer systems, etc. Get a little wilder and you can even have the LDAP directory describe the format for the addresses used. (Batch job to LDAP server: “I have this color print file on legal paper for ‘Bob’ — where should I send it?”) Really, it’s just a small step past device classes.

Interesting stuff, eh? All of these things are possible this very instant. Yes, they do require some preparation and support work (but then every new technology does). Let’s try another idea:

You say you want system-wide variables? How about enterprise-wide variables? Using LDAP, practically any type of system can share information with any other system. You would want those variables to be fairly static (that “reads are better” thing), but certainly a central repository for something like cross-system daily scheduling information (dates don’t change much) or summary totals could be a handy thing. “Variable” states can be retained as long as needed, too.

Okay, let’s explain a little more about that last one. LDAP directories are made up of basically three things: containers, variables and values. Containers can hold either variables or other containers, and variables have values. (There’s a little more to it than this, but it will hold us for now.) Using those elements, the LDAP directory you create looks very much like the classic tree structure used in most file systems (like Unix) or even DNS.

An example of this might be a “root” container for your company, holding a container object named “USA,” holding one named “New York.” In “New York” is a “dept.” container named “Accounting,” and in that is one named “P. Strings.” That object type might be “person,” and it, in turn, holds all sorts of contact information. You could just as easily have a container named “Jobs,” holding one named “Schedule X,” wherein we could find lots of information on Schedule X, like its completion time and status.
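
In LDAP’s text interchange format, LDIF, that entry might be sketched like this (the DN components and attribute values are illustrative, not a schema recommendation):

dn: cn=P. Strings,ou=Accounting,l=New York,o=MyCorp,c=US
objectClass: person
cn: P. Strings
sn: Strings
telephoneNumber: +1 212 555 0100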

For the object-oriented among you, you can also think of a container as an object, and variables as properties of that object. This makes LDAP very workable with OOP. Perl likes it, Python likes it — shucks, even C++ thinks it has class.

This idea is a bit stretched, but: persistent, cross-platform objects/object classes. You can use an LDAP directory as a base class template library and retrieve certain directory sections into a program for later use. Use them as a library of other things as well.

LDAP directories fit DOM fairly well, so you can use LDAP to store DTDs and do other work with XML. (“Does anyone know where the DTD for getting EDI XML data from ‘Foobar, Inc.’ is?” “Yeah, it’s in the LDAP directory under ‘DTDs, EDI, Foobar’ — or just do a search for ‘Foobar, Inc.’”)

I hope this article gives you some ideas about what you might want to do with an LDAP server or two accessed by your HP 3000. LDAP, like Samba and Apache before it, is yet another example of innovative technology working upon the rock-solid stability of the HP 3000 system. File and print server, web server, and now LDAP — who says the 3000 can’t share?

Posted by Ron Seybold at 06:27 PM in Hidden Value, Homesteading, Newswire Classics | Permalink | Comments (0)

March 15, 2019

Samba, and making it dance on MPE/iX

HP 3000 sites have the Samba file sharing system, a universal utility you find on nearly every computer.

Samba arrived because of two community coding kings: Lars Appel, who ported the Samba open source package to the 3000, and Mark Klein, who ported the bootstrap toolbox to make such ports possible. As John Burke said in the sunnier year of 1999:

Without Mark Klein’s initial porting of and continued attention to the Gnu C++ compiler and utilities on the HP 3000, there would be no Apache/iX, syslog/iX, sendmail/iX, bind/iX, etc. from Mark Bixby, and no Samba/iX from Lars Appel. And the HP 3000 would still be trying to hang on for dear life, rather than being a player in the new e-commerce arena.

So Samba is there on your HP 3000, so long as you've got an MPE version minted during the current century. Getting started with it might perplex a few managers, like one who asked how to get Samba up on its feet on his 3000. One superb addition is SWAT, the Samba administration tool. Yup, the 3000's got that, too.

As a total network newbie, I tried to get Samba up and running from the directions at docs.hp.com, but failed miserably. Do you need Samba running before you can run SWAT? Where can I find the instructions for Samba on the 3000 for the complete idiot?

When this user asked how to do the Samba, OpenMPE director Donna Hofmeister shared some steps to walk through to get it started.

1. What OS version are you on?  And Samba’s version?
2. When you click on explore your network on your PC (aka network neighborhood, in old-speak) is there a domain/workgroup name?  Did you add that to your smb.conf file?  Is your MPE server on the same network as the rest of your servers?
3. Did you edit smb.conf with vi or another bytestream-friendly editor?
4. In smb.conf, is interfaces set correctly to your MPE system’s IP address and mask?
5. You probably don’t want to be a domain master or preferred master.

Samba is generally pretty easy to get started.  There’s not a whole lot to change in the smb.conf file (adding logons is a bit different though). A few minor changes and Samba should start up.
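
Pulling her checklist into one place, a minimal smb.conf global section might look like this (the workgroup name, IP address, and mask are placeholders for your own values):

[global]
    workgroup = MYGROUP
    interfaces = 192.168.1.10/255.255.255.0
    domain master = no
    preferred master = no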

In addition to Donna's advice, we can add a few pointers to help. First, SWAT runs fine on the HP 3000. Have a look at the webpage about the last version of MPE/iX Samba to see a SWAT confirmation. SWAT's been around since 2002, when we took note of Appel's port of it.

There's also a nice roundup of a Samba startup regimen for the HP 3000 at the Enterprise Systems Journal website. Enjoy the news about the pizza contributions on that page, too.

Posted by Ron Seybold at 05:02 PM in Hidden Value | Permalink | Comments (0)

March 08, 2019

It may be later than you think, by Monday

Daylight Saving Time kicks off early on Sunday. By the time you're at work on Monday it might seem late for the amount of light coming in your window. If you're working at home and next to the window, it will amount to the same thing. We lose an hour this weekend.

This reset of our circadian rhythms isn't as automatic as in later-model devices. Like my new Chevy, which is so connected it changes its own clocks, based on its contact with the outer world. HP 3000s and MPE systems like those from Stromasys don't reach out like that on their own. The twice-a-year event demands that HP 3000 owners adjust their system clocks.

Programs can slowly change the 3000's clocks in March and November. You can get a good start with this article by John Burke from our net.digest archives.

The longer MPE servers stay on the job, the more important their date manipulations become to their users. The server already hosts a lot of the longest-lived data in the industry. Not every platform in the business world is so well-tooled to accept changes in time. The AS/400s running older versions of OS400 struggled with this task.

You also need to be sure your 3000's timezone is set correctly. Shawn Gordon explained how his scheduled job takes care of that:

"You only have to change TIMEZONE. For SUNDAY in my job scheduler I have the following set up to automatically handle it:

IF HPMONTH = 3 AND HPDATE > [this year's DST] THEN
   ECHO Setting clock for Daylight Saving Time
   SETCLOCK TIMEZONE = W7:00
ENDIF
IF HPMONTH = 11 AND HPDATE < [this year's ST] THEN
   ECHO We are going back to Standard Time
   SETCLOCK TIMEZONE = W8:00
ENDIF

3000 customers say that HP's help text for SETCLOCK can be confusing:

SETCLOCK  {DATE= date spec; TIME= time spec [;GRADUAL | ;NOW]}
   {CORRECTION= correction spec [;GRADUAL | ;NOW]}
   {TIMEZONE= time zone spec}
   {;CANCEL}

Orbit Software's pocket guide for MPE/iX shows the correct syntax. In this case, ;GRADUAL and ;NOW may only be applied as modifiers to the DATE= and TIME= keywords, not to CORRECTION=.
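
As a hedged illustration of that syntax (the date, time, and zone values are examples for US Pacific time):

:SETCLOCK TIMEZONE = W7:00
:SETCLOCK DATE=03/10/2019;TIME=02:00;GRADUAL

The first form shifts only the timezone offset; the second sets an explicit date and time, applied gradually.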

Posted by Ron Seybold at 09:47 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 01, 2019

There's more of this all the time, so dust

Newswire Classic

By John Burke

As equipment gets older and as we neglect the maintenance habits we learned, we will see more messages like this.

Upon arrival this morning the console had locked up. I re-started the unit, but the SCSI drives do not seem to be powering up. The green lights flash on for a second after the power is applied, but that is it. The cooling fan does not turn either. I am able to boot, but get the following messages: LDEVS 5, 8, 4, 3, 2 are not available and FILE SYSTEM ERROR READING $STDIN (CIERR 1807).

When I try to log on as manager.sys, I must do so HIPRI, and get the following: Couldn’t open UDC directory file, COMMAND.PUB.SYS. (CIERR 1910) If I had to guess, I would say the SCSI drives are not working. Is there a quick fix, or are all the files lost? I should add that I just inherited this system. It has been neglected, but running, for close to two years. Is it time to pull the plug?

Tom Emerson responded

This sounds very familiar. I’d say the power supply on the drive cabinet is either going or gone [does the fan ‘not spin’ due to being gunked up with dust and grease, or just ‘no power’?] I’m thinking that the power supply is detecting a problem and shutting down moments after powering up [hence why you see a ‘momentary flicker’].

Tim Atwood added

"I concur. The power supply on the drive cabinet has probably gone bad. If this is an HP6000 series SCSI disc enclosure for two and four GB SCSI drives, move very quickly. Third-party hardware suppliers are having trouble getting these power supplies. I know the 4GB drives are near impossible to find. So, if it is an HP6000 series you may want to stock up on power supplies if you find them. Or take this opportunity to convert to another drive type that is supported.”

The person posting the original question replied, “Your post gave me the courage to open the box and the design is pretty straight forward. It appears to be the power supply. As I recall now, the cooling fan that is built into the supply was making noise last week. I will shop around for a replacement. I can’t believe the amount of dust inside!”

Which prompted Denys Beauchemin to respond

The dust inside the power supply probably contributed to its early demise. It is a good idea to get a couple of cans of compressed air and clean out the fans and power supplies every once in a while. That goes for PCs, desktops, servers, and other electronic equipment. The electrical current is a magnet for dust bunnies and other such putrid creatures.

Wayne Boyer of Cal-Logic had this to say; useful because supplies may be hard to locate

Fixing these power supplies should run around $75 to $100. Any modular power supply like these is relatively easy to service. I never understand reports of common and fairly recent equipment being in short supply. It is good advice to stock up on spares for older equipment. Just because it’s available somewhere and not too expensive doesn’t mean that you can afford to be down while fussing around with getting a spare shipped in.

The compressed air cans work, but to really do a good job on blowing out computer equipment, you need to use an air compressor and strip the covers off of the equipment. We run our air compressor at 100 PSI. Note that you want to do this blasting outside! Otherwise you will get the dust all over wherever you are working. This is especially important with printers, as you get paper dust, excess toner, etc. building up inside the equipment. I try and give our office equipment a blow-out once a year or so. Good to do that if a system is powered down for some other reason.

Bob J. of Ideal Computer Services added

The truth sucks. There are support companies that don’t stock spare parts. The convenient excuse when a part is needed is to claim that ‘parts are tough to get.’ Next they start looking for a source for that part. One of my former employers always pulled that crap.

Unfortunately, quality companies get grouped with the bad apples. I always suggest system managers ask to visit the support supplier's local parts warehouse. The parts in their warehouse should resemble the units on support. No reason to assume the OEM has the most complete local stock either. Remember HP's snow job suggesting that 9x7 parts would become scarce and expensive? Different motive, but still nonsense.

Posted by Ron Seybold at 10:09 PM in Hidden Value, Homesteading, Newswire Classics | Permalink | Comments (0)

February 22, 2019

Cautions of a SM broadsword for every user

NewsWire Classic

By Bob Green

Vladimir Volokh was doing MPE system and security consulting at a site. One of his regular steps is to run VESOFT’s Veaudit tool on the system. From this he learned that every user in the production account had System Manager (SM) capability.

Giving a regular user SM capability is a really bad thing. It means that the users can purge the entire system, look at any data on the system, insert nasty code into the system, etc. And this site had just passed their Sarbanes-Oxley audit.

Vladimir removed SM capability from the users and sat back to see what would happen. The first problem to occur was a job stream failure. The reason it failed was because the user did not have Read access to the STUSE group, which contained the Suprtool "Use" scripts. So, Suprtool aborted. 

Background Info

For those whose MPE security knowledge is a little rusty, or non-existent, we offer a helpful excerpt from Vladimir’s son Eugene, from his article Burn Before Reading — HP 3000 Security And You, available at www.adager.com/VeSoft/SecurityAndYou.html


When a user tries to open a file, MPE checks the account security matrix, the group security matrix, and the file security matrix to see if the user is allowed to access the file. If he is allowed by all three, the file is opened; if at least one security matrix forbids access by this user, the open fails.

For instance, if we try to open TESTFILE.JOHN.DEV when logged on to an account other than DEV and the security matrix of the group JOHN.DEV forbids access by users of other accounts, the open will fail (even though both TESTFILE’s and DEV’s security matrices permit access by users of other accounts).

Each security matrix describes which of the following classes can READ, WRITE, EXECUTE, APPEND to, and LOCK the file:

• CR - File’s creator

• GU - Any user logged on to the same group as the file is in

• GL - User logged on to the same group as the file is in and having Group Librarian (GL) capability

• AC - Any user logged on to the same account as the file is in

• AL - User logged on to the same account as the file is in and having Account Librarian (AL) capability

• ANY - any user

• Any combination of the above (including none of the above)

...

Whenever any group is created, access to all its files is restricted to GU (group users only).


As Eugene points out above, account users do not have Read access by default to a new group in their account. This was the source of the problem at the site Vladimir was visiting. When the jobs could not read the files in the new STUSE group, the system manager then wielded the MPE equivalent of the medieval broadsword: give all the users SM capability.

ALTUSER PRODCLRK; CAP=SM,IA,BA,SF,...

This did solve the problem, since it certainly allowed them to read the STUSE files, but it also allowed them to read or purge any file on the system, in any account.

What he should have done was an Altgroup command immediately after the Newgroup command:

ALTGROUP stuse;access=(r:any;a,w,x,l:gu)

or specified the correct access when the group was built:

NEWGROUP stuse;access=(r:any;a,w,x,l:gu)

Since the HP 3000 runs in a corner virtually unattended (except for feeding the occasional backup tape), we often forget many of the options on the commands that are used sparingly. Neil Armstrong, my cohort in our Labs, often does a Help commandname to remind himself of some of the pitfalls and options on the lesser-used commands, NEWGROUP being one of them.

Posted by Ron Seybold at 02:36 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 20, 2019

Security advice for MPE appears flameproof


Long ago, about 30 years or so, I got a contract to create an HP 3000 software manual. There was a big component of the job that involved making something called a desktop publishing file (quite novel in 1987). There was also the task of explaining the EnGarde/3000 security software to potential users. Yow, a technical writer without MPE hands-on experience, documenting MPE V software. 

Yes, it was so long ago that MPE/XL wasn't even in widespread use. Never mind MPE/iX, the 3.0 release of MPE/XL. All that didn't matter, because HP preserved the goodness of 3000 security from MPE V through XL and iX. My work was to make sense of this security as it related to privileges.

I'll admit it took yeoman help from Vicky Shoemaker at Taurus Software to get that manual correct. Afterward I found myself with an inherent understanding, however superficial, about security privileges on the HP 3000. I was far from the first to acquire this knowledge. Given another 17 years, security privileges popped up again in a NewsWire article. The article by Bob Green of Robelle chronicled the use of SM capability, pointed out by Vladimir Volokh of VEsoft.

Security is one of those things that MPE managers didn't take for granted at first, then became a little smug about once the Internet cracked open lots of business servers. Volokh's son Eugene wrote a blisteringly brilliant paper called Burn Before Reading that outlines the many ways a 3000 can be secured. For the company which is managing MPE/iX applications — even on a virtualized Charon server — this stuff is still important.

I give a hat-tip to our friends at Adager for hosting this wisdom on their website. Here's a recap of a portion of that paper's good security practices for MPE/iX.

Volokh’s technical advisory begins with a warning. “The user is the weakest link in the logon security system -- discourage a user from revealing passwords. Use techniques such as personal profile security or even reprimanding people who reveal passwords. Such mistakes seem innocent, but they can lose you millions."

His bullet points from the heyday of MPE still make good sense to follow, if you're managing a system in our homesteading and archiving era.
  • Passwords embedded in job streams are easy to see and virtually impossible to change -- avoid them.

  • Some forms of access are inherently suspect (and thus require extra passwords) or are inherently security violations. Thus, access to certain user IDs at certain times of day, on certain days of the week, should be specially restricted.

  • Many security violations can be averted by monitoring the warnings of unsuccessful violation attempts that often precede a successful attempt. If possible, change the usual MPE console messages so they will be more visible.

  • Leaving a terminal logged on and unattended is just as much a security violation as revealing the logon password. Use some kind of timeout facility to ensure that terminals don't remain inactive for long; set up all your dial-in terminals with subtype 1.

  • A useful approach to securing your system is to set up a logon menu which allows the user to choose one of several options rather than to let the user access MPE and all its power directly.

  • Blocking out MPE commands via UDCs with the same name will usually fail unless the command is SETCATALOG or SHOWCATALOG, or if you also forbid access to many HP subsystems and HP-supplied programs. This severely limits the usefulness of this method.

  • Remember that RELEASE-ing a file leaves it wide open for any kind of access; RELEASE files cautiously, and re-secure them as soon as possible.

  • Try to make it as easy as possible for people to allow their files to be accessed by others without having to RELEASE them. Thus, build all accounts with (r,w,x,a,l:any) so that allowing access to a group will be easier (see the sketch after this list).

  • If a group is composed mostly of files that should be accessible by all users in the system or by all account users, build it that way. This will also reduce RELEASEs.

  • The ALTSEC command is useful for restricting access to files in a group to which access is normally less restricted.
  • Lockwords aren't all they're cracked up to be. Other approaches should be preferred.

  • You should only give OP capability to users who you trust as much as you would a system manager; to users who have no access to magnetic tapes or serial disks; or to users who have a logon UDC that drops them into a menu which forbids them from doing STOREs or RESTOREs.

  • You should give PM capability only to users who you trust as much as you would a system manager.

  • If any user has SAVE access to a group with PM capability, or write and execute access to any program file that resides in a group with PM capability, he can write and run privileged code.

  • Never RELEASE a program file that resides in a group which has PM capability.

  • Privileged programs must never call DEBUG unless their user is privileged, and must never dynamically load and call procedures from a user's group or account SL unless the user is privileged.

  • IMAGE/SQL security is not particularly useful except for protecting databases against unauthorized QUERY access. In fact, some degree of protection against unauthorized QUERY access can be given by using the DBUTIL "set subsystem" command to disallow any QUERY access or QUERY modification of a database.
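
Here's a hedged sketch of the account-building advice from the list above (DEV, MGR, and PRIV are placeholder names):

:NEWACCT DEV,MGR;ACCESS=(R,W,X,A,L:ANY)
:ALTGROUP PRIV;ACCESS=(R,A,W,X,L:GU)

The wide-open account access leaves the real restricting to group-level security, so fewer files ever need a RELEASE.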

Posted by Ron Seybold at 02:23 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 15, 2019

Scripting a Better UPS link to MPE/iX

In another article we talked about how HP dropped the ball on getting better communication between UPS units and the HP 3000. It was a promise that arrived at about the same time as HP's step-away from the 3000. The software upgrade to MPE/iX didn't make it out of the labs.
 
That didn’t stop Donna Hofmeister. About that time she was en route to a director's spot on OpenMPE. Later on she joined Allegro. We checked in to see if better links between Uninterruptible Power Supplies and MPE/iX were possible. Oh yes, provided you were adept at scripting and job stream creation. She was.
 
"I wrote a series of jobs and scripts that interrogate an APC UPS that is fully-connected to the network — meaning it has an IP address and can respond to  SNMP," she said. "These are the more expensive devices, for what it's worth."
 
"It worked beautifully when a hurricane hit Hawaii and my 3000 nicely shut itself down when power got low on the UPS. Sadly, the HP-UX systems went belly-up and were rather a pain to get running again."

Posted by Ron Seybold at 12:47 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 08, 2019

What can a 3000 do to talk to a modern UPS?

Michel Adam asks, "How can I install and configure a reasonably modern UPS with a 3000? I'd like to use something like an APC SmartUPS or BackUPS, for example. What type of signaling connection would be the easiest, network or serial?"

Jim Maher says

First you need to find out what model 3000. Listed on the back will be the power rating. Some of the older ones use 220V. Then you can match that with a proper UPS.

Michel Adam explains in reply

This HP 3000 is an emulator, i.e. a 9x8 equivalent or A-Class. I guess a regular "emulated" RS-232, or actual ethernet port would be the most likely type of connection. In that sense, the actual voltage is of no consequence; I only need to understand the means of communicating from the UPS to the virtual 3000.

Tracy Johnson reports

While we have three "modern" APC units, each with battery racks four high, they also serve the rest of the racks in our computer room. Our HP 3000 is just a bigger server in one of those racks. Each APC services only one of the three power outlets on that N-Class. Their purpose is not to keep the servers "up" for extended periods, but to cover the few seconds' lapse before our building generator kicks in after a complete power loss.

As far as the UPS talking to our HP 3000 serial port, we didn't bother. Our APC units are on the network so they have more important things to do, like send emails to some triage guy in Mumbai should they kick in.

Enhanced, or not?

In the history department, Hewlett-Packard had its labbie heart in the right place just weeks before the vendor canceled its 3000 plans. We reported the following in October of 2001:

HP 3000s will say more to UPS units

HP's 3000 labs will be enhancing the platform to better communicate with Uninterrupted Power Supply systems in the coming months. HP's Jeff Vance reports that the system will gain the ability to know the remaining time on the UPS, so system managers can know that the UPS will last long enough to shut down applications and databases and let the system crash. Vance said that HP has scheduled to begin its work on this improvement — voted Number 8 on the last System Improvement Ballot — in late fall.

Late fall of 2001 was not a great time to be managing future enhancements for the 3000 and MPE/iX. The shortfall of hardware improvements and availability has been bridged by Charon. Adjustments to MPE/iX for UPS communication have not been confirmed.

Posted by Ron Seybold at 05:53 PM in Hidden Value, User Reports | Permalink | Comments (0)

January 11, 2019

Fine-tune: how to reinstate config files

I’m replacing my old Model 10 with a Model 20 on MPEXL_SYSTEM_VOLUME_SET. This will of course require a re-INSTALL. What’s the best way to reinstate my network config files? Just restore NMCONFIG and NPCONFIG? Can I use my old CSLT to re-add all my old non-Nike drives and mod the product IDs in Sysgen, or do I have to add them manually after using the Factory SLT?


Gilles Schipper replies:

Do the following steps:
- Using your CSLT, install onto LDEV 1
- Modify your I/O to reflect the new/changed config
- Reboot
- Use VOLUTIL to add non-LDEV1 volumes appropriately
- Restore the directory or directories from backup
- Perform a system reload from the full backup, using the KEEP, CREATE, OLDDATE, PARTDB, and SHOW=OFFLINE options in the RESTORE command
- Reboot again
No need for separate restores of specific files.
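
As a sketch, the RESTORE in that reload step might look like this (with a tape file equate assumed):

:FILE T;DEV=TAPE
:RESTORE *T;/;KEEP;CREATE;OLDDATE;PARTDB;SHOW=OFFLINE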

We had another hard drive fail this weekend. It was in an enclosure of old 2GB drives that we really did not need, so I just unplugged them and rebuilt my volumes without them. However, when I boot up I get error messages that path 10/4/0.20-26 can’t be mounted. How do I get rid of these messages?

Gilles Schipper replies:
You can safely ignore the messages, but if you want them not to reappear, simply remove those devices from your IO configuration via SYSGEN, keep the new configuration to CONFIG.SYS, and reboot with a START NORECOVERY. When you’re back up again, you should create a new SLT tape.
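
A hedged sketch of that SYSGEN pass, using the path from the question (the LDEV number is a placeholder; HELP inside the IO configurator confirms each command):

:SYSGEN
sysgen> IO
io> DDEV 20
io> DPATH 10/4/0.20
io> HOLD
io> EXIT
sysgen> KEEP CONFIG
sysgen> EXIT

Delete each orphaned device with DDEV, then the path itself with DPATH, before keeping the configuration and rebooting with START NORECOVERY.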

Paul Edwards adds:
Use SYSGEN with DOIONOW or IOCONFIG to delete them. No reboot is required.

Posted by Ron Seybold at 08:14 PM in Hidden Value | Permalink | Comments (0)

December 28, 2018

Fine Tune: Optimized Disaster Recovery

By Gilles Schipper

While working with a customer on the design and implementation of a disaster recovery (DR) plan for a large HP 3000 system, it became apparent the implementation had room for improvement.

In this specific example, the customer had a production N-Class HP 3000 and a backup HP 3000 Series 969 system in a location several hundred miles from the primary.

The process of implementing the DR was completed entirely from a remote location — thanks to VPNs and an HP Secure Web Console on the 969. One of the most labor-intensive aspects of the DR exercise was to rebuild the IO configuration of the DR machine (the 969) from the full backup tape of the production N-Class machine, which included an integrated system load tape (SLT) as part of the backup.

The ability to integrate the SLT on the same tape as the full backup is very convenient. It results in a simplified recovery procedure as well as the assurance that the SLT to be used will be as current as possible.

When rebuilding a system from scratch from a SLT/Backup tape, if the target system differs in architecture from the source system, it is usually necessary to modify all the device paths and device configuration specifications with SYSGEN and then rebooting the system in order to even be able to utilize the tape drive of the target system to restore any files at all.

(This would be apart from the files restored during the INSTALL process — which does not require proper configuration of any IO component at all).

Some would argue that this system re-configuration needs to be completed only once, since any future system rebuilds would require only a “data refresh” rather than a complete system re-INSTALL.

I say that this would be true only in very stable system environments where IO configurations — including network printer configurations — are static and where TurboIMAGE transaction logging is not utilized. Otherwise there could be unpleasant results and complications from using stale configurations in a real disaster recovery situation. In any case, there really is no reason to take any chances.

The labor-intensive step of creating a proper DR target system configuration environment is achievable minus the labor-intensive part – or at least without repetition of the manual chore of re-configuring the target system each time the DR is exercised.

Unless both the production system and the DR system are architecturally similar (i.e. they belong to same HP 3000 family) the configuration of the target system (the DR machine) cloned from the source system (the production machine) will be non-trivial.

At a minimum, before data restore can begin on the DR machine, the path hierarchy of the tape drive associated with the backup tape must be re-created. Further, if the subsequent restore requires more than just the system disk, all the path components for all the disk drives must also be created.

In a real DR situation, this task can be daunting at best — particularly since it may be difficult to access the appropriate documentation that describes the pertinent SYSGEN configuration. How much better to complete this configuration well in advance of the hope-to-never-happen event.

In fact, it is entirely possible to create an appropriate DR configuration environment that is (almost) completely integrated into one’s production environment.

SYSGEN IO requirements

In order to provision a potential DR HP 3000 system’s IO configuration requirements into an existing production HP 3000 SLT, it is only necessary to configure all of the DR path components into the existing production system’s IO configuration.

The fact that these paths do not exist on the production (source) system is immaterial — as long as you can withstand the menacing, although perfectly innocuous console error messages that accompany a reboot of a system so configured.

There is also the matter of actual device numbers — and that is why I included the “almost” when mentioning “completely integrated” earlier.

Clearly, it is not possible to have duplicate device numbers when configuring both production and DR devices into the production SYSGEN IO configuration. So, in order to distinguish between the two systems (one the real production, the other virtual DR), I simply add 100 (you can choose any number) to the device numbers associated with the virtual machine. Then when actually testing or invoking the DR process, it is a simple matter to change the device numbers in a batch job designed for that purpose.

Another batch job could be pre-built that would add the appropriate disk drives and volume sets to the system’s disk pool, using VOLUTIL. These batch jobs would be included in the full backup tape and could be restored almost immediately following the INSTALL by referencing :file tape;dev=107 (to use my example of adding 100 to the corresponding virtual device).

The command :restore *tape;{fileset};directory;olddate;keep;create;show (where {fileset} corresponds to the fileset that includes the device-number-change and VOLUTIL batch jobs) brings them back.
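
Spelled out as a sketch, with the {fileset} placeholder kept and the device 107 file equate from the example above:

:FILE TAPE;DEV=107
:RESTORE *TAPE;{fileset};DIRECTORY;OLDDATE;KEEP;CREATE;SHOW

One could take this technique one step further in the case where the DR target machine is unknown.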

In such a situation, you could create a SYSGEN IO configuration that includes path constructs for any possible virtual machine that you could think of and include them in the host configuration – adding 100 for devices associated with virtual machine 1, 200 for virtual machine 2, and so on.

Posted by Ron Seybold at 08:03 PM in Hidden Value, Homesteading, Newswire Classics | Permalink | Comments (0)

December 19, 2018

Even DTCs can spark memories for 3000s

[Diagram: DTC to 3000 N-Class configuration]
The Distributed Terminal Controller was a networking device with intelligence that stood between an HP 3000 and a peripheral. We use the past tense to describe the DTC usage for many of the homesteading 3000 sites. In some places, DTCs continue to let 3000s shake hands with other devices.

At TE Connectivity in Hampton Roads, Va. the box works between an N-Class 3000 (the ultimate generation) and an impact printer (of considerably older peerage). Al Nizzardini makes the pair work for the company that employs 3000s across the globe, from North America to China.

"Our DTC 48 with 3-pin ports died on us," Nizzardini said. "We have an impact printer connected to the 48, the only thing that is hanging off that DTC." At first the solution to the blocked connection was to use an even older controller, the DTC16 with modem ports. That would've involved shorting out pins on the DTC 16.

Nizzardini asked and a few veterans answered. Francois Desrochers said Nizzardini would need pins 2, 3 and 7 (send, receive, ground). "You may have to short out 5 and 20," he added. Another combination from Gary Robillard suggested connecting 4 and 5 together and 6, 8, and 20 together. "We always had 2 and 3 crossed—2 to 3 and 3 to 2," he said.

It's been 20 years since HP last released a DTC, something that's still useful for older peripherals. The intel to keep one connected to the latest 3000s is still available in the 3000 community. Old doesn't mean dead when someone remembers the essentials. Nizzardini solved his problem without shorting out pins, just by locating another working DTC 48. MANMAN drives the workflow at TE Connectivity, but the real driver is pros like Nizzardini, helping one another remember.

 

Posted by Ron Seybold at 05:35 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 14, 2018

Routers and switches and hubs, oh my!

Editor's Note: Initial HP 3000 hardware networking can be like a trip down a Yellow Brick Road. Here's a primer for the administrator who's wondering if that HP 3000 can link to a network.

By Curtis Larsen

Auntie MAU! Auntie MAU! A Twisted Pair! A Twisted Pair!

Once upon a time networks were as flat as the Kansas prairie, and computers on them were a lot like early prairie farmsteads: few and far between, pretty much speaking to each other only when they had to. (“Business looks good again this year.” “Yep.”) Most systems still used dumb terminals, and when speaking to anything outside the LAN, system-to-system modem connections were the way to do it.

A tornado named the Internet suddenly appeared in this landscape. It uprooted established standards and practices, swept aside protocols and speed limitations, and took us into a Technicolor networking landscape very different than what was there before.

Toto, I get the feeling our packets aren’t in Kansas anymore

Smaller companies were tossed before the tornado to eventually land and quickly begin growing again in the new environment. Large companies like IBM, HP, Digital, and Microsoft, who were rooted and established in their own proprietary standards (it sounds like an oxymoron, but it’s true) survived by generally ignoring the howling winds. Eventually, munchkin-like, they all came out to see what the general fuss was about, and found that a house-sized chunk of change (pun intended) had landed.

Networking, and the TCP/IP protocol had truly arrived in style, bringing strange new applications and markets. Serial connections and proprietary networking (“What do you mean we don’t need SNA to connect to the Wichita office anymore?”) gave way to a new kid on the block. And her little dog, too.

Follow the Yellow-Colored-Cable-and-Labeled-at-Both-Ends Road!

So then the HP 3000 managers found themselves sitting in a strange new networking land of strange new networking things. And for some of us, trying to understand the whole of it all — especially in relation to “legacy” system like the HP e3000 — was a little daunting. What are all these networking black boxes we plug the system into, and what do they all do? How can they make life better? (How can they make life worse?) If you’re not sure (or just plain curious) read on.

We’re off to see the wizard — this wonderful wizard of ours!

The networking wizard of your HP 3000 system is a program named “NMMGR.” It allows you to define networking hardware and tells you how to create connections with them. But what things can you define? Before we talk about connecting to things, we should probably take a crash-course in the things you’re connecting to.
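
If you want to summon the wizard yourself, it answers to a simple run (a sketch — you'll need the appropriate capabilities, such as NM, on a real system):

:RUN NMMGR.PUB.SYS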

Which path do I take? Well, you could take this one, that one, or both...

The basic networking boxes you’ll connect to are hubs, routers, switches, bridges, and gateways. Oh My. Let’s take them one at a time.

Since life is like an analogy, I’ll stretch one for the hub to go like this: If your network traffic is like water through a hose, then a hub is like a splitter, allowing multiple exits. Generally speaking, a hub simply splits the traffic from the “incoming” line into each connected port “out.” This is cheap and simple to set up if you don’t have a lot of connections, but like too many divisions on any hose, too many hubs will make the end connections anemic. The fewer connections the better, so most hubs have no more than 24 ports total.

Obviously, to make things better for all connections in larger networks, more “water pressure” was needed — and the switch was born.

Pay no attention to that man behind the curtain!

No, I’m not talking about the System Administrator. A switch looks very similar to a hub, but the appearance ends there. Again, if your network is like a stream of water in a hose, then your garden-variety switch is like a water tank, adding pressure to the line. Huge water tanks are placed at the heart of a city’s water system, while small tanks are placed on buildings. At the heart of most networks — tended by a cooing Network Administrator — is a core switch (the main tank).

Additional “work group” switches (building-sized tanks) can be used in wiring closets for special-need areas of the network. So, although a hub and a switch both offer multiple connections, the resulting “streams” have vastly different origin and force. Now that we’re one big speedy networking family, no one minds if it all fails, right? No? Well love can build a bridge, and so can electronics.

My Network’s crashing… What a world! What a world…

Having all your network connections on one physical segment isn’t too grand — especially when it fails. By segregating physical networks and then “bridging” them together, you ensure that in the face of adversity, some people can still laugh at the ones who can’t work. Aquariously speaking, a simple bridge is like a valved pipe between two water systems, passing water in both directions, and shutting one side’s valve if that system “loses pressure” (goes down). You say you want to route water based on content? Well then.

This here’s the ‘Packet of a Different Header’ you’ve heard about

Simply put, a basic router is an intelligent (logically) one-way bridge, examining network data information and very quickly sending data packets down one line or another. In our epic analogy, a router could be a thermal valve, forcing only cold water to flow this way, and hot water that way, preserving us from the heartbreak of tepidity. Since the router has to work quickly, it usually works at a lower level than other equipment does, caring less about content and more about destination. You say you’d like to exchange hot water with someone else? You’d like the gate to swing both ways?

There’s no place like the home network! No place like the home network…

A router is excellent at sending packets from Here to There (and not necessarily Back Again), but nothing beats the gateway for two-way communication. A gateway takes data from one network and sends it to another, even re-creating the data packet on the other side, if need be.

To stretch our analogy to its limits, we could say that two different water systems exist, having the same characteristics, including temperature. One system is chlorinated, while the other is not, and so simply allowing the water to pass unmolested would be an issue — one system would become diluted, and the other exposed. What we need is a filtration pump that allows the water to be pumped in either direction, adding chlorine one way, and taking it out in the other direction.

Connecting to the Internet requires a gateway, since your home network doesn’t “know” how to reach something out there. What it does know is how to hand off a data packet destined for “Not Here” to a gateway for processing. The gateway in turn checks the packet’s address and sends it to the best possible network closer to the packet’s Ultimate Destination, re-labeling the packet as it does so, and putting its own address in the packet’s “return address.” If the packet’s Ultimate Destination isn’t on the new network either, then the gateway there does the same thing until the packet finally hits the Emerald City.

On its way back home, because of all the “return addresses” it picked up, the packet passes back through each gateway that it came from until, clicking its little ruby slippers, the packet realizes it is in no place but home.

Because of its intensive examination work, a gateway is almost always dedicated to its task, especially on larger networks. It was the gateway’s filtering abilities that led to using them as a firewall to protect networks by purposefully filtering and/or denying different types of connections and data. But the firewall is a topic all its own — just make sure you use one!

And you were there, and you, and you.
Oh Auntie Carly, there’s no place like MPE!

So there you have it — Networking Devices 101. Now that you know what you can connect your e3000 to, you can come up with some ideas on how to use them, and answer questions about what to connect to. Should a 3000 be connected to a hub, or to a switch? (Switch!) Does a printer need to be connected to a hub or a switch? (A hub will usually be fine.) Should I use my 3000 as a gateway? (I think not.) Should the physical part of my network the e3000 is on be bridged? (Yes.) Can I configure a gateway and connect my e3000 to the Internet? (Certainly. But make sure you have a firewall first!)

Curtis Larsen has been working with HP 3000s for over 25 years, and believes that, given enough time, any application can be written using the CI.

Posted by Ron Seybold at 07:02 PM in Hidden Value, Newswire Classics | Permalink | Comments (0)

December 07, 2018

Memory and Disk Rules for Performance

Concentration
NewsWire Classic

By Jeff Kubler

You need to get management support for your efforts to keep your systems performing at their best. Memory and disk are two components of your performance picture under MPE/iX. Main Memory is the scratch pad for all the work that the CPU performs. Every item of data that the CPU needs to calculate on or update must be brought into Main Memory.

CPU used to manage Main Memory: The CPU must manage memory. It must cycle through the memory pages, marking some as Overlay Candidates (this means that new data from disk may be placed here), noting that some are in continued use, and swapping others out to virtual or what is called transient storage. Swapping to disk occurs when data is in continued use but a higher priority process needs room for its data. To accommodate this higher priority process and its need for memory space, the Memory Manager will swap the memory for the lower priority process out to disk. The more activity the Memory Manager performs, the more CPU it takes to do this. Therefore it is the percentage of CPU used to manage memory that we use as a measurement.

Page Faults per Second: A Page Fault occurs each time a memory object is not found in memory. The threshold for the number of Page Faults per second that can be incurred before a memory problem is indicated varies with the size and the power of the CPU. Larger machines can handle more Page Faults per second while a smaller box will encounter problems with far fewer.

An exceptional number of Page Faults should never be used as the sole indicator of memory problems but when observed should be tested with the memory manager percentage. If both agree, you have a memory shortage. There are some strange things that I have observed with Page Faults, so it does not stand alone as an indicator of memory shortage.

The number of Page Faults per second and the amount of CPU needed to manage Memory are always evaluated in conjunction with each other. That is to say the high Page Fault Rate will not be considered a problem if the Memory Manager Percentage is not above 4 percent.

The Disk Environment is usually referred to as Secondary Storage. This is where all the data needed for system use is stored. Since Main Memory is not large enough to store all of the data that will be needed by all the processes, there must be a location for this larger pool of data. In the MPE/iX environment a great attempt was made to limit the impact of the Disk Environment so that it could not be the bottleneck that it once was in the Classic environment. Even though the Disk Environment does not have the significance it once had, this area can still be a bottleneck. As the CPU speeds increase, bottlenecks will become more significant.

Several different factors can affect the Disk Environment. One of these is data locality, which comes in two types: data locality within IMAGE datasets, and data locality across the disk itself.

Data locality across Disk: This refers to the location of the separate pieces of files (called extents). When files are placed on disk, they can occupy contiguous sectors, or they can be broken up and placed in non-contiguous locations, even across many different disks. When files are not in contiguous locations, they are said to be fragmented. The advantage of contiguous placement is greater efficiency in retrieving data: when files need to be read, the head movement of the disk drive is minimal. The head moves to the location and the retrieval begins.

As the disk fills up, the system cannot find one contiguous location to build a new file. Therefore, the system breaks the file up into extents and places the pieces wherever it can. A system reload will put files back into contiguous locations (usually starting at the location of the file's file label), or products such as Lund Performance Solutions' De-Frag/X can be used to put the files back into contiguous locations.

Operating systems allocate disk space in chunks as they create and expand files and transient disk space (swap areas, etc.). When files are purged, these chunks are released for reuse. Over time the disk space may end up fragmented into many small pieces, which can degrade the performance and the reliability of the system.

To observe and correct fragmentation on MPE, you can use the De-Frag/X product from Lund Performance Solutions or the Contigvol command of Volutil. The latter is stable and reliable, but requires multiple passes to get the best results.
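
A hedged sketch of the Volutil route follows. The volume set and member names here are illustrative, and CONTIGVOL's exact parameters vary by MPE/iX release, so check HELP inside the utility before running it against a production volume:

:volutil
volutil: contigvol mpexl_system_volume_set:member2
volutil: exit

Expect to run more than one pass; as noted above, Contigvol needs multiple passes for the best results.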

Data locality within IMAGE datasets is the other area of major concern. There are two different types of datasets to be concerned with: detail datasets and master datasets.

The Detail Datasets: this type of set holds the day-to-day data input. Detail sets begin with nothing in them. When a record is added, 1 is added to something called the high-water mark — a number that tells how many record slots have been used in the set — and the record is placed in the set.

The problem is that IMAGE automatically reuses space that is given up when a record is deleted. This space is often called the delete chain. New records are placed in the most recent location available on the "delete chain." This means that new records are not in the same physical locality as the rest of the records and may be far removed from the other records.

The ideal state for a detail dataset is one where the detail entries are sorted by the key field. This allows the data to be retrieved in the smallest number of I/Os, making efficient use of the MPE system's pre-fetching of data. When this is not the case, we can measure the dataset's lack of efficiency with something called the elongation factor. This is simply a measure of how many more I/Os the user must perform to retrieve the desired data.
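
A worked example of that arithmetic, with illustrative numbers: if reading a poorly organized detail set costs 2,000 I/Os, while the same records sorted by key would need only 400, the elongation factor is 5, meaning every retrieval costs five times the I/O it should. In CI terms:

setvar actual_ios 2000
setvar ideal_ios 400
setvar elongation actual_ios / ideal_ios
echo Elongation factor is !elongation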

The Master Datasets hold unique key values. There are two types of master sets: manual masters and automatic masters. Manual masters have user-entered master entries, while automatic masters have entries placed in them automatically to accommodate access to detail records. The issue of importance to performance here is something called the hashing algorithm. This is the method the database uses to calculate the location of each record placed in a master set. The intent is to distribute entries across the set as equally as possible.

The hashing algorithm uses the size of the set in its calculation. A poor size, or a size that is not large enough, will result in an unequally distributed dataset. A poor size is most easily described as one that is not a prime number. It means that when the hashing algorithm calculates a location, there is a higher potential that a record will already exist at that location. When this happens, a secondary position must be calculated. When secondaries are placed in another block within the dataset, another I/O must occur to retrieve the needed data. Since I/O to disk is the slowest type of access, we want to avoid it at all costs.
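
A simplified model of why the capacity matters as a divisor follows. This is only an illustration, not IMAGE's actual hashing algorithm, which is more involved:

# illustrative only: primary address from key value mod capacity
setvar key_value 100260
setvar capacity 1009
# 1009 is prime; a round capacity like 1000 clusters keys sharing factors
setvar primary (key_value mod capacity) + 1
echo Key !key_value hashes to location !primary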

Posted by Ron Seybold at 02:10 PM in Hidden Value, Homesteading | Permalink | Comments (0)

November 16, 2018

Fine-tune: 3000 support rescues, MPE/iX version matrix, network printer software

Rescued-boat-people
Steve Douglass of United Technologies Aerospace Systems writes, "We have an A-Class 400-100 machine that would only stay up about an hour before it autobooted. This machine was simply used for archived data lookup from an old ERP system. After trying simple fixes like reseating memory and checking connections we still had the same problem."

"We had no support agreement, and no one wanted to pay for a third-party support company to perform a diagnosis and fix, so we powered the system off. Of late there is interest in resurrecting this machine, and someone may be willing to foot the bill. We've researched and found Pivital Solutions and the Ideal Computer Services Group. Are there other recommendations?

John Clogg reports

We currently use Sherlock Services and are happy with the support they provide. I have also used Ideal Services and can recommend them with confidence.

Jim Maher of Saratoga Computers adds

We still service all of the HP 1000, 3000 and e3000 systems. Call anytime.

We replaced a printer recently and we can't get the new one to play nice with the 3000.  It's a LaserJet M608. When sending output to it, it prints a page or two and hangs. The spool file remains in a "print" state. The only way to reset it is to do a STOPSPOOL followed by a couple of ABORTIOs. The next time I start the spooler, the same thing happens, regardless of what I'm printing. What things should I check?

Tracy Johnson says

Try adding SNMP_SUPPORTED = FALSE (or TRUE). You have a 50/50 chance either way. Sometimes you just have recalcitrant printers that won't cooperate with the HP 3000. Consider getting Espul from Richard Corn, or Minisoft's licensed version called Netprint.
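
That setting lives in the printer's entry in NPCONFIG.PUB.SYS. A hedged sketch for a hypothetical LDEV 7 and address follows; some references spell the option snmp_enabled, so verify the keyword against your NPCONFIG documentation:

7 (network_address = 192.168.1.50
   snmp_supported  = false)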

Jim English adds

We use Netprint and eFormz from Minisoft. The eFormz is installed on a Windows server. Not all of our printers go through Netprint, just the ones that print forms or barcodes. We recently installed a newer HP printer and had the same issue you did. I set it up in Netprint and eFormz and it works great now.

Netprint by itself may solve your issue. I set up the printer in eFormz to print receipt travelers, which may have barcodes on them.

Is there a support matrix document that shows the HP 3000 boxes and what versions of MPE they can run? I'm trying to find all the 3000 boxes that support MPE/iX 6.0.

Donna Hofmeister reports

All 9x8, 9x7 and 99x boxes support 6.0. No A-Class or N-Class 3000s support 6.0.

Posted by Ron Seybold at 06:30 AM in Hidden Value, Homesteading | Permalink | Comments (0)

November 09, 2018

Fine-Tune: Test for disasters in any season

Test-siren
NewsWire Classic

Editor's Note: In October of 2001, the world was working in the aftermath of the 9/11 attacks. Our Worst Practices columnist Scott Hirsh wrote this advice about the need to test for disasters. Another crisis would rise up for 3000 owners just a few weeks after this article appeared, this one triggered by HP. Regardless of where your datacenter is focused, it's always a good practice to test.

This Is Not a Test

By Scott Hirsh

For those of us in the United States entrusted with a company’s information resources, the events of September 11 changed everything. Before, our business continuity and disaster recovery plans were primarily concerned with so-called “acts of God.” But we must now plan for the most improbable human acts imaginable. Who among us, prior to September 11, had a plan that took into account multiple high-rise office buildings being destroyed within minutes of each other? As you read this, the insurance industry is revising its assumptions. Likewise, we must now reconsider our approach to managing and protecting the assets for which we are responsible. Never before has the probability of actually needing to execute our recovery plans been so great.

As of this writing there have already been numerous business continuity and disaster recovery articles in the computer press. By now we understand the distinction between keeping the business going – not just IT, but also the whole business – and recovering after some (hopefully minor) interruption. And we’ve covered the issue of risk, where all the trade-offs and costs are negotiated. This whole topic was explored anew in the last few months, but it is still worthwhile to emphasize some early lessons of the attacks, from which we are still recovering.

It Had Better Work

Worst Practice 1: Trying to Fake It — I was visiting a friend’s datacenter recently, where I was told about a recent audit. This friend’s company spent the whole time trying to fake all the audit criteria: disaster recovery preparedness, security, audit trails, etc. At the risk of sounding like your parents, whom does this behavior really hurt? An audit is an ideal opportunity to validate all the necessary hard work required to run a professional datacenter. And should you ever be subjected to attack, electronic or otherwise, you know that your datacenter will survive.

If you didn’t get it before, you’d better get it now: Faking it is unacceptable. Chances are, at some point you will be required to do a real, honest-to-goodness recovery. And if you think you’re safe just because there may not be very many hijacked planes running into buildings such as yours, think again. The threats to your datacenter are diverse and numerous. And, by the way, violent weather, earthquakes and other natural disasters are still there too.

Worst Practice 2: Not Testing — Once you’re serious about continuity and recovery, not only will you plan, but you’ll test that plan often. There are lots of reasons to test your recovery capability often. Among them are: the ability to react quickly in a crisis; catching changes in your environment since your last test; accommodating changes to staff since your last test. A real recovery is a terrible time to do discovery.

Worst Practice 3: Not Documenting — One of the biggest problems with disasters is no warning. That’s why so many tests are a waste of time. Anyone can recover when you know exactly when and how. The truly prepared can recover when caught by surprise. Since you won’t get any warning – except, perhaps, with some natural disasters – you’ll want to have current, updated procedures. Since you’ll probably be on vacation (or wish you were) when disaster strikes, make sure the recovery procedures are off-site and available. If you’re the only one who knows what to do, even if you never take a day off there still won’t be enough of you to go around at crunch time.

Increasing the Odds of Recovery

Worst Practice 4: Taking Too Long — At this point in technology, there are two main ways to deal with a disaster: fail-over and reconstruction. With fail-over, you are replicating data between your main site and a recovery site. These sites can be relatively near each other – across town or perhaps in an adjoining state – or far away. This kind of remote clustering, if you will, is what the largest and most critical institutions use, and the cost is considerable. However, the cost of not doing it is considerably more.

Reconstruction is more about recovery than continuity. I am guessing that the vast majority of e3000 shops base their recovery plans on recalling tapes from a vault (e.g., Iron Mountain) to a recovery site, then restoring their data either to a bare machine or one on which only MPE has been installed. This was certainly true for my own operation, as my management always deemed this less expensive method “adequate.”

But that was then. Today, the amount of data that must be reloaded is so massive that the time to recover renders this method all but worthless. True, your plan can call for a critical subset of data to be restored (not the entire data warehouse). But even current data can now stretch into the terabytes, once you include the applications, utilities, etc.

So the point here is to make sure your recovery methodology is practical from a business standpoint, as well as a technical standpoint. You don’t want to be in the position of estimating “just three more days” before you’re up and running.

Worst Practice 5: Not Recovering a Complete Environment — As the state of the art advances, some technology is left behind. We’ll keep it succinct here: If you need to keep an old technology alive, you may need to provide some or all of the solution yourself. Don’t expect the recovery site to stock or maintain every peripheral ever made just because you have one esoteric requirement. And don’t forget to keep backup copies of any obsolete software packages as well.

Another aspect to this issue, recently discovered at a customer site, is the fact that diverse platforms are now highly integrated. It’s not enough just to recover the e3000. The non-e3000 systems that share data feeds must also be recovered. And don’t forget any outside data sources either. Again, if you’re faking it, you can declare victory when you’ve reconstructed an e3000 at the recovery site. In reality, that only counts if the e3000 system can support the business on its own without any external feeds.

Worst Practice 6: Ignoring the Human Factor — Even the best plans don’t execute themselves. Keep in mind who will be doing what and how things will get done if key individuals are unable to perform their tasks. As we know, families come first, which is proper: so we mustn’t lose sight of our humanity in times of crisis. Any recovery is hard work. That counts double when there are casualties.

Reassess Your Assumptions

Worst Practice 7: A Defeatist Attitude — If you’ve been subjected to the “fake it” mentality, you’re probably demoralized. After all, who among us just wants to go through the motions? Well, it’s now a whole new world, and you have a really good shot at doing things right. But you need to forcefully make your case to those who didn’t take contingency planning seriously in the past. By the time you read this there may be stories about companies that unfortunately couldn’t recover from the September 11 attacks. We can emerge from this atrocity stronger if we do some honest introspection. Every rational businessperson should now be willing to do proper planning. If you can get over the bad practices of the past, you can position yourself and your business to be survivors.

Worst Practice 8: Datacenter Placement — As much as I enjoyed the view from my 29th floor datacenter, it’s pretty obvious now that datacenters don’t belong in certain places – high-rise buildings among them. Besides the obvious prohibitive cost of floor space, there are safety and security issues not obvious until recent events.

I have visited many co-location facilities in the past year, and they all had several things in common:

1. They were in the low-rent district.

2. They were very difficult to find, as they were essentially unmarked.

3. They were very secure (at least relative to downtown datacenters), both physically and electronically.

4. They were redundant up the wazoo.

If this does not describe your datacenter, then perhaps it’s time to consider relocation. Let’s face it, even if there are good reasons why your datacenter needs to be right downtown, I’ll bet your recovery site is in the middle of nowhere. That should tell you something.

Hope for the Best

We’re currently in reactive mode. We’ve now seen one type of unimaginable act, using airliners as missiles. For those unlucky enough to be on the front lines of that atrocity, there was no way to plan for that series of events. And it’s likely that the next event will also be difficult to imagine, and hence plan for. So even the best plans require a great deal of luck, as even the best plan is useless if there is widespread devastation beyond your control. We should be honest about those aspects of business continuity and recovery that are within our control. We must be truly prepared. But we can still hope that we never need to actually use those plans. Not like we did after September 11. At least that’s the hope.

Posted by Ron Seybold at 06:14 PM in Hidden Value, Homesteading | Permalink | Comments (0)

November 02, 2018

Fine-Tune: Ensure Logical Data Consistency

Database_design_concepts
NewsWire Classic

The MPE/iX Transaction Manager for IMAGE does not guarantee logical consistency of your data. How do you ensure logical consistency? Use DBXBEGIN and DBXEND calls around all the DBPUT, DBUPDATE and DBDELETE calls that you make for your logical transaction. Yes, the definition of a logical transaction is up to the programmer.

There can be a lot of confusion about logical consistency, mostly because IMAGE kept adding logging and recovery features over its years of development. Gavin Scott gives a clear explanation of the state of affairs.

It’s amazing how much superstition exists surrounding this kind of stuff, and how many unnecessary rituals and sacrifices are performed daily to appease the mythical pantheon of data integrity gods. Real broken chains are supposed to be impossible to achieve with IMAGE on MPE/iX, no matter what application programs do, or how they are aborted, or how many times the system crashes!

The Transaction Manager provides absolute protection against internal database inconsistencies, as long as there are no bugs in the system and as long as the hardware is not corrupting data. No action or configuration is required on the part of the user.

Logical inconsistencies (order detail without an associated order header record, for example) can easily be created by aborting an application that’s in the middle of performing a database update that spans multiple records. Of course, IMAGE doesn’t care whether your data is logically correct or not, that’s the job of application programmers.

Using DBBEGIN/DBEND will have no effect whatsoever on logical integrity, unless you actually run DBRECOV to roll forward or roll back the database to a consistent point every time you abort a program or suffer any other failure.

By using DBXBEGIN/DBXEND XM style transactions, you can extend IMAGE’s guarantee of physical integrity to the logical integrity of your database. The system will ensure that no matter what happens, either all changes inside a DBX transaction will be applied, or none of them will be. Of course, it’s still possible to use this feature incorrectly (locking strategies are non-trivial as you need to lock the data that you read as well as that which you intend to write in many cases).

HP introduced a feature, far back in the MPE V days, called Intrinsic-Level Recovery (ILR). ILR can still be enabled for a database. This was sort of a mini-XM that forced updates to disk each time an intrinsic call completed, in order to ensure structural integrity of the database in the face of system failures.

I believe that on MPE/iX, enabling ILR for a database does something really nasty like forcing an XM post after every update intrinsic call, which is a serious performance problem. ILR is no longer required on MPE/iX as XM will ensure integrity without it. With ILR you might be guaranteed that every committed transaction will survive a system abort, whereas without it XM might end up having to roll back the last fraction of a second’s worth of transactions. For almost any application this difference is negligible. Do not turn ILR on!

There are more complexities if your application performs transactions that affect multiple databases or databases and non-database files. It’s possible to do multi-database IMAGE transactions, but only if the databases reside on the same volume set, I believe.

Posted by Ron Seybold at 01:44 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 26, 2018

Command file tests 3000s for holidays

Holiday-Calendar-Pages
Holiday season is coming up. It's already upon us all at the grocery stores, where merchandising managers have cartons of Thanksgiving decorations waiting their turn. The Halloween stuff has to clear away first.

Community contributor Dave Powell has improved upon a command file created by Tracy Pierce to deliver a streamlined way to tell an HP 3000 about upcoming holidays. Datetest tells whether a day is a holiday. "I finally needed something like that," Powell says, "but I wanted the following main changes:

1. Boolean function syntax, so I could say :if holiday() then instead of

:xeq datetest
:if WhichVariableName = DontRememberWhatValue then

and also because I just think user-functions are cool.

2. Much easier to add or disable specific holidays according to site-specific policies or even other countries’ rules. (Then disable Veterans Day, Presidents Day and MLK Day, because my company doesn’t take them.)

3. Make it easy to add special one-off holidays like the day before/after Christmas at the last minute when the company announces them.

Along the way, I also added midnight-protection and partial input date-checking, and made it more readable, at least to me.

Powell, who's contributed plenty of command files to the community through the HP 3000 newsgroup, says that most of the fun came in the day-of-week calculation.

I didn’t understand that part of Tracy’s script, or trust myself to adapt it without messing up, so I found a second method and used both, with a warning if the results didn’t agree. Surprise, surprise, they disagree about 12/25/2100, although they agree on dates I tested within the expected lifespan of MPE. So I shoveled in a third formula and found a day-of-week calculator spreadsheet, both of which agree with the second method. So anyone who uses Tracy’s original command file and plans to still run it in 2100 might need to make a change.

He offered what he called a preliminary version of the new datetest, which has been checked by Allegro's Steve Cooper:


option nolist
parm CCYYMMDD    =   ""
if   bound (HOL_ERRORS)    or    bound (HOL_DAY)
    deletevar   HOL_@
endif
setvar   HOL_ERRORS  0
if   "!CCYYMMDD"     =   ""
    setvar  HOL_CYMD    HPYYYYMMDD
    setvar  HOL_DAY     !HPDAY
    if  HOL_CYMD        <>  HPYYYYMMDD
#            if the date has changed, we just hit midnite and the
#            day-of-week we just set might be the new day; in this
#            case set the date & day-of-week again, and we should
#            be ok (unless the following 2 commands take 24 hours :)
        setvar  HOL_CYMD    HPYYYYMMDD
        setvar  HOL_DAY     !HPDAY
    endif
else
    setvar  HOL_CYMD    "!CCYYMMDD"
    if  not numeric (HOL_CYMD)
        echo **date parm, if entered, must be numeric**
        setvar  HOL_ERRORS  HOL_ERRORS + 1
    endif
    if  len (HOL_CYMD)   <>  8
        echo **date parm must be exactly 8 digits, unless omitted**
        setvar  HOL_ERRORS  HOL_ERRORS + 1
    elseif  numeric (HOL_CYMD)
        if  rht (HOL_CYMD, 2) > "31"
            echo **last 2 digits of date parm can't be more than 31**
            setvar  HOL_ERRORS  HOL_ERRORS + 1
        elseif  rht (HOL_CYMD, 2) = "00"
            echo **last 2 digits of date parm can't be "00"**
            setvar  HOL_ERRORS  HOL_ERRORS + 1
        endif
        if  str (HOL_CYMD, 5, 2) > "12"
            echo **bytes 5 & 6 of date parm can't be more than 12**
            setvar  HOL_ERRORS  HOL_ERRORS + 1
        elseif  str (HOL_CYMD, 5, 2) = "00"
            echo **characters 5 & 6 of date parm can't be "00"**
            setvar  HOL_ERRORS  HOL_ERRORS + 1
        endif
    endif
    if  HOL_ERRORS      >   0
        echo **exiting because the date-parm was not a valid**
        echo **8-digit date in yyyymmdd format **
        return FALSE
    endif
endif

#    -------------------------------------------------------
#    do not casually modify above here
#
#    Take any special / unofficial holidays here
#    OK to replace any dates that are past with the date of a
#    holiday the company just announced (Jewish new year,
#    days before / after Christmas & New Years, etc, etc)

if   HOL_CYMD="20080929"  or  HOL_CYMD="20081008" &
or   HOL_CYMD="20081226"  or  HOL_CYMD="20090102"
    echo It's a special company holiday :)
    return  TRUE
endif

#    do not casually modify below here
#    -------------------------------------------------------

setvar   HOL_YYYY    str (HOL_CYMD, 1, 4)
setvar   HOL_MM      str (HOL_CYMD, 5, 2)
setvar   HOL_DD      str (HOL_CYMD, 7, 2)

#
#    Set day of week, unless already set because processing "today"
#
if   not     bound (HOL_DAY)
#    1st, the method in the original "datetest" command file
    setvar  HOL_DAY str("000031059090120151181212243273304334", &
            !HOL_MM * 3 - 2, 3)
    setvar  HOL_DAY     !HOL_DAY + !HOL_DD
    IF  !HOL_MM > 2    and   ( !HOL_YYYY / 4 * 4 = !HOL_YYYY )
        setvar  HOL_DAY      HOL_DAY + 1
    ENDIF
    setvar  HOL_YWK     !HOL_YYYY - 1
    setvar  HOL_DAY     !HOL_DAY + ( !HOL_YWK / 400 ) * 146097
    setvar  HOL_YWK     !HOL_YWK  mod  400
    setvar  HOL_DAY     !HOL_DAY - ( !HOL_YWK / 100 ) * 36524
    setvar  HOL_YWK     !HOL_YWK mod 100
    setvar  HOL_DAY     !HOL_DAY + ( !HOL_YWK / 4 ) * 1461
    setvar  HOL_YWK     !HOL_YWK mod 4
    setvar  HOL_DAY     !HOL_DAY + ( !HOL_YWK * 365 )
    setvar  HOL_DAY     ( HOL_DAY mod 7 ) + 1
    deletevar HOL_YWK

#    Next, the method posted to the 3000-l by Mike Hornsby 06/04/2004
#    except, add 1 at the end because his was 0-6 and we need
#    1-7.
    setvar  HOL_XYR !HOL_YYYY-((12-!HOL_MM)/10)
    setvar  HOL_XMONTH !HOL_MM+(((12-!HOL_MM)/10)*12)
    setvar  HOL_XDAY !HOL_DD+(!HOL_XMONTH*2)+(((!HOL_XMONTH+1)*6)/10)
    setvar  HOL_XLEAP_YR (HOL_XYR/4) - (HOL_XYR/100) + (HOL_XYR/400)
    setvar  HOL_XDAY (HOL_XDAY+HOL_XYR+HOL_XLEAP_YR+1) mod 7  +  1

#    Next, day-of-week with my adaption of a "Zeller" formula
#    off the internet.
    if  HOL_MM      <   "03"
        setvar  HOL_ZMONTH  !HOL_MM  +  12
        setvar  HOL_ZYEAR   !HOL_YYYY   -   1
    else
        setvar  HOL_ZMONTH  !HOL_MM
        setvar  HOL_ZYEAR   !HOL_YYYY
    endif
    setvar  HOL_ZDAY    ( &
        ((13 * HOL_ZMONTH + 3) / 5)  +  !HOL_DD  +  HOL_ZYEAR &
    +   (HOL_ZYEAR/4) - (HOL_ZYEAR/100) + (HOL_ZYEAR/400) &
    +   1 )     mod 7   +   1

#    Now, see if the day-of-week calcs agree
    if  HOL_DAY     <>  HOL_XDAY &
    or  HOL_DAY     <>  HOL_ZDAY &
    or  HOL_ZDAY    <>  HOL_XDAY
        setvar  HOL_ERRORS  HOL_ERRORS + 1
        echo **day-of-week error**
        echo    HOL_DAY   =   !HOL_DAY
        echo    HOL_XDAY  =   !HOL_XDAY
        echo    HOL_ZDAY  =   !HOL_ZDAY
    endif
    setvar  HOL_DAY     HOL_ZDAY
    deletevar   HOL_X@,  HOL_Z@
ENDIF

#
#    Now check for specific regular holidays, month-by-month.
if   HOL_MM  =   "01"
    if  HOL_DD  =   "01"
        echo It's New Years Day
        return  TRUE
    endif
    if  ( !HOL_DAY=2  and  !HOL_DD>=15  and  !HOL_DD<=21 )
        echo (It's Martin Luther King day - but do we get it?)
#        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "02"
    if  (!HOL_DAY=2  and  !HOL_DD>=15  and  !HOL_DD<=21)
        echo (It's President's Day - but do we get it?)
#        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "05"
    if  (!HOL_DAY=2  and  !HOL_DD>=25  and  !HOL_DD<=31)
        echo It's Memorial Day
        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "07"
    if  HOL_DD  =   "04"
        echo It's July 4th
        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "09"
    if  ( !HOL_DAY=2  and  !HOL_DD>=1  and  !HOL_DD<=7 )
        echo It's Labor Day
        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "11"
    if  HOL_DD  =   "11"
        echo (it's Veterans Day - but do we get it ?)
#        return  TRUE
    endif
    if  ( !HOL_DAY=5  and  !HOL_DD>=22  and  !HOL_DD<=28 )
        echo It's Thanksgiving
        return  TRUE
    endif
    if  ( !HOL_DAY=6  and  !HOL_DD>=23  and  !HOL_DD<=29 )
        echo It's the day after Thanksgiving
        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "12"
    if  HOL_DD  =   "25"
        echo It's Christmas
        return  TRUE
    endif
    return  FALSE
endif

Posted by Ron Seybold at 05:57 AM in Hidden Value, Homesteading | Permalink | Comments (0)

October 19, 2018

Fine-Tune: Get the right time for a battery

CMOS-clock-battery
Two weeks from now the world will manage the loss of an hour, as Daylight Saving Time ends. The HP 3000 does time shifting of its system clock automatically, thanks to patches HP built during 2007. But what about the internal clock of a computer that might be 20 years old? Components fail after a while.

The 3000's internal time is preserved using a small battery, according to the experts out on the 3000 newsgroup. This came to light in a discussion about fixing a clock gone slow. A few MPE/iX commands and a trip to Radio Shack can maintain a 3000's sense of time.

"I thought the internal clock could not be altered," said Paul English. "Our server was powered off for many months, and maybe the CMOS battery went flat." The result was that English's 3000 showed Greenwich Mean Time as being four years off reality. CTIME reported for his server:

* Greenwich Mean Time : THU, JUN 17, 2004, 11:30 AM   *
* GMT/MPE offset      : +-19670:30:00                 *
* MPE System Time     : THU, SEP 10, 2009,  2:00 PM   *

Yup, that's a bad battery, said Pro 3k consultant Mark Ranft. "It is cheap at a specialty battery store," he said, "and can be replaced easily, if you have some hardware skills and a grounding strap." Radio Shack offers the needed battery.

But you can also alter the 3000's clock which tracks GMT, he added.

"The internal clock can be set or reset at bootup (the method varies depending on the hardware), or by using the MPE SETCLOCK date=xx/xx/xx;time;NOW command, in conjuction with SETCLOCK ;CANCEL.  Follow these by the SHOWCLKS command. It usually takes me a couple of attempts to get it, but you should be able to straighten this out without even having to reboot."

A few customers warned that utility software will sometimes fail to start up if a bad battery has pulled the internal clock too far off the system clock. Tracy Johnson explained:

Collateral damage may include some third party software going non-operational. I have at least one software package whose license goes bad when the offset gets too large (think years).  When I fix the offset to a reasonable number (within a day or two), then the software works again.

Posted by Ron Seybold at 08:01 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 12, 2018

Friday Fine-Tune: Speeding up backups

Spinning-wheels
We have a DLT tape drive. Lately it wants to take 6-7 hours to do a backup instead of its usual two or less.  But not every night,  and not on the same night every week.  I have been putting in new tapes now, but it still occurs randomly. I have cleaned it. I can restore from the tapes no problem. It doesn’t appear to be fighting some nightly process for CPU cycles. Any ideas on what gives?

Giles Schipper replies

Something that may be causing extended backup time is excessive IO retries, the result of deteriorating tapes or a deteriorating tape drive.

One way to know is to add the ;STATISTICS option to your STORE command. This will show you the number of IO retries as well as the actual IO rate and actual volume of data output.
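
A minimal sketch of such a store, assuming a drive in the device class TAPE:

:file t;dev=tape
:store @.@.@;*t;show;statistics

A retry count that climbs from one night to the next points at the tapes or the drive.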

Another possibility is that your machine is experiencing other physical problems, resulting in excessive logging activity and abnormal CPU interrupt activity — which depletes your system resources and extends your backup times.

Check out the following files in the following Posix directories:

/var/stm/logs/os/*
/var/stm/logs/sys/*

If they are very large, you indeed may have a hardware problem — one that is not "breaking" your machine, but simply "bending" it.
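
The CI's LISTFILE can read those Posix directories directly, so a quick size check might look like this:

:listfile /var/stm/logs/os/@,2
:listfile /var/stm/logs/sys/@,2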

Posted by Ron Seybold at 07:25 PM in Hidden Value | Permalink | Comments (0)

September 21, 2018

Fine Tune: Storing in Parallel and to Tapes

Does the MPE/iX Store-to-Disc option allow for a ‘parallel store,’ analogous to a parallel store to tape? For example, when a parallel store to tape is performed, the store writes to two or more tape drives at the same time. Is there a parallel store-to-disc option that allows for the store to write to two or more disc files at the same time (as opposed to running multiple store-to-disc jobs)?

Gavin Scott and Joe Taylor reply

Yes, the same syntax for parallel stores works for disk files as well as tape files. I really don’t know if you would get any benefit from this, but if you went to the trouble of building your STD files on specific disks, then it might be worthwhile.
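
A hedged sketch of what that could look like, assuming the comma-separated device list from parallel tape stores carries over to disc file equations (file and account names here are illustrative):

:file std1=stdisc1;dev=disc
:file std2=stdisc2;dev=disc
:store @.@.myacct;*std1,*std2;show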

What is the recommended life or max usage of DLT tapes?

Half a million passes is the commonly used number for DLT III. One thing to remember: when they talk about the number of passes (500,000), that does not mean the number of tape mounts.

For SuperDLT tapes, the tape is divided into 448 physical tracks of 8 channels each giving 56 logical tracks. This means that when you write a SuperDLT tape completely you will have just completed 56 passes. If you read the tape completely, you will have done another 56 passes.

The DLTIV tapes (DLT7000/8000) have a smaller number of physical and logical tracks, but the principle is the same. The number of passes for DLTIIIXT and DLT IV tapes is 1,000,000. The shelf life is 30 years for the DLT III XT and DLT IV tapes and 20 for the DLT III.

Our DDS drive gets cleaned regularly. Our tapes in rotation are fairly old, too. However, we are receiving this error even when we use brand new tapes. 

STORE ENCOUNTERED MEDIA WRITE ERROR ON LDEV 7 (S/R 1454)

The new tapes are Fuji media, not HP like our old ones.

John Burke replies:

Replace that drive. DDS drives are notorious for failing. Also, the drive cannot tell whether or not you are using branded tapes. I’ve used Fuji DDS tapes and have found them to be just as good as HP-branded tapes (note that HP did not actually manufacture the tapes). I have also gotten into the habit of replacing DDS tapes after about 25 uses. When compared to the value of a backup, this is a small expense to pay.

Posted by Ron Seybold at 07:52 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 14, 2018

Use Command Interpreter to program fast

NewsWire Classic

By Ken Robertson

An overworked, understaffed data processing department is all too common in today’s ever belt-tightening, down-sizing and de-staffing companies.

Running-shoes
An ad-hoc request may come to the harried data processing manager. She may throw her hands up in despair and say, “It can’t be done. Not within the time frame that you need it in.” Of course, every computer-literate person knows deep down in his heart that every programming request can be fulfilled, if the programmer has enough hours to code, debug, test, document and implement the new program. The informed DP manager knows that programming the Command Interpreter (CI) can sometimes reduce that time, changing the “impossible deadline” into something more achievable.

Getting Data Into and Out of Files

So you want to keep some data around for a while? Use a file! Well, you knew that already, I’ll bet. What you probably didn’t know is that you can get data into and out of files fairly easily, using IO re-direction and the print command. IO re-direction allows input or output to be directed to a file instead of to your terminal. IO re-direction uses the symbols ">", ">>" and "<". Use ">" to re-direct output to a temporary file. (You can make the file permanent if you use a file command.) Use ">>" to append output to the file. Finally, use "<" to re-direct input from a file:

echo Value 96 > myfile
echo This is the second line >> myfile
input my_var < myfile
setvar mynum_var str("!my_var",7,2)
setvar mynum_var_2 !mynum_var - (6 * 9 )
echo The answer to the meaning of life, the universe
echo and everything is !mynum_var_2.

After executing the above command file, the file Myfile will contain two lines, “Value 96” and “This is the second line.” (Without quotes, of course.) The Input command uses IO re-direction to read the first record of the file, and assigns the value to the variable my_var. The first Setvar extracts the number from the middle of the string, and the next line proceeds to use the value in an important calculation.

How can you assign the data in the second and consequent lines of a file to variables? You use the Print command to select the record that you want from the file, sending the output to a new file:

print myfile;start=2;end=2 > myfile2

You can then use the Input command to extract the string from the second file.
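
Putting the two steps together, pulling the second line of Myfile into a variable looks like this:

print myfile;start=2;end=2 > myfile2
input my_var2 < myfile2
echo The second record is: !my_var2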

Rolling Your Own System Variables

It’s easy enough to create a static file of Setvar commands that gets invoked at logon time, and it’s not difficult to modify the file programmatically. For example, let’s say that you would like to remember a particular variable from session to session, such as the name of your favorite printer. You can name the file that contains the Setvars, Mygvars. It will contain the line: setvar my_printer “biglaser”

The value of this variable may change during your session, but you may want to keep it for the next time that you log on. To do this, you must replace your normal logoff procedure (the Bye or Exit command) with a command file that saves the variable in a file, and then logs you off.

byebye
purge mygvars > $null
file mygvars;save
echo setvar my_printer "!my_printer" > *mygvars
bye

Whenever you type byebye, the setvar command is written to Mygvars and you are then logged off. The default close disposition of an IO re-direction file is TEMP, which is why you have to specify a file equation. Because you are never certain that this file exists beforehand, doing a Purge ensures that it does not.
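
One way to make Mygvars fire at logon time is a logon UDC, cataloged with SETCATALOG. A sketch, where the FINFO test guards against the file not existing on a first-time logon:

MYLOGON
option logon
if finfo("MYGVARS", "EXISTS")
   xeq mygvars
endif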

Posted by Ron Seybold at 07:14 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 07, 2018

Queue up those 3000 jobs with MPE tools

NewsWire Classic

By Shawn Gordon

A powerful feature of MPE is the concept of user-defined job queues. You can use these JOBQ commands to exert granular job control that is tightly coupled with MPE/iX. HP first introduced the commands in the 6.0 release.

For example, you only want one datacomm job to log on at a time, but there are 100 that need to run. At the same time you need to let users run their reports, and you want to allow only two compile jobs to run at a time. Normally you would set your job limit down to 1, then manually shuffle job priorities around and let jobs go. In the new multiple-job-queue environment, you can define a DATACOMM job queue whose limit is 1, an ENDUSER job queue whose limit is 6 (for example), and a COMPILE job queue whose limit is 2. You could also set a total job limit of 20 to accommodate the other jobs that may need to run.

Three commands accommodate the job queue feature:

NEWJOBQ qname [;limit=n]
PURGEJOBQ qname
LISTJOBQ

The commands LIMIT, ALTJOB, JOB and STREAM all include the parameter ;JOBQ=.

As an example, I am going to create a new job queue called SHOWTIME that has a job limit of 1. You will notice the job card of the sample job has a JOBQ parameter at the end to specify what queue it is to execute in.

Alternatively I could have said STREAM SHOWTIME.JCL;JOBQ=SHOWTIME to put it into my job queue. Here’s the coding to do this:

NEWJOBQ SHOWTIME;LIMIT=1

!JOB SHOWTIME,MANAGER.SYS,PUB;JOBQ=SHOWTIME
!
!SETVAR HPAUTOCONT TRUE
!
!SHOWTIME
!
!SHOWCLOCK
!
!SHOWME
!
!SHOWVAR HPDATEF
!SHOWVAR HPTIMEF
!
!ECHO !HPDATEF
!ECHO !HPTIMEF
!
!PAUSE 300
!
!EOJ

I just streamed five copies of the job, and using the LISTJOBQ command I am able to see the default system defined job queue HPSYSJQ. I haven’t been able to find out why it indicates a limit of 3500, since my current job limit was 30. [Editor’s Note: Gavin Scott reports that “All job queues have a LIMIT that is separate from the one true system LIMIT. This includes the default HPSYSJQ. The 3500 default is a number large enough that you should never run into the case where the existence of this second, un-obvious, limit on normal jobs affects you.”]

You can see my SHOWTIME job queue with a limit of 1, with one executing and five total jobs, so four are currently in a wait state. This is obvious in the SHOWJOB command below.

listjobq

JOBQ      LIMIT     EXEC  TOTAL

HPSYSJQ   3500      12    12
SHOWTIME  1         1     5

SHOWJOB JOB=@J

JOBNUM  STATE IPRI JIN  JLIST    INTRODUCED  JOB NAME

#J2     EXEC        10S LP       TUE  7:09A  NP92JOB,MGR.MINISOFT
#J3     EXEC        10R LP       TUE  7:09A  BACKG,MANAGER.VESOFT
#J4     EXEC        10S LP       TUE  7:09A  WTRSH,MGR.WTRSH
#J5     EXEC        10S LP       TUE  7:09A  MSJOB,MGR.MINISOFT
#J6     EXEC        10S LP       TUE  7:09A  MASTEROP,MANAGER.SYS
#J7     EXEC        10S LP       TUE  7:09A  VCSSERV,MGR.DIAMOND
#J8     EXEC        10S LP       TUE  7:09A  VCSCHED,MGR.DIAMOND
#J9     EXEC        10S LP       TUE  7:09A  JINETD,MANAGER.SYS
#J10    EXEC        10S LP       TUE  7:09A  JWHSERVR,MANAGER.SYS
#J12    EXEC        10S LP       TUE  7:25A  GUI3000J,MANAGER.SYS
#J19    EXEC        10S LP       TUE  8:08A  BROLMSGJ,JOBS.REVIEW
#J130   EXEC        10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J131   WAIT:1   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J132   WAIT:2   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J133   WAIT:3   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J134   WAIT:4   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS

16 JOBS (DISPLAYED):
   0 INTRO
                4 WAIT; INCL 0 DEFERRED
                12 EXEC; INCL 0 SESSIONS
                0 SUSP
JOBFENCE= 6; JLIMIT= 30; SLIMIT= 60

Now if I want to increase the job limit for my SHOWTIME job queue, or move a waiting job into another queue, I can use the following commands

limit +1;jobq=showtime
altjob #j131;jobq=hpsysjq

You will probably notice that there are a number of nice enhancements to ALTJOB and LIMIT in support of the job queues, having uses outside of the job queues. For example, LIMIT now allows you to use a plus or minus value to increase or decrease the number, so you don’t have to use an absolute value. It is common to up the limit by one to allow another job to execute, but previously you had to check the current job limit, change it, then change it back. At least now you can just do +1 to let the job launch.

On the ALTJOB command, you can now specify HIPRI to cause a job to start up immediately, without having to play with limits to let it go. You can also alter the output device of the job. I did find during my tests that altering a job into the system default HPSYSJQ didn't seem to let the job release, even when slots were open. However, if you sent it to a user-defined job queue that had room left in it for another job to execute, then it would launch immediately.

There is another side benefit of job queues: ensuring that no more than one copy of a job logs on at a time. For example, if you have some background job running and you cannot have a second copy running, but there is nothing that prevents it, you could create a job queue for it with a limit of 1 that would keep any extra copies from launching, as the sketch below shows.
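
Here's that guard, using a hypothetical background job BACKG.JCL:

:newjobq backgq;limit=1
:stream backg.jcl;jobq=backgq

Stream a second copy the same way, and LISTJOBQ shows it waiting in BACKGQ instead of logging on alongside the first.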

This is just one example of an extended use of the feature. If you try to purge a job queue that is currently in use, you will receive this message:

Cannot purge job queue as there are jobs
running/waiting in that queue. (CIERR 12251)

If you try to stream a job into a queue that does not exist you will receive the message

JOBQ parameter expected. (CIERR 12255)
Spooler internal error occurred. (CIERR 4522)

The job will be streamed regardless — however, it goes into a WAIT state and won't start executing, because there is no queue for it to execute in. At this point you can't abort it, you can't create the queue it was intended for and have it work, and you can't alter it into the system job queue, because of the problem described earlier. Finally, you can try to create a new queue and alter the job into it. LISTJOBQ will show it as a job for that queue, but it will never start executing. The only way to get rid of the job is to shut down the system and do a START NORECOVERY.

Posted by Ron Seybold at 06:49 PM in Hidden Value, Homesteading | Permalink | Comments (1)

August 31, 2018

SFTP and the points where transfers may fail

RFC-transfer-card-cover
Earlier in August a 3000 manager who relies on the Stromasys virtualized 3000 was searching for failures. Well, he was asking about the causes of failures. He wanted to know more about failures of SFTP transfers on his MPE/iX system. (We'd call it a 3000, but there's no more HP iron at Ray Legault's shop.) He gave the rundown on the problems with MPE/iX.

We send about 40 files each day, most of these in the early morning. Sometimes we would have zero to five connection failures each morning. I noticed that these failures seemed to occur when two SFTP jobs ran at the same minute. I then added a "JOBQ=FINLOG" to the job card of every SFTP job I had and set the job limit to 1. That was two weeks ago, and we have not had a failure yet.

Brian Edminster, who still hosts open source software for MPE/iX, checked in to offer an answer to why those SFTP jobs were failing.

I'd be willing to bet that Ray's issue at Boeing with SFTP connect failures is due to the Entropy Generator running dry. Connections take lots of entropy data — and the one that comes 'out of the box' with the SFTP client doesn't generate very much without some modifications.

If you need to make more than one connection a minute (job limit 1), this modification will likely become necessary. Let me know if you'd like some pointers on how to do this. It will require some revisions to the SFTP software. The Entropy Gathering Daemon which Mark Klein's SFTP port uses is written in Perl. It is not terribly difficult to modify to include new data sources to "stir into the pool" that is drawn from by the SFTP client.

Edminster's MPE-Opensource.org website has an SFTP quickstart bundle of all packages required to install OpenSSH on MPE/iX, including SFTP, scp, and keygen.

Posted by Ron Seybold at 08:28 PM in Hidden Value, Homesteading | Permalink | Comments (0)

August 24, 2018

Kept Promises for Open Source on MPE/iX

Opensource
Open source software developed a reputation for keeping HP 3000s online and productive, even in the face of industry requirement changes and new government regulations. Applied Technologies founder Brian Edminster has shared reports of a 3000 installation processing Point of Sale transactions, a customer that faced new PCI compliance demands. He was tasked with finding a solution to the new credit card compliance rules late in one December — with a January deadline.

“What we were struggling with was not that uncommon,” he explained. “The solution of choice was a version of the package OpenSSH, an open source implementation of a secure shell.”

OpenSSH offers public-key authentication, encrypted communication for secure file transfers, a secure shell command line, and port forwarding. “It’s amazing how much you get," Edminster said, "and it’s available for many operating systems.” He's got a website devoted to the open source tools for the 3000.

At first, none of those operating system implementations included MPE/iX. OpenSSH requires a shell for the MPE/iX version; it doesn’t run at the MPE command line. But it’s been ported using OpenSSL for the HP 3000 and Perl/iX, both available from Edminster's MPE open source website.  Perl, another open source tool, “was designed for portability across platforms, and it works nicely,” he said.

OpenSSH protects from “man in the middle” security attacks by using DNS resolution, another open source utility wired into MPE/iX. Edminster recommended “the definitive guide to OpenSSH, commonly known as ‘the snail book’ from O’Reilly Press, Second Edition.”

That 3000 site where Edminster was working on POS security requirements had enabled DNS resolution across its enterprise — so Edminster was able to use a handy MPE/iX script called DNSCHECK. It’s a beautiful piece of scripting that checks, step by step, all the things necessary for name resolution to work on an HP 3000.

OpenSSH uses cryptographic software to pad out blocks of data which are being transferred. The HP 3000’s random number generation routines are “not so good” for this, Edminster explained. Random number routines must have a much longer cycle length of repeats than MPE/iX provides. MPE has no random number generation built into its kernel, unlike other operating environments.

The solution is “the Entropy Gathering Daemon, which is already packaged up by Ken Hirsh with his port of OpenSSH,” Edminster reported.

Posted by Ron Seybold at 02:14 PM in Hidden Value, Homesteading, Web Resources | Permalink | Comments (0)

August 20, 2018

Following Job Lines in Emulated 3000 Life

The Stromasys Charon software is a fact of life in the homesteading community by this year, after almost six years of field service. Lately the emulator users have been offering insights on how they're using their servers.

In some ways, it's a lot like the way any HP 3000 has been used for the last 44 years. Transferring files. Queueing up jobs. A few of the emulator users shared their advisories not long ago.

Ray Legault at Boeing talked about his experiences with file transfers, especially an SFTP client and the SFTP "Connection refused" errors. As the Charon developers like to say, if the MPE/iX software behaves the same on the emulator as it does on 3000 hardware, even if MPE registers an error, then Charon is doing its faithful emulation job.

"We are running on a Stromasys Charon A500-200 and a A500-100 virtual machine which executes on a HP ProLiant DL 380 Gen8 3.59 GHZ CPU, with 6 cores and 64 gig of memory," Legault said.

We send about 40 files each day, most of these in the early morning. Sometimes we would have zero to five connection failures each morning. I noticed that these failures seemed to occur when two SFTP jobs ran at the same minute. I then added a "JOBQ=FINLOG" to the job card of every SFTP job I had and set the job limit to 1. This was two weeks ago and we have not had a failure yet.
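
Spelled out, the fix takes two pieces: a queue with a limit of 1, and a job card that routes every SFTP job into it. A minimal sketch (FINLOG is the queue Legault names; the job and logon names here are hypothetical):

:NEWJOBQ FINLOG;LIMIT=1

!JOB SFTPSEND,MGR.FINANCE;JOBQ=FINLOG
!...
!EOJ

With the queue's limit at 1, the jobs serialize, so no two transfers start in the same minute.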

Another emulator user, Tony Summers of Smith & Williamson in the UK, shared queueing advice and a massive job checker (HOWMANY) that's working well for him.

"Even though we've migrated to an MPE emulator," Summers said, "we use job queues all the time so that jobs that need to run 24/7 don't bed-block the system job queue."
The alternative we've also used is to create a UDC or command file that limits the number of instances of any job - example below (which partly uses a link to the POSIX/Unix shell via the SH command)

If you are looking at the sFTP failures, have you checked that the FTP server is configured to allow multiple connections?

USER DEFINED COMMAND FILE: HOWMANY.CMD
parm OK_NUMBER_of_JOBS=""

# HOWMANY
#
# HOWMANY is a command file to determine how many Jobs or Sessions are
# running with the same logon attributes as the calling Job or Session.
#
# An optional parameter can be passed to set the allowed number of running
# Jobs or Sessions with the same logon attributes.
#
# Example 1: Passing the number of 'allowed' Jobs / Sessions.
#
#    :HOWMANY 1
#
# If a job logs on and issues the HOWMANY command above, HOWMANY will check
# how many Jobs are running with the same logon attributes. The '1' tells
# HOWMANY that only '1' job should be running. Therefore, if HOWMANY
# determines that more than '1' is running, it will cause the calling Job
# to log off.
#
# Example 2:
#
#    :HOWMANY
#
# On its own, HOWMANY will return a variable to the calling Job or Session
# called HOWMANY_THIS_USER that will be set to the number of EXECuting
# Jobs or Sessions with the same logon attributes.
#
# Example 3:
#
# If you want to see how many Jobs or Sessions are running for another
# User Id (not the calling Job or Session), then you can pass this as a
# parameter...
#
#    :HOWMANY T990
# Or
#    :HOWMANY "T990,ALL.SWIMS"    <--Quotes required
#
# You cannot log another Job or Session off with this command

setvar HOWMANY_USER  "!HPJOBNAME,!HPUSER.!HPACCOUNT"
setvar HOWMANY_USER2 "!HPJOBNAME,!HPUSER.!HPACCOUNT"

setvar HOWMANY_INPUT "!OK_NUMBER_of_JOBS"

if HOWMANY_INPUT <> "" then
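  # a leading letter means a Jobname/User spec was passed; a number sets the allowed count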
  if HOWMANY_INPUT > "A" then
     setvar HOWMANY_USER ups("!HOWMANY_INPUT")
     if pos(",","!HOWMANY_USER") = 0 then
        setvar HOWMANY_USER2 "!HOWMANY_USER,@[email protected]"
     else
        setvar HOWMANY_USER2 "!HOWMANY_USER"
     endif
     setvar HOWMANY_ALLOWED 9999
  else
     setvar HOWMANY_ALLOWED !HOWMANY_INPUT
  endif
else
  # Set to unlimited
  setvar HOWMANY_ALLOWED 9999
endif

continue
purge HWMN@,temp;noconfirm >$null
build HWMNFILE;REC=-1,,B,ASCII;temp
file  HWMNFILE,oldtemp;dev=disc

showjob job=!HOWMANY_USER2;EXEC >HWMNFILE
#  next line links to the unix / posix shell
SH grep '!HOWMANY_USER' $MPE/_$MPE_JOBNUM/tmp/HWMNFILE.!HPGROUP.!HPACCOUNT >HWMNFLE2

setvar HOWMANY_THIS_USER ![finfo("HWMNFLE2","EOF")]

if HPJOBTYPE = "S" then
  if HOWMANY_THIS_USER = 1 then
     setvar HM_IS_ARE "is"
     setvar HM_TYPE "SESSION"
  else
     setvar HM_IS_ARE "are"
     setvar HM_TYPE "SESSIONS"
  endif
else
  if HOWMANY_THIS_USER = 1 then
     setvar HM_IS_ARE "is"
     setvar HM_TYPE "JOB"
  else
     setvar HM_IS_ARE "are"
     setvar HM_TYPE "JOBS"
  endif
endif

echo There !HM_IS_ARE !HOWMANY_THIS_USER !HM_TYPE running for UserId: !HOWMANY_USER2

if OK_NUMBER_of_JOBS <> "" then
  if HOWMANY_THIS_USER > HOWMANY_ALLOWED then
     echo *****************************************************************
     echo Too many !HM_TYPE with this User Id running. Will now log you off
     echo *****************************************************************
     tellop HOWMANY is logging !HOWMANY_THIS_USER off
     if HPJOBTYPE = "S" then
        echo Will now log your Session off
        EOJ
     else
        echo This Job will now log off
        UDCEOJ
        EOJ
     endif
  endif
endif

purge HWMN@,temp;noconfirm > $null
deletevar HM_TYPE, HM_IS_ARE, HOWMANY_ALLOWED
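
A job that should run as a single instance calls the command file right after logon, per Example 1 in the header comments. A hypothetical job stream:

!JOB NIGHTLY,MGR.FINANCE
!HOWMANY 1
!RUN NIGHTAPP.PUB.FINANCE
!EOJ

If a second copy of NIGHTLY is already executing, HOWMANY logs the newcomer off before the application ever runs.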

Posted by Ron Seybold at 08:17 PM in Hidden Value, Homesteading | Permalink | Comments (1)

August 17, 2018

Nike Arrays 101

Just a few weeks ago, a 3000 manager using an A-Class server checked in on how he might connect the SC-10 arrays from Hewlett-Packard to his A500. As a West Coast service provider carried the manager toward that hardware (it can be done), it seems like a good time to review the use of storage arrays with MPE/iX systems.

Our founding net.digest editor John Burke covered this ground in the years after HP announced it was cutting off its 3000 operations. While the HP label is still anathema to some, the hardware prices are sometimes too compelling. Here's Nike Arrays 101, advice still worthy on the day you're moving around arrays connected to a 3000.

By John Burke
Newswire Classic

Many 3000 homesteaders are picking up used HP Nike Model 20 disk arrays. The interest comes from the fact that there is a glut of these devices on the market — meaning they are inexpensive — and they work with older models of HP 3000s. However, there is a lot of misinformation floating around about how and when to use them. For example, one company posted the following to 3000-L:

We’re upgrading from a Model 10 to a Model 20 Nike array. I’m in the middle of deciding whether to keep it in hardware RAID configuration or to switch to MPE/iX mirroring, since I can now do it on the system volume set. It wasn’t in place when the system was first bought, so we stayed with the Nike hardware RAID. We’re considering the performance issue of keeping it Nike hardware RAID versus the safety of MPE Mirroring. You can use the 2nd Fast-Wide card on the array when using MPE mirroring, but you can’t when using Model 20 hardware RAID.

So, with hardware RAID, you have to consider the single point of failure of the controller card. If we ‘split the bus’ on the array mechanism into two separate groups of drives, and then connect a separate controller to the other half of the bus, you can’t have the hardware mirrored drive on the other controller. It must be on the same path as the ‘master’ drive because MPE sees them as a single device.

Using software mirroring you can do this because both drives are independently configured in MPE. Software mirroring adds overhead to the CPU, but it’s a tradeoff you have to decide to make. We are evaluating the options, looking for the best (in our situation) combination of efficiency, performance, fault tolerance and cost.

First of all, as a number of people pointed out, Mirrored Disk/iX does not support mirroring of the System Volume Set – never did and never will. Secondly, you most certainly can use a second FWSCSI card with a Model 20 attached to an HP 3000.

Bob J. elaborated on the second controller. 

All of the drives are accessible from either controller but of course via different addresses. Your installer should set the DEFAULT ownership of drives to each controller. To improve throughput, each controller should share the load. Only one controller is necessary to address all of the drives, but where MPE falls short is not having a mechanism for auto failover of a failing controller.

In other words, sysgen reconfiguration would be necessary to run on a single controller after SP failure in a dual SP configuration. You could have alternate configurations stored on your system to cover both cases of a single failing controller but the best solution is to get it fixed when it breaks. The best news is that SP failures are not very common.

There is a mechanism in MPE for ‘failover’ called HAFO, or High Availability FailOver. Unfortunately for the original poster, it is only supported with XP and VA arrays, not on Nikes or AutoRAIDs (because it does not work with those).

Andrew Popay provided some personal experience.

We have seven Nike SP20 arrays, totaling 140 discs spread across all the arrays, using a combination of RAID 1 (for performance) and RAID 5 (for capacity). We use both SPs on all arrays, with six arrays used over three systems (two per system). One of our systems has two arrays daisy-chained. The only failures we have suffered on any of the arrays have been due to a disc mechanism failing.

We never find any issues with the hardware RAID; in fact, as a lot of people have mentioned, hardware RAID is much preferred over software RAID. Software RAID has several issues (system volume, performance, ease of use); hardware RAID is far more resilient.

As for anyone concerned about single points of failure, I would not worry too much about the Nike arrays; I would say they are almost bullet-proof. Those who require a 24x7 system and can't afford any downtime whatsoever should perhaps consider upgrading to an N-Class, with a VA or XP. Bottom line: SP20s are sound arrays on the HP 3000s, easy to configure, set up and maintain.

Posted by Ron Seybold at 11:39 PM in Hidden Value, Homesteading | Permalink | Comments (0)

August 10, 2018

HPCALENDAR joins 3000 intrinsics hits

Newswire Classic

Twenty years ago HP took steps forward, into the realm beyond 2028, when it released a set of COBOL-related MPE/iX intrinsics. The community is now looking into the next decade and seeing a possibility of hurdling the Dec. 31, 2027 date handling roadblock. In this Inside COBOL column from the late 1990s, Shawn Gordon took readers on a quick tour of the new intrinsics — new to 1998, at least — that would make the 3000 easier to program for the future. He even wrote a sample program employing the improved date handling.

In 2018 the information might seem more history lesson than operational instruction guide. But when a long-running mission critical app needs repairs, knowing the full set of date capabilities might help. Gordon even mentions that using the official intrinsics will help maintain programs written 20 years earlier. Enough time has passed by now that any new programs at the time of the article would be 20 years old.

3000 managers have always had a sharp focus on coding for long life of applications. 

By Shawn Gordon

Since Year 2000 is rapidly approaching, I'll review the date intrinsics that HP gave us in MPE/iX 5.5 starting with PowerPatch 4.

As I've done a lot of Y2K consulting, it seems everyone has written their own date routines. Most I have seen will break by Y2K. My goal in my consulting was to implement an HP-supplied solution, making it easier to support YYMMDD as well as YYYYMMDD date functions during the conversion process.

My only negative comment about these intrinsics is that I wish they had been created with the introduction of the Spectrum series of HP 3000s (PA-RISC systems). I could have used them then, too.

Six new intrinsics are available. All of the parameters for all new intrinsics are now 32-bit. This means they will work for as long as anyone reading this will ever care. I feel it’s important to standardize on these new HP-supplied intrinsics. They will make it a lot safer than trying to maintain some piece of code that was probably written 20 years ago. With code that old, it’s likely that nobody remembers how it works.

Here’s the lineup of intrinsics:

1. HPDATECONVERT: converts dates from one supported format to another 
2. HPDATEFORMAT: converts a date into a display type (I usually use this instead of HPDATECONVERT)
3. HPDATEDIFF: returns the number of days between two given dates
4. HPDATEOFFSET: returns a date that is plus or minus the number of days from the source date
5. HPDATEVALIDATE: verifies that the date conforms to a supported date format
6. A new 32-bit HPCALENDAR format (HPCALENDAR, HPFMTCALENDAR).

There are a couple of things you should be sure to do correctly when using these intrinsics. HP’s documentation shows that some parameters on some intrinsics need to be passed by value (see my examples in Figure 1 with DATE-CODE, DAYS-DIFF and DATE-CUTOFF). You do this by putting the \ backward slashes around the variable name.

The other thing that can be confusing is the DATE-CUTOFF. This defines a “split” year. If anything is below this value, it will be translated to the next century. In other words, if the value of DATE-CUTOFF is 50, and you are using a 2 digit year of 00..49, then it will be resolved as 2000..2049, and those in the range of 50..99 will be 1950..1999.

If you use a value of -1, then the intrinsic will pick up the value of the predefined system variable HPSPLITYEAR. This method lets you control the value outside of your program, so I use a DATE-CUTOFF of -1 to stay modular.

The other thing to note is DATE-CODE, which indicates the style of the date that you are working with. I am using 15 because it works with both YYMMDD and YYYYMMDD format.

I’m including some code examples below for the variable declarations, as well as results of running the program MYDATE which uses the functions. 

01 DATE-CODE            PIC S9(9)  COMP VALUE 15.
01 DATE-RESULT          PIC S9(9)  COMP VALUE 0.
01 DATE-STATUS.
  03 S-INFO            PIC S9(4)  COMP VALUE 0.
  03 S-SUBSYS          PIC S9(4)  COMP VALUE 0.
01 DATE-CUTOFF          PIC S9(9)  COMP VALUE -1.
01 FORMAT-LEN           PIC S9(9)  COMP VALUE 20.
01 FROM-DATE            PIC 9(8)   COMP.
01 THRU-DATE            PIC 9(8)   COMP.
01 DAYS-DIFF            PIC S9(9)  COMP.
01 DUMMY-VAL            PIC S9(9)  COMP.
01 HOLD-FORMAT          PIC X(20).
01 FORMAT-DATE          PIC X(20).
01 FORMAT-TYPE          PIC X(20).

   CALL INTRINSIC "HPDATEFORMAT" USING \DATE-CODE\,
                                       FROM-DATE,
                                       HOLD-FORMAT,
                                       FORMAT-DATE,
                                       FORMAT-LEN,
                                       DATE-STATUS,
                                       \DATE-CUTOFF\.
      IF S-INFO <> 0
         DISPLAY "Error in HPDATEFORMAT".

      CALL INTRINSIC "HPDATEDIFF" USING \DATE-CODE\,
                                        FROM-DATE,
                                        THRU-DATE,
                                        DUMMY-VAL,
                                        DATE-STATUS,
                                        \DATE-CUTOFF\.
      IF S-INFO <> 0
        DISPLAY "Error in HPDATEDIFF".

       CALL INTRINSIC "HPDATEOFFSET" USING \DATE-CODE\,
                                            FROM-DATE,
                                           \DAYS-DIFF\,
                                           THRU-DATE,
                                           DATE-STATUS,
                                           \DATE-CUTOFF\.
      IF S-INFO <> 0
        DISPLAY "Error in HPDATEOFFSET".

        CALL INTRINSIC "HPDATEVALIDATE" USING \DATE-CODE\,
                                              FROM-DATE,
                                              \DATE-CUTOFF\
                                       GIVING DATE-RESULT.
      IF S-INFO <> 0
        DISPLAY "Error in HPDATEVALIDATE".


RUN MYDATE

Enter date in YYMMDD or YYYYMMDD format: 19980317
Enter date format string: MM/DD/YY
Formatted date is 03/17/98
Julian date is 01998076

Enter From date: 19980101
Enter Thru date: 19980501
Number of days = +000000120

Enter start date: 19980801
Enter day offset: -31
New date is 19980701

Posted by Ron Seybold at 12:35 PM in Hidden Value, Homesteading | Permalink | Comments (0)

July 27, 2018

Worst Practices: Shouldn't Happen to a Dog

By Scott Hirsh

There is a saying in Washington about Washington: “If you want a friend, get a dog.” Ha! We system managers should be so lucky. We can’t even be our own best friend.

It’s sad but true: we system managers won’t cut ourselves any slack. We repeatedly put ourselves in jeopardy, often making the same mistakes time after time. We even break all the rules we impose on others. Don’t believe me? See if you recognize any of these examples.

1. Hand crafted system management

Ah yes, the good old days. Peace, love and tear gas (I never inhaled). But here’s a news flash, sunshine: for system managers, the ’60s are dead. Predictable, repeatable tasks can and should be automated. If you can script it, you can schedule it. And if you can schedule it, you can automate it. So what are you waiting for? Do you like (take your pick): streaming jobs by hand; adjusting fences and priorities by hand; reading $STDLISTs; staring at the console waiting for that one important message? For this you went to college?
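
On MPE, the scheduling half of that advice is built into the CI's STREAM command. A sketch with a hypothetical job file; having the job restream itself is one common MPE pattern (we're assuming a time already past today schedules the job for tomorrow):

# launch the nightly job so it executes at 2:00 AM
:STREAM BACKUPJ.JOB.SYS;AT=02:00

# inside BACKUPJ, the final line schedules the next night's run
!STREAM BACKUPJ.JOB.SYS;AT=02:00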

And yet, we (or our management) come up with lots of lame excuses for running a stone-age operation. Can’t afford the automation products, don’t trust automation, can’t trap every error, blah blah blah. Those excuses may fly when you’re small, but suddenly you have more systems, bigger systems and manual management turns your shop into burn-out central. Now there’s turnover costs, downtime costs, opportunity costs.

Oh, and by the way, it’s much more expensive to implement automated management in a large, busy environment than it is to grow automated management from a smaller environment. Perhaps some of us are just adrenaline junkies, or we fear not being needed. Get over it and automate already.

2. The disappearing act

A close personal friend of mine — okay, it was me — once made a change to Security/3000’s SECURCON file, then left for an all-day meeting about 40 miles away. Guess what? None of the application users could log on after my change. Way back then, my pager almost vibrated off my belt from that one. And it made for some interesting meetings when I got back.

I have seen lots of cases where a system manager made a configuration change, installed a patch, or fussed with SYSSTART or UDCs, then immediately went home. Big mistake. If you’re lucky, you live near your data center and can zip right back to repair the carnage that was discovered right away. If you’re not lucky, first you don’t discover your mistake until the worst possible moment — say, around the heaviest usage period the next day — and then you’re forced to take the system down to fix the problem. Ouch.

3. A lack of planning on my part does constitute an emergency on your part

A variation on No. 1. We are the eternal optimists. No matter how invasive the procedure, everything will work out perfectly, right? How many PowerPatches must we install before we realize we must leave adequate time for testing the patched system and perhaps back that sucka out? No really, this time HP (or your favorite vendor) has learned from past mistakes and has a bullet-proof update. No need to leave a cushion for collateral damage. Right.

Every decent system administration book offers the same advice: Don’t do anything you can’t undo. Make a backup copy of whatever you’re changing. Keep track of the steps you followed. Be prepared to back out whatever you’re doing. Because that contingency time can inflate your update schedule by hours, it’s unlikely you can safely make a system change at any time other than weekends or holidays. 

4. I’ve got a secret

You make changes but don’t tell anyone about them. Let’s be charitable and say your changes worked as planned. Unfortunately, nobody knew you were going to make the change. I have seen a change as innocuous as modifying the system prompt have unintended consequences (Reflection scripts looked for the old prompt and now wouldn’t work). The term “system” implies interrelationships. Anything we do has a ripple effect. When we don’t tell others that we’re about to make a change — “they wouldn’t let me do it if I told them!” — we don’t do ourselves any favors. I would love to hear other war stories under this category (hint, hint).

5. Trust no one

This probably explains all the peripherals you’ve bought that don’t work with your HP 3000. But isn’t the HP 3000 the most open system in the universe? A disk drive is a disk drive, right? The vendor told me the printer would work (and it costs much less than that HP printer). We do love our work, don’t we? And we do get excited by all the possibilities of the technology.

But sometimes — most times? — when the opportunity looks too good to be true, it is. And what a hassle it is when we’re stuck with a device, bought and paid for, that we must get to work with our system. Now. Because we’re out of space. Because the CFO doesn’t like spending $25K for a big paperweight.

Another aspect of this issue arises with replacement parts. No names please, but I have seen systems with non-certified disk drives. Sure they work — until there’s a power failure. The customer didn’t know they had this exposure because their maintenance company didn’t think it was worth mentioning. Do your homework, and watch out for little green men with maintenance kits.

And last, but not least, is taking “expert” information at face value. My first experience on the HP rack (running a Series 70) was with an HP SE who told me how to shortcut an OS update. Sounded good, I could use the extra time because I was updating on a Wednesday night (see No. 3). Before I knew it, I was staring at this message on the console: “Volume table destroyed, must reload.”

After that, I dropped SE support, figuring I was quite capable of destroying my system without high priced assistance. If you don’t feel confident about what you’ve been told, post to the 3000-L and see what your peers have to say.

6. The odd couple

For every system management Oscar Madison, leaving old files around to clog up and slow down his system or creating his own collection of foo, temp, K or Q files, there is a Felix Unger counterpart out there, obsessively tidying up. Both personality types have been known to shoot themselves in the foot.

The slobs make their lives miserable by never archiving files, which eventually bites them when they run out of space and the backup takes ever-longer. They also suffer from having multiple versions of all kinds of things on disk, running the risk of executing the wrong version or accessing the wrong file. And of course there are performance and security penalties for a messy system.

But the fastidious system manager also has issues. For one thing, being too diligent about cleaning up can result in missing files. Here is a case where automation can be a negative. Jobs that run every so often, archiving files that haven’t been accessed for a certain amount of time, can wind up archiving a file just before you need it.

Or, in my case, I once archived a file in the VESOFT account that hadn’t been accessed in years, only to discover it was some kind of special file that had to be there, even though it was never accessed (go figure).

Yes, it’s still good to be conscientious about keeping your system tidy. Just don’t overdo it.

You deserve a break today

If we can just step back and catch ourselves in dysfunctional behavior, we can start giving ourselves a break. We should not need to carry a pager, cell phone and laptop with us on vacation — for those brave enough to take a vacation, that is. We should not spend most of our time while out on the road on our phones, explaining how to recover our systems or where critical files are hidden. We should not expect to get raises when we spend so much of our professional time performing tasks that an entry-level employee can handle. By cleaning up our acts, we can stop reacting to self-inflicted busy work, which will free up time for more important tasks — like reading the NewsWire.

Posted by Ron Seybold at 07:58 PM in Hidden Value | Permalink | Comments (0)

July 13, 2018

Fine-Tune: Resetting your LDEV 21 Console

I have a 959 system at my site and there are times when I can't get the remote console port on LDEV 21 to work. How do I troubleshoot this problem and reset the console port? 

1. Is the port configured and available?

a) Check to be sure the system recognizes the port

:showdev 21

LDEV     AVAIL
     21     AVAIL

b) Is the SYSGEN configuration okay? 

:sysgen
sysgen> io
io> ld 21

LDEV:21  DEVNAME:  OUTDEV:21  MODE:  JAID
**ID: A1703-60003-CONSOLE-TERMINAL 
RSIZE:        40   DEVTYPE: TERM
**PATH: 56/56.1   MPETYPE: 16   MPESUBTYPE:  0
CLASS: TERM

c) Is the User Port configured in NMMGR?

:nmmgr
then ...

OPEN CONF, DTS, USER PORT

Logical Device [21  ]  (1 - 1800)
Line Speed [2400  ]  (300, 1200, 2400, 9600, or 19200 bps)
Modem Type [1] (0-NONE, 1-US, 2-European, 3-V22.bis)
Parity [NONE] (None, Even, Odd, 0's, or 1's)

2. Is the access port enabled, configured correctly and unlocked? On the local console type in CTRL-B to get the CM> prompt. The REMOTE settings are displayed at the bottom of the console screen.

a) Check/Change the configuration

cm> CA
Bit rate:               2400 bits/sec
Protocol:               Bell
System identification:

b) Enable Remote

cm> ER

c) Unlock Remote and raise the DTR signal on the modem

cm> U

d) Go back to command mode (:).

cm> CO

3. If you still cannot dial into the remote console, there are two utilities in sysdiag you can try.  Modmutil will do a self test on the modem, and consolan will reset the port.

a) To test the modem:

:sysdiag
dui> modmutil
mu> diag
diag> autotest
diag> exit
mu> exit

b) To reset the modem

:sysdiag
dui> CONSOLAN pdev=nn/nn section=2(23)

Continue? YES
Reset local/remote?  REMOTE

Posted by Ron Seybold at 05:08 PM in Hidden Value, Homesteading | Permalink | Comments (0)

July 06, 2018

Using MPE/iX to send SFTP files

I have a script that uses FTP to send files to a site which we open by IP address. We've been asked to change to SFTP (port 22) and use the DNS name instead of an IP address, and I don't believe the 3000 supports that. Does it? If so, how?

Allegro's Donna Hofmeister replies:

I'm not sure you want to do SFTP on port 22. That's the SSH port. SFTP is meant to use port 115. Have a look at one of our white papers on how to do SFTP on MPE. [Ed. note: the sftp service registered on port 115 is the older Simple File Transfer Protocol; the SSH-based SFTP that OpenSSH provides does ride the SSH port, 22.]

If you are going to use DNS, you must have your 3000 configured for that. It's easily done. 

However, if you've never done anything on your 3000 to make it act like a real computer (oh -- that's right, it is a real computer and fully capable of using DNS), this can turn into a can o'worms.

To configure for 'DNS lite' it's probably simplest to do the following

1. copy hostsamp.net to hosts.net

2. edit hosts.net to make sure it has

127.0.0.1 loopback
1.2.3.4   name    <--- where 1.2.3.4 and name are corrected to the system you want to connect to

3. copy the NSSWSAMP.net to nsswitch.net

4. edit nsswitch.net to have this line:

hosts : files[SUCCESS=return NOTFOUND=continue]

With this done, the 3000 sorta kinda acts like it's using DNS (because it's looking in the hosts file for how to translate 'name' into '1.2.3.4').
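
As CI commands, the copy steps above look roughly like this, assuming the sample files live in their usual spot, the NET group of SYS:

:COPY HOSTSAMP.NET.SYS, HOSTS.NET.SYS
:COPY NSSWSAMP.NET.SYS, NSSWITCH.NET.SYS

Then edit the two new files as described in steps 2 and 4.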

Tony Summers provides a caveat:

One warning: the upgrade from FTP to sFTP (or SSH FTP, etc.) can involve more change to your scripts than you expect. What we do for FTP (originally on the HP 3000, and now on our HP-UX server) is build a text file with the commands (the sample below, edited)

cat FTPT0070
open ftpserver.site.co.uk
user USERNAME PASSWORD
ascii
get /export/002_iccm_extract_1161.csv ICR21161

quit

The file is then presented to the FTP client. On the HP 3000 it was something like....

RUN FTP.ARPA.SYS < FTPT0070 > FTPS0070  

Then both the output file, FTPS0070, and any JCWs set by the FTP program were inspected to test the success of the FTP session.

cat FTPS0070

Connected to xxxxxx.co.uk

220 Welcome to FTP service - xxxx.
331 Please specify the password.
230 Login successful.
200 Switching to ASCII mode.
200 PORT command successful. Consider using PASV.
550 Failed to open file.
221 Goodbye.

In particular, the 3-digit status codes were analyzed, looking for error codes like "550." If you do something similar in your FTP scripts, then all I can say is welcome to a very different world.

Karsten Brøndum adds:

Here's a completely different approach. 

Depending on your skills in Java, there is a nice LGPL package called ftp4j that I have used a couple of times. (By the way, ftp4j will do both SFTP and FTP.) I've found it way easier than fiddling with text files containing commands, especially when it comes to error handling.

Posted by Ron Seybold at 04:15 PM in Hidden Value | Permalink | Comments (0)

June 15, 2018

Heartbeat at the center of CPU boost

Newswire Classic

By Gilles Schipper

The activity light on the 3000's LDEV 1 was abnormally high, and we noticed very sluggish response time, even though only the console was signed on and no batch jobs were executing. Having no idea what the problem was — and absent any tools such as Glance to shine a light on the situation — we began to revert to the previous configuration, software and hardware.

Only a week later, with some analysis of NM log files, were we able to establish what was going on. The performance problem was related to the 3000's transceivers. SQE heartbeat was disabled for all of them. The result was that the CPU was being inundated with an overwhelming number of IO requests in order to log the missing heartbeat events in the NM log file.

This unnecessary and voluminous IO was enough to bring the system to its knees — even absent any other activity. In today's HP 3000 environment, this serious CPU wastage problem can be overlooked, because faster CPUs could render the problem relatively less noticeable. But I would venture to guess that there is a lot of the "wasted IO" that is affecting a large number of HP 3000s out there.

Fortunately, there is a very simple way to recognize whether the problem exists, and also a simple cure. To determine if you have this problem, simply type the following command and look at the reply that follows:

:listf h@.pub.sys,2

ACCOUNT=  SYS         GROUP=  PUB

FILENAME  CODE  ------------LOGICAL RECORD-------  ----SPACE----
                  SIZE  TYP    EOF    LIMIT R/B  SECTORS #X MX

H000000A*           1W  FB     5      66010   1      256  1  *
H000000B*           1W  FB     0      66010   1        0  0  *
H0909A5A*           1W  FB     5      66010   1      256  1  *
H0909A5B*           1W  FB     0      66010   1        0  0  *
H13ECEEA*           1W  FB     5      66010   1      256  1  *
H13ECEEB*           1W  FB     0      66010   1        0  0  *
H15F669A            1W  FB     5      66010   1      256  1  *
H15F669B            1W  FB     0      66010   1        0  0  *
HASTAT    NMPRG   128W  FB     347      347   1      352  1  8
HAUTIL    NMPRG   128W  FB     424      424   1      432  1  8
HP32209B  PROG    128W  FB     15        15   1       16  1  1

Notice the OPEN files (the ones with the asterisk suffixing the file name) that are 1W in size. There are two such files associated with each configured DTC, the file name starting with the letter H, followed by six characters that represent the last six characters of the DTC MAC address, followed by the letter A or B. The EOF for these files should be 5 and 0 for the respective "A" and "B" files.

Otherwise your CPU is being subjected to high-volume, unnecessary IO, requiring CPU attention. The solution is simply to enable SQE heartbeat for each transceiver attached to each DTC. This is done via a small white jumper switch that you should see at the side of each transceiver.

Voila, you've just achieved a significant no-cost CPU upgrade.

There is also another method of eliminating this excessive CPU overhead that involves using NMMGR to uncheck as many logging events as you can for each DTC, revalidating and rebooting.

But the SQE-heartbeat enable method is a surer bet.

Posted by Ron Seybold at 08:40 PM in Hidden Value | Permalink | Comments (0)

June 08, 2018

Fine-Tune: Making the 3000's ports report

I have a port in an HP 3000 and I want to know the application that is currently using that port. Is there any command that can show me the applications accessing a particular port?

Kevin Miller replied:

:sockinfo.net.sys

Enter ‘c’ for ‘call sockets.’ Listeners are shown in port order.

The port for telnet on our 3000 is set to a different value than 23, but it is set to 23 on our HP-UX server. When I try to telnet from the 3000 to HP-UX I get the following message: "Trying... telnet: Unable to connect to remote host." If I switch the port for telnet to 23 on the 3000, it works great.

My question is: Can I run telnet on two different ports on either box so that I can maintain my non-standard port on the 3000, but still allow telnet to run between the two boxes? If not, is there another way to make this work?

Jeff Kell replied:

Just ‘telnet your.3000.name nnn’ where ‘nnn’ is your ‘nonstandard’ port.

How do I point network printer configurations to specific ports on (external) multi-port JetDirect (or equivalent) boxes?

Gilles Schipper replied:

You need to add the tcp_port_number option, in NPCONFIG, as follows:

(network_address = 128.250.232.40 tcp_port_number = 9100) # for port 1
(network_address = 128.250.232.40 tcp_port_number = 9101) # for port 2
(network_address = 128.250.232.40 tcp_port_number = 9102) # for port 3

(Please note that everything on each line after and including the “#” represents a comment.)

My HP 3000 is set up for full access to the Internet. The telnet connection works fine, but I also see that VT-MGR also works. I know that inetdsec is used for restricting access for ip, http, ftp and so on. Is there something in NMMGR to restrict VT-MGR access, or do you use inetdsec for that also?

Chris Bartram replied:

Just an OPTION LOGON UDC that checks the CI variables set for the IP address and hostname of the originator.
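
A minimal sketch of such a UDC, assuming the HPREMIPADDR variable is available on your MPE/iX release (the permitted address here is hypothetical):

VTCHECK
OPTION LOGON
# refuse network logons from anywhere but the approved host
IF BOUND(HPREMIPADDR) THEN
   IF "!HPREMIPADDR" <> "192.168.10.5" THEN
      ECHO VT access not permitted from !HPREMIPADDR
      BYE
   ENDIF
ENDIF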

We’ve got a DLT4000 tape drive I’d like to connect to a Series 957 and use them for database, incremental, and full backups. Can I simply hook a DLT4000 drive to the SE-SCSI port on the MFIO card, set its SCSI address, and add the device as an HPC1521B?

Gilles Schipper replied:

It should be no problem at all. The DLT4000 SE SCSI device can also be utilized as a boot device on the 957. Use the device ID of DLT4000 rather than HPC1521B: configure DLT4000, test to ensure good restore performance, and resort to the device ID of HPC1521B only as a workaround if the restore speed is not satisfactory.

Posted by Ron Seybold at 08:52 PM in Hidden Value, Homesteading | Permalink | Comments (0)

June 01, 2018

Recovering a 3000 password: some ideas

I have an administrator who decided to change passwords on MANAGER.SYS. Now what's supposed to be the new password isn't working. Maybe he mis-keyed it, or just mis-remembered it. Any suggestions, other than a blindfold and cigarette, or starting down the migration path?

The GOD program, a part of MPEX, has SM capability — so it will allow you to do a LISTUSER MANAGER.SYS;PASS=

If your operator can log onto operator.sys:

file xt=mytape;dev=disc
file syslist=$stdlist
store command.pub;*xt;directory;show

Then, using your favorite editor or other utility, search the disc file MYTAPE for the string: "ALTUSER MANAGER  SYS"

You will notice PAS=<the pwd>, which is your clue.

It's said that a logon to the MGR.TELSUP account can unlock the passwords. The Telsup account usually has SM capability, if it wasn't changed.

Posted by Ron Seybold at 09:57 PM in Hidden Value, Migration | Permalink | Comments (0)

May 25, 2018

Fine-Tune: Locking databases into lookups

Editor's Note: Monday is a holiday to commemorate Memorial Day, so we're celebrating here with time away from the keyboard. We'll be back with a new report May 30.

We’re migrating from our 3000 legacy applications to an ERP system hosted on another environment. Management has decreed the HP 3000 apps must still be available for lookups, but nobody should be able to enter new data or modify existing data. Should I do the simplest thing and change all of the databases so that the write class list is empty?

Doug Werth replies:

One way to do this is to write a program in the language of your choice that does a DBOPEN followed by a DBLOCK of each database (this will require MR capability). Then the program goes into an infinite loop calling the PAUSE intrinsic. Any program that tries to update the database will fail to achieve a lock, rendering the databases read-only. Programs that call conditional locks will come back immediately with a failed lock. Unconditional locks will hang.

This has been a very successful solution I have used on systems where a duplicate copy of the databases is kept for reporting and/or shadowing using IMAGE log files.

Steve Dirickson agrees with the poster of the question:

Since very few developers write their apps to check the subsystem write flag that you can set with DBUTIL, changing the classes is your best bet. Make sure you do so by changing the current M/W classes to R/R so the existing passwords will still work for DBOPEN, and only actual put/update/delete operations will fail.
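
For the record, the subsystem flag Dirickson mentions is a one-line change in DBUTIL. A sketch with a hypothetical base name, his caveat standing (few applications honor the flag):

:RUN DBUTIL.PUB.SYS
>>SET MYDB SUBSYSTEMS=READ
>>EXIT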

The Big Picture: If protection is required for the database, that protection should reside in the database if at all possible. As mentioned, this is easy with IMAGE.

I am putting a new disk drive into my 3000 configuration, one that doesn't have an HP label on it. It's 500GB and a great value. What patches should I be sure to have installed on MPE/iX 7.5?

MPEMXT1
Large Disk: FSCHECK SYNCACCOUNTING fix for HFS files

MPEMXT3
Large Disk: limit maximum SCSI disk size to a half-terabyte

MPEMXT4
Large Disk: SSM changes for disk space allocation and accounting

MPEMXT7
Large Disk: Discfree changes to correct sector counts

MPEMXU3
Large Disk: REPORT "FORMAT=LONG" enhancement. MPEMXU3 includes patch MPEMXT2 which is another Large Disk/Files patch. MPEMXT2 provides changes to the ALTACCT, NEWACCT, ALTGROUP, NEWGROUP commands.

MPEMXU6
Large Disk: CATALOG.PUB.SYS changes for CIERR messages

MPEMXU7
Large Disk: CICAT and CICATERR.PUB.SYS changes

The critical one is MPEMXT3: by capping SCSI disk size at a half-terabyte, it protects you from the problems that larger volumes can trigger.

Posted by Ron Seybold at 06:09 PM in Hidden Value, Homesteading, Migration | Permalink | Comments (0)

May 18, 2018

Fine-Tune: Setting up a 3000 as file server

I would like to set up an HP 3000 as a file server. In one of my accounts I want to have a share for my 100 users pointing to a separate directory in this account. The homes section in smb.conf normally points to the home group of the user, which is the same for all of them and is not helpful. Is there another way of solving the problem, or must I configure more than the 100 shares?

Mark Wonsil replies:

I saw a clever little trick in Unix that should work on MPE:

[%U]
path = /ACCT/SHARES/%U

This creates a share name that is the same as the username and then it points the files to a directory under the SHARES group.

How do I set my prompt in the startup script?

John Burke replies:

Here’s what I do for my prompt:
SETVAR HPPROMPT,"<SASHA: "+&
"!!HPJOBNAME,!!HPUSER.!!HPACCOUNT,!!HPGROUP> "+&
"!!HPDATEF !!HPTIMEF <!!HPCWD>"+CHR(13)+CHR(10)+"[!!HPCMDNUM]:"

This yields, for example,
<SASHA: JPB,MGR.SYSADMIN,PUB> THU, FEB 20, 2003 11:15 PM </SYSADMIN/PUB>
[7]:

A disk drive has failed on a user volume. How can I determine the accounts and groups on that user volume?

John Clogg replies:

Try REPORT @.@;ONVS=<volset>

Jeff Woods adds:

In addition to the suggestion to use “:REPORT @.@;ONVS=volset” (which may fail because it’s actually trying to look at the group entries on the volume set) you can do a “:LISTGROUP @.@” and scan the listing for groups where HOMEVS is your user volume set name. The advantage of LISTGROUP is that it uses only the directory entries on the system volume set. You may want to redirect the output of LISTGROUP to a file and then search that rather than trying to scan the listing directly.
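
A sketch of that redirect-and-search approach; the volume set name is hypothetical, and the POSIX path assumes the listing lands in PUB.SYS:

:LISTGROUP @.@ > GRPLIST
:SH grep 'MYVOLSET' /SYS/PUB/GRPLIST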

Posted by Ron Seybold at 04:43 PM in Hidden Value | Permalink | Comments (0)

May 11, 2018

How to Create Cause and Effect on MPE

HP 3000s took a big step forward with the introduction of a fresh intrinsic in 1995. Intrinsics are a wonderful thing to power HP 3000 development and enhancement. There was a time when file information was hard to procure on a 3000, and JOBINFO came into full flower with MPE/iX 5.0, back in 1994. "The high point in MPE software was the JOBINFO intrinsic," said Olav Kappert, an MPE pro who could measure well: his 3000 experience began in 1979. JOBINFO sits just about at the end of the 456-page MPE/iX Intrinsics Manual published in '94.

Fast-forward 24 years and people still ask how they can add features to an application. The Obtaining File Information section of an MPE KSAM manual holds an answer to what seems like an advanced problem. That KSAM manual sits in one of several Web corners for MPE manuals, a link on Team NA Consulting's page. Here's an example of a question where the INFO intrinsics can play cause and effect.

I'm still using our HP 3000, and I have access to the HP COBOL compiler. We haven't migrated and aren't intending to. How can I use the characteristics of an input file as HPFOPEN parameters to create an output file? I want that output file to be an exact replica of the input file. I want to do this without knowing anything about the input file until it is opened by the COBOL program. 

I've tried using FFILEINFO and FLABELINFO to capture the characteristics of the input file, once I've opened it. After I get the opens/reads/writes working, I want to be able to alter the capacity of the output file.

Francois Desrochers said, "How about calling FFILEINFO on the input file to retrieve all the attributes you may need? Then apply them to the output file HPFOPEN call."

Donna Hofmeister added 

Have a look at the Using KSAM XL and KSAM 64 manual (Ed. note: link courtesy of Team NA Consulting). Chapters 3 and 4 seem to cover the areas you have questions about. Listfile,5 seems to be a rightly nifty thing.

But rather than beat yourself silly trying to devise a pure COBOL solution, you might be well advised to augment what you're doing with some CI scripts that you call from your program.

In a lively tech discussion on the 3000-L list, Olav Kappert added, 

Since you want to do this without knowing anything about the input file until it is opened by the COBOL program, the only way is to use one of the MPE intrinsics to determine all the characteristics of the file in question. Then construct a BUILD command after parsing that information.

Michael Anderson added details on how the 3000's CI scripting can build upon the fundamentals of file information and COBOL.

I like Donna's plan. This is a strategy that will also help whenever you want similar functionality on a non-MPE platform. Also, although COBOL is very capable, an external script might be a better tool. You don't always need a hammer.

This is hypothetical, to try to make a point. From your MPE CI prompt, type HELP FINFO. You should be able to set some variables (SETVAR FILEA "XXX"), and using FINFO add some more variables. Then from COBOL, using HPCIGETVAR, string together a BUILD command (with a bigger LIMIT, maybe) and call "HPCICOMMAND". You could also assemble the whole BUILD command into a single variable in the script, so COBOL only needs to call HPCIGETVAR once.
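
A sketch of the CI half of that idea, assuming FINFO accepts the "RECSIZE" and "LIMIT" item names (the file names are placeholders):

# capture the source file's shape in CI variables
setvar src_rec finfo("OLDDATA","RECSIZE")
setvar new_lim finfo("OLDDATA","LIMIT") * 2
# assemble one BUILD command that a program can fetch with HPCIGETVAR
setvar bldcmd "BUILD NEWDATA;REC=!src_rec;DISC=!new_lim"
!bldcmd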

You can also write a script to do everything you want, and call HPCICOMMAND to run the script, pass it parms. It's pretty cool, and it makes your COBOL application more portable. (Same program, different script).

For example: On MPE I once wrote (using COBOL) a small utility to CALL DBINFO, extract all the meta-data from any IMAGE database, and then create, and write to the NEW KSAM COPYLIB, ending up with all the COBOL copylib modules needed for all datasets for any database, including call statements and working storage. My point to all this: I used CI scripting to create and write to the copylib.

I actually used ECHO to write the copylib ksam file from a CI script. Now, seeing how I work more on HP-UX and Linux, plus OpenCOBOL and Eloquence, I should be able to compile this same program on Linux with minimal modifications, only changing the external script.

I use this method to access SQL databases, and much more, using OpenCOBOL and the Tcl/Tk developer exchange (a repository for Tcl, a flexible language with a small core that can be adapted in many ways). This way I can run the same program, same script, almost anywhere: Windows, Mac, or Unix.

Eric Sand, another veteran of the 3000, commented that this kind of challenge really shows off the range of possibility for solving development problems. "You can create almost any cause and effect in MPE that you can imagine," he said. "Reading about your concern gave me a little rush, as I mentally organized what I wanted to do to address your issue."

Posted by Ron Seybold at 08:24 PM in Hidden Value | Permalink | Comments (0)

May 04, 2018

How Details and Masters Get the Job Done

A Hidden Value question was posed about how manual and automatic masters work in TurboIMAGE. Roy Brown gave a fine tutorial on how these features do their jobs for MPE and the 3000 -- as well as how a detail dataset might have zero key fields.

Manual masters can contain data which you define, like Detail sets can, along with a single Key field. Automatic masters contain only the Key field. In both cases, there can be only one record for a given key value in a Master dataset.

A Detail dataset contains data fields plus zero, one, or many key fields. There can be as many records as you like for a given key value, and these form a chain accessible from the Master record key value. This chain may be sorted, or it may just be in chronological order of adding the records.

Brown explained that "where there are keys, referential integrity demands that there are no Detail record entries with a key field that is not found in either a Manual or Automatic master, both Key name and Key value. So a Detail data set with Key fields that are not present in a Master record would be a sign of a seriously corrupted database."

However, I doubt this is the case, and when you do a QUERY FORM command, you will see which fields in Detail datasets are Keys, which fields are used to establish Sort orders, and which fields are data pure and simple.

From the Key name, you can determine which Master set links the keys.

As I said above, it is possible to have a Detail dataset with no keys, but these usually contain only a very few records, since direct access to them without keys is cumbersome, and you have to trawl right through one to find any given entry.

So a Detail dataset with thousands of unconnected entries would be very unlikely.

The FORM output will allow you to check how the Detail dataset that you think might have unconnected entries is actually linked in.

Brown's explanation flowed from the following question and answer in that Hidden Value article.

I want to generate a listing of data sets, data item names, and their relationships from my TurboIMAGE database (master and detail). One detail data set has thousands of entries which do not appear to be connected to any master. I cannot remember the difference between manual and automatic masters.

Francois Desrochers replied to use Query's FORM command.

RUN QUERY.PUB.SYS
B=dbname
PASSWORD = >> password
MODE = >> 5
FORM

Manual masters: programs have to explicitly add entries before you can add related entries in detail sets. Programs have to explicitly delete entries when there are no related detail entries left. In other words, you have to do master dataset maintenance.

Automatic masters: entries are automatically created when a related detail set entry is created. Entries in the master are automatically removed when the last related detail entry is deleted. IMAGE takes care of the maintenance.

Consultant Ron Horner added, "The difference between a manual master and auto master is the following:

1. You have to add records to the manual master that contain the key data for any detail datasets that are linked to the master.

2. When working with automatic masters, you don't have to write data to them at all. IMAGE takes care of populating the master."

Krikor Gullekian also noted that "With QUERY you can check the databases as long as you know the password." [Ed. note: This password advice is true, except when you're the database owner. No password is required then, only a semicolon.]

Posted by Ron Seybold at 08:40 PM in Hidden Value, Homesteading | Permalink | Comments (0)

April 27, 2018

Fine-Tune Friday: DDS diagnosis and tips

We have a tape device that is not responding; that is, we put the tape in, but it is not coming online. I also see that a user is logged into the system using the LDEV assigned to the tape drive. SHOWDEV TAPE also does not list the device.

Gilles Schipper replies:

I've seen this before with DDS drives. Probably during your most recent reboot, there was a (possibly temporary) malfunction with your tape drive's power supply such that its existence was not recognized during the boot-up process. That would normally result in a "device unavailable" condition and the subsequent disabling of that logical device number.

I have noticed instances where that LDEV number is actually made available to the logon device number pool (for subsequent assignment to logon session device numbers). Long story short, the solution appears to be a power cycle followed by a START NORECOVERY reboot.

After shutting down and powering off the CPU and all devices, run ODE to ensure all devices are recognized before START NORECOVERY. Failure to recognize the device at that point should lead to further investigation of the power supply, SCSI device number setting, or other hardware malfunction. If this situation happens frequently, I would first suspect a problem with the power supply of that device.

Get rid of that internal DDS tape drive

By John Burke

People complain of problems with internal DDS tape drives in systems located in remote areas with little onsite expertise, problems that lead to frequent drive replacements and downtime. It reminds me of the old vaudeville joke where the patient comes to the doctor with a complaint, “Doc, it hurts when I do this.” The doctor replies, “Then don’t do that.”

HP 3000 gurus have cautioned for years that people should not use internal tape or disk drives in 9x7, 9x8 or 9x9 production systems. The most likely failure is a tape drive and the next most likely failure is a disk drive. Everything else in the system cabinet could easily run for a decade without needing service or replacing. [Editor's note: John's advice came in 2004, so a decade-plus is definitely bonus time.] When an internal tape or disk drive fails you are looking at serious downtime while the case is opened and the drive is replaced. A common urban legend says that the primary boot device (LDEV 1) and the secondary boot device (usually LDEV 7) must be internal. Not true.

Bite the bullet now. Remove, or at least disconnect (both power and data cables) all internal drives. At the least, replace the internal DDS drive with an external DDS3 or DDS4 drive. In the case of the DDS drive, you will not even need to make any configuration changes if you set the SCSI ID to 0 on the external drive.

Usually, the internal DDS drive is at SCSI ID 0 (for a 9x7, this is 52.0.0; for a 9x8, this is 56/52.0.0; and, for a 9x9, it is something like 10/4/20.0.0). If you do not want to open the case even to disconnect the drives, you can probably set the SCSI ID on the external DDS drive to 1 since this is usually not used. On 9x9s, SCSI ID 2 is used for the CD-ROM. Disk drive addresses will vary with the system, but even if you replace the internal disk drives with external JBOD, you are still ahead of the game. Remember, if you have to change SCSI IDs, you will have to change your SYSGEN configuration and your boot device paths.

Someone also asked whether, if you changed the boot path, you should immediately create a new SLT. Technically, the answer is "No," since the SLT contains no information about boot paths. However, if you have not created an SLT since the device was added (and why not?), then by all means create a new SLT. It should also be noted that DDS drives are notorious for not being able to read tapes created on other DDS drives.

So, if you do not think you have time to create a new SLT, at least use CHECKSLT to verify you can read your existing SLT on your new drive. If you cannot read your existing SLT, then make time to create a new SLT. Your standard procedures should include regularly creating an SLT and checking it, as sketched below.
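
A minimal sequence for both halves of that routine; CHECKSLT's home in the TELESUP account is our assumption about a typical setup:

# cut a fresh SLT from within SYSGEN
:sysgen
sysgen> tape
sysgen> exit
# then prove the new drive can read the tape
:run checkslt.mpexl.telesup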

Posted by Ron Seybold at 04:26 PM in Hidden Value, Homesteading | Permalink | Comments (0)

April 20, 2018

What Does HP's Disc Brand Mean?

By John Burke

After reading Jim Hawkins’ reply to my SCSI is SCSI article, I was reminded about HP’s 4Gb disk drive fiasco. These branded drives had a nasty habit of failing after being powered off after they’d been running for a while. The problems were not limited to the HP 3000 versions, either.

At one point we got so frustrated we just replaced all 4Gb drives with the much more reliable 9Gb drives. I never blamed HP for these failures, or the failures of the 4Gb drives on my HP 3000 — even though all were purchased from HP, and had HP stamped all over them. The failures were the fault of the manufacturer, and no amount of certification testing would likely have shown the problem. But the failures made me wonder: What does HP certification and HP branding mean?

In Hawkins’ reply, he puts great emphasis on the statement that “In the SCSI peripheral market, Industry Standard is really defined as ‘works on a PC.’ Unfortunately, the requirements for single-user PCs are not always in alignment with those of multi-user servers.” Maybe inside HP the desktops look different, but I have never seen a company use SCSI peripherals as a standard for desktop Wintel systems.

At my last employer, we had approximately 1,200 desktops, and not a single one had a SCSI disk drive. SCSI disks are used primarily in the multi-user server market, not the desktop market. While Hawkins says some interesting things in the rest of his article, these two sentences tend to prejudice the reader against everything else he says.

Unfortunately, Hawkins’ best argument came out in private correspondence: “Putting newer disks inside a 9x7, 9x8 or 9x9 may overtax the power supply and/or ‘cook’ your CPU or memory.” However, most of us outside HP have been advising against using internal drives in production machines for many years because of the obvious maintenance headaches. It still amazes me how many people believe you have to have at least one internal drive in an HP 3000.

The debate highlights at least four questions.

1. Does HP certification of a disk drive have value? And, if so, how much? The work HP does to certify disk drives for the HP 3000 clearly has value. It is up to the customer to decide its worth, and he will decide with his checkbook. Certification is an area where HP has historically done a poor job of communicating value to its customers. Hawkins’ information should have been made more public years ago.

2. Does the listing of a drive in IODFAULT.PUB.SYS imply its certification by HP? I think it is reasonable for a customer to look at IODFAULT, pick out a “supported” drive, and think he can buy it from whoever will give him the combination of price and service that meets his needs. For anything but the newer systems, you only need two generic drives listed in IODFAULT (one SE and one FWD). So what is a customer to make of the drives listed in IODFAULT.PUB.SYS?

3. Does HP branding imply HP-specific firmware? If so, why do the HP drives almost always report the original manufacturer’s model number instead of HP’s? Again, this leads to the customer assuming he can buy an STxxxxxx and that it is the same as the drive he buys from HP.

4. Are all HP-branded drives equal? At one time I had an HP-branded drive, pulled from another server, spinning happily in my 9x7. And, yes, of course the duty cycle in this 3000 was extremely light. But I am also relatively sure the HP part number (as opposed to the drive’s reported model number, which is in IODFAULT) was never sold as compatible with the HP 3000. That HP-branded drive was just as risky as any non-HP-branded drive.

It was never my intention in the original article to bash HP, or the fine people who continue to be associated with the 3000 division. Perhaps I should have reworded the title to read, “If you feel abandoned by your vendor, then take comfort in the fact that in most cases, SCSI is SCSI.” But that doesn’t exactly roll off the old tongue. Yet, in reality, this is what most of HP’s argument has been about: those few cases when “SCSI is SCSI” is not true. It should also be noted that my original article was aimed at those HP 3000 sites planning to homestead for some period of time.

After considering Hawkins’ response to my original article and numerous private messages, my position can now be stated like this: with the exception of the “hot” drive issue, any name-brand manufacturer’s SCSI drive you can electrically connect to your HP 3000 will likely work. If it survives your own testing (mount it as a separate user volume, and bang on it for a while before moving it into production), then you should have little to worry about.
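
For the separate-user-volume shakedown described above, a VOLUTIL session along these lines would do. The set name, member name, LDEV, and group are all invented for this sketch, and the syntax is from memory, so confirm it with HELP inside VOLUTIL:

:VOLUTIL
volutil: NEWSET TESTSET MEMBER1 3
volutil: EXIT
:NEWGROUP SCRATCH.MYACCT;HOMEVS=TESTSET;ONVS=TESTSET

Copy some large filesets into SCRATCH.MYACCT and beat on the drive for a few days before trusting it with production data.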

Posted by Ron Seybold at 03:18 PM in Hidden Value, Homesteading | Permalink | Comments (0)

April 13, 2018

Fine-Tune: Net config file care and feeding

I’m replacing my Model 10 array with a Model 20 on MPEXL_SYSTEM_VOLUME_SET, so it'll require a reinstall. What’s the best way to reinstate my network config files? Just restore NMCONFIG and NPCONFIG? I'm hoping I can use my old CSLT to re-add all my old non-Nike drives and mod the product IDs in Sysgen—or do I have to add them manually after using the factory SLT?

Gilles Schipper replies:

Do the following steps:
- Use your CSLT to install onto LDEV 1
- Modify your I/O to reflect the new and changed configuration
- Reboot
- Use VOLUTIL to add the non-LDEV1 volumes appropriately
- Restore the directory or directories from backup
- Perform a system reload from your full backup, using the KEEP, CREATE, OLDDATE, PARTDB, and SHOW=OFFLINE options in the RESTORE command
- Reboot again

No need for separate restores of specific files.
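
For reference, the reload step in that list might look like this. A sketch only, assuming a full backup on a drive in device class TAPE:

:FILE T;DEV=TAPE
:RESTORE *T;/;KEEP;CREATE;OLDDATE;PARTDB;SHOW=OFFLINE

The / fileset pulls in everything, HFS files included; CREATE rebuilds any missing accounting structure, OLDDATE preserves the original file timestamps, and PARTDB picks up partial databases.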

Making backups while network services are running

Advice from James Hofmeister

The most common problem with performing backups in the past was that network configuration files were held open for READ/WRITE while the network was up. 3000 sites found they had no backup copy of the network configuration file NMCONFIG.PUB.SYS when it was time to install (reload) from backup tapes. I tested this on 7.0, building a CSLT and storing @.pub.sys, @.mpexl.sys, @.net.sys and @.arpa.sys on the same tape, and verified that all of the network files, including the configuration files, were backed up.
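
That test amounts to something like this, with the tape device an assumption of the sketch:

:FILE T;DEV=TAPE
:STORE @[email protected]@[email protected];*T;SHOW

If the STORE listing shows NMCONFIG.PUB.SYS going to tape without complaint, your backup job is capturing the network configuration even while the network is up.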

Another problem on older systems: NETCP.NET.SYS was found missing in action following an install (reload), and after it was recovered and restored from another source, another system reboot was required to initiate NETCP. NETCP is now included on SLTs.

Will the network function normally while backups are in progress? The answer to this is Your Mileage Will Vary. The building of a CSLT and the STORE process consume significant CPU, memory and IO resources.

From a networking perspective, TCP/IP networks are not guaranteed to maintain network connections in the event of severe system performance degradation. An acceptable level of CPU and IO performance is required to support TCP’s ability to acknowledge the packets it has received (if a packet is not acknowledged, it will be retransmitted, per the remote host’s configuration).

Also, an acceptable level of system bus performance is required to support the network hardware’s DMA into system memory; if the bus is busy during a DMA attempt, the frame is dropped. (A STORE from disk to tape or from disk to disk consumes significant system bus bandwidth.)

Posted by Ron Seybold at 07:38 PM in Hidden Value | Permalink | Comments (0)

March 26, 2018

Upgrade your hardware to homestead longer

Keeping storage devices fresh is a key step in maintaining a datacenter that uses HP's 3000 hardware. Newer 3000s give you more options. Our net.digest columnist John Burke shared advice that's still good today for planning the future of 3000s that will remain online for some time to come. Maybe not until 2028, but for a while.

If you can, replace your older machines with the A-Class or N-Class models. Yes, the A-Class and some N-Class systems suffer from CPU throttling. (That's HP’s term. Some outside HP prefer CPU crippling.) However, even with the CPU throttling, most users will see significant improvement simply by moving to the A-Class or N-Class.

Both the A-Class and N-Class systems use the PCI bus. PCI cards are available for the A- and N-Class for SE-SCSI, FW-SCSI and Ultra-3 SCSI (LVD). You can slap in many a drive manufactured today, made by any vendor. SCSI is SCSI. Furthermore, with MPE/iX 7.5, PCI fiber channel adaptors are also supported, further expanding your choices.

If you are going to homestead on the older systems, or expect to use the older systems for a number of years to come, you have several options for storage solutions. For your SE-SCSI adaptors, you can use the new-technology, old-interface 18Gb and 36Gb Seagate drives. For your FW-SCSI (HVD) adaptors, since no one makes HVD drives anymore, you have to use a conversion solution. [You could of course replace your FW-SCSI adaptors with SE-SCSI adaptors, but this would reduce capacity and throughput.]

One possibility is to use an LVD-HVD converter and hang a string of new LVD drives off each of your FW-SCSI adaptors. HP and other vendors have sold routers that allow you to connect from FW-SCSI adaptors to Fibre Channel resources such as SANs. It's one way to accomplish something essential: get rid of those dusty old HP 6000 enclosures, disasters just waiting to happen.

As for tape drives, move away from DDS and use DLT (4000/7000/8000) with DLT IV tapes. Whatever connectivity problems there are can be dealt with just as for the disk drives. If you have an A-Class or N-Class machine, LTO and SuperDLT both use LVD connections. If you have a non-PCI machine, anything faster than a DLT 8000 is wasted anyway, because the older 3000 I/O architecture cannot feed it data any faster.

Posted by Ron Seybold at 08:08 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 23, 2018

Moving, Yes — Volumes to Another 3000

Newswire Classic

By John Burke

Here is a shortened version of the revised checklist for moving user volumes physically from one system to another without a RESTORE:

  • Get the new system up and running, even if it only has one disk drive

(if you've purchased additional new drives, do not configure them with VOLUTIL yet)

  • Analyze and document the configuration on the old system, making any necessary configuration changes on the new system and creating an SLT for the new system;
  • Backup and verify the system volume set and the user volumes separately (be sure to use the DIRECTORY option on all your STOREs);
  • VSCLOSE all the user volume sets on the old machine;
  • Move all the peripherals over to the new machine. On a START NORECOVERY, the user volumes should mount. The drives that were on the system volume set on the old system and any new drives added should now be configured in using VOLUTIL;
  • RESTORE the system volume set:

RESTORE *T;@[email protected]@;KEEP;OLDDATE; &
SHOW=OFFLINE;FILES=n;DIRECTORY

  • RENAME the following three files, if they exist, to something else (see the sketch after this checklist):

SYSSTART.PUB.SYS
NMCONFIG.PUB.SYS
COMMAND.PUB.SYS (UDC configuration)

  • RESTORE the above three files from your tape with the DEV=1 option.

The OS requires that some files, for example SYSSTART, be on LDEV 1;

  • RUN NMMGR against NMCONFIG.PUB.SYS; change the path for the LANIC, if necessary, and make any other needed NMMGR configuration changes.
  • Validate NETXPORT and DTS/LINK, which should automatically cross validate with SYSGEN.
  • START NORECOVERY
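
Here is a sketch of the rename-and-restore step from the checklist. The X-prefixed names are hypothetical placeholders; any unused names of eight or fewer characters will serve:

:RENAME SYSSTART.PUB.SYS, XSYSSTRT.PUB.SYS
:RENAME NMCONFIG.PUB.SYS, XNMCONFG.PUB.SYS
:RENAME COMMAND.PUB.SYS, XCOMMAND.PUB.SYS
:FILE T;DEV=TAPE
:RESTORE *T;SYSSTART.PUB.SYS,NMCONFIG.PUB.SYS,COMMAND.PUB.SYS;DEV=1;KEEP;OLDDATE;SHOW

The DEV=1 option is the point of the exercise: it forces the restored copies onto LDEV 1, where the OS expects them.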

Posted by Ron Seybold at 03:10 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 16, 2018

Fine-tune Friday: SCSI Unleashed

Although disk technology has made sweeping improvements since HP's 3000 hardware was last built, SCSI devices are still being sold. The disk drives on the 15-year-old servers are the most likely point of hardware failure. Putting in new components such as the Seagate 73-GB U320 SCSI 10K hard drive starts with understanding the nature of the 3000's SCSI.

As our technical editor John Burke wrote, using a standard tech protocol means third parties like Seagate have products ready for use in HP's 3000 iron.

SCSI is SCSI

Extend the life of your HP 3000 with non-HP peripherals

By John Burke

This article will address two issues and examine some options that should help you run your HP 3000 for years to come. The first: that you must use only HP-branded storage peripherals. The second: that because you have an old (say 9x7, 9x8 or even 9x9) system, you are stuck using both old technology and just plain old peripherals. Both are urban legends, and both are demonstrably false.

There is nothing magical about HP-branded peripherals

Back in the dark ages when many of us got our first exposure to MPE and the HP 3000, when HP actually made disk drives, there was a reason for purchasing an HP disk drive: “sector atomicity.” 9x7s and earlier HP 3000s had a battery that maintained the state of memory for a limited time after loss of power. In my experience, this was usually between 30 minutes and an hour.

These systems, however, also depended on special firmware in HP-made HP-IB and SCSI drives (sector atomicity) to ensure data integrity during a power loss. If power was restored within the life of the internal battery, the system started right back up where it left off, issuing a “Recover from Powerfail” message with no loss of data. It made for a great demo.

Ah, but you say all your disk drives have an HP label on them? Don’t be fooled by labels. Someone else, usually Seagate, made them. HP may in some cases add firmware to the drives so they work with certain HP diagnostics, but other than that, they are plain old industry standard drives. Which means that if you are willing to forego HP diagnostics, you can purchase and use plain old industry standard disk drives and other peripherals with your HP 3000 system.

Connect just about anything to anything

SCSI stands for Small Computer System Interface. It comes in a variety of flavors with a bewildering set of names attached such as SCSI-2, SCSI-3, SE-SCSI, FW-SCSI, HVD, LVD SCSI, Ultra SCSI, Ultra2 SCSI, Ultra3 SCSI, Ultra4 SCSI, Ultra-160, Ultra-320, etc. Pretty intimidating stuff.

Don’t despair though. Pretty much any kind of SCSI device can be connected to any other with the appropriate intermediary hardware. Various high quality adaptors and cables can be obtained from Paralan (www.paralan.com) or Granite Digital (www.granitedigital.com).

So, SCSI really is SCSI. It is a well-known, well-understood, evolving standard that makes it very easy to integrate and use all sorts of similar devices. MPE and the HP 3000 are rather behind the times, however, in supporting specific SCSI standards. Support for LVD SCSI was added with the A- and N-Class systems—and with MPE/iX 7.5, these same systems would support Fibre Channel (FC). 

Let’s concentrate on the SE-SCSI and FW-SCSI interfaces, both seemingly older than dirt, and on disk and tape storage devices. But first, suppose you replace an old drive in your system: where should you put it? The 9x7s, 9x8s and 9x9s all have internal drive cages of varying sizes. It is tempting to fill up these bays with newer drives and, if space is at a critical premium, go ahead.

However, if you can, heed the words of Gavin Scott.

I’d recommend putting the new drives in an external case rather than inside the system, since that gives you much more flexibility and eliminates any hassles associated with installing the drive inside the cabinet. It’s the same SCSI interface that you’d be plugging into, so apart from saving the money for the case and cable, there’s no functional difference. With the external case you can control the power of the drive separately, watch the blinking lights, move the drive from system to system (especially useful if you set it up as its own volume set), etc.

At sites such as Granite Digital you can buy any number of rack-mount, desktop and tower enclosures for disk systems. Here is another urban legend: LDEV 1 must be an internal drive. False. Or: the boot tape device has to be internal. False. You cannot tell by the path whether a drive is internal or external, and the path is the only thing MPE knows (or cares) about the physical location of the drive.

Okay, there are some limits

Once you come to terms with the fact that you can use almost any SCSI disk drive in your HP 3000, dealing with SE SCSI is a piece of cake and a whole world of possibilities opens up. With the right cable or adapter (see Paralan or Granite Digital) you are in business.

But just because you can connect the latest LVD drive to your SE-SCSI adaptor, should you? Probably not, because you are still limited by the speed of the SE adaptor and so are just wasting your money. Now that you know you do not need the specific HP drives you once bought, you can pick up used or surplus drives ridiculously cheap. [Ed. note: the 73 GB drive at the top of the article is $129.]

Seagate created new-technology drives with the old-technology 50-pin SE-SCSI interface: the 18Gb model ST318418N and the 36Gb model ST336918N.

FW-SCSI is more problematic than SE-SCSI because no one even makes FW-SCSI (HVD) disk drives any more and you need more than just a simple cable or adapter to connect newer drives to an HVD adaptor. In fact, from the Paralan site, “HVD SCSI was rendered obsolete in the SPI-3 document of SCSI-3.”

So, what is one to do? Most systems with FW-SCSI adaptors need them for the increased throughput and capacity they provide over SE-SCSI. Paralan and others make HVD-LVD converters. The Paralan MH17 is a standalone converter that allows you to connect a string of LVD disk drives to an HP FW-SCSI adaptor. Pretty cool.

If you're on a Fibre Channel (FC) SAN environment and you would like to store your HP 3000 data on the SAN, then only the PCI-Bus A- and N-Class systems (under MPE/iX 7.5) support native Fibre Channel.

A quick word about configuring your new storage peripherals: Do not get confused by the seemingly endless list of peripherals in IODFAULT.PUB.SYS. And, do not worry if your particular disk or tape drive is not listed in IODFAULT.PUB.SYS. Part of the SCSI standard allows for the interrogation of the device for such things as ID, size, etc. DSTAT ALL shows the disk ID returned by the drive, not what you entered in SYSGEN.

When configuring a new drive, just use an ID that is close. In fact, there is really no need for more than two entries for disk drives in IODFAULT, one for SE drives and one for HVD drives, so as to automatically configure the correct driver. The same is true for tape drives.
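
As an illustration, adding one of those Seagate SE drives might look like this in SYSGEN. The LDEV and path are examples, and the ID is whichever close-enough entry you picked out of IODFAULT (shown here, hypothetically, as the drive's own model string):

:SYSGEN
sysgen> io
io> adev ldev=5 id=ST318418N path=52.3.0
io> hold
io> exit
sysgen> keep

After the KEEP and a reboot, DSTAT ALL will show whatever ID the drive itself reports, no matter what you configured.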

Summary

Disk drives and tape drives are the devices most likely to fail in your HP 3000 system. The good news is that you do not need to be stuck using old technology, nor are you limited to HP only peripherals. The bottom line is you have numerous options to satisfy your HP 3000 storage needs, both now and into the future.

Special thanks go to Denys Beauchemin, who contributed significant material to this article.

Posted by Ron Seybold at 07:41 PM in Hidden Value, Migration | Permalink | Comments (0)

March 09, 2018

Fine-Tune Friday: Account Management 101

Newswire Classic

By Scott Hirsh

As we board the train on our trip through HP 3000 System Management Hell, our first stop, Worst Practice #1, must be Unplanned Account Structure. By account structure I am referring to the organization of accounts, groups, files and users. I maintain that the worst of the worst practices is the failure to design an account structure, then put it into practice and stick with it. If instead you wing it, as most system managers seem to do, you ensure more work for yourself now and in the future. In other words, you are trapped in System Management Hell.

What’s the big deal about account structure? The account structure is the foundation of your system, from a management perspective. Account structure touches on a multitude of critical issues: security, capacity planning, performance, and disaster recovery, to name a few. On an HP 3000, with all of two levels to work with (account and group), planning is even more important than in a hierarchical structure where the additional levels allow one to get away with being sloppy (although strictly speaking, not planning your Unix account structure will ultimately catch up with you, too). In other words, since we have less to work with on MPE, making the most of what we have is compelling.

As system managers, when not dozing off in staff meetings, we spend the vast majority of our time on account structure-related activities: ensuring that files are safely stored in their proper locations, accessible only to authorized users; ensuring there is enough space to accommodate existing file growth as well as the addition of new files; and taking note of file placement and disk fragmentation, which occasionally, even today, can become performance issues.

In the unlikely event of a problem, we must know where everything is and be able to find backup copies if necessary. Periodically we are asked (perhaps with no advance notice) to accommodate new accounts, groups, users and applications. We must respond quickly, but not recklessly, as this collection of files under our management is now ominously referred to as a “corporate asset.”

You wouldn’t build a house without a design and plans; you wouldn’t build an application without some kind of specifications. So why do we HP 3000 system managers ignore the need for some kind of consistent logic in the way we organize our systems?

A logical, adaptable, documented account structure is a huge time saver in many respects. As most of us now manage multiple systems, we have no time to waste chasing down lost files, working with convoluted file sets, struggling to keep access under control or reacting to full volume sets.

I once had a conversation with a co-worker who was an avid outdoorsman. He was discussing rock climbing and I asked him about exciting rock climbing experiences. His reply: “In rock climbing, anything exciting is bad.” I would say the same thing about system management. By getting your account structure under control, you build a solid system management foundation that translates into much more pleasant work.

If this were a “best practices” column, we would discuss the best ways to clean up your system’s account structure. But this is worst practices, so let’s look at the no-nos.

No naming standards, bad naming standards

Oscar Wilde once said, “Consistency is the last resort of the unimaginative.” Do you think he was referring to HP 3000 system management? If so, not much has changed since Oscar’s day.

• In one account the jobs are located in group JCL. In another account, group JOBS. The developers keep “special” jobs in a group you’ve never heard of in the critical application account. And just to make things more interesting, all your so-called “production” jobs are kept in an account called JCL, containing all kinds of groups, including “TEMP.”

By having consistency across the accounts I control, I can easily find what I need when I need it. If jobs are always in the same group across accounts, I can LISTF @[email protected], etc. (see the sketch after this list). Backups and recoveries are easier, updates are easier, training new operators is easier. Sure, consistency is boring, but we must resist the lure of adrenaline.

• I’m going out on a limb here, but my guess is that your UDCs, the few you have left, are in a different place in every account. Why is that? And your system UDC (singular) is located in the SYS account, right? Because it’s the SYStem UDC, of course! Maybe it’s not such a bad thing to have another, non-SYS account for globally accessible files. What’s the catch? The system UDC file needs to be in the system volume set, for obvious reasons (learned that one the hard way).

• An MPE file name consists of a whopping maximum of eight characters. That should make every character count, right? So why do jobs that live in a group called JCL or an account called JCL all start with the letter J? File that under the department of redundancy department.

• We manage the systems, so we make the rules, right? Wrong. If we want the rules followed, if we want the best rules possible, we must get input and buy-in from all the others who will be expected to honor our rules. Ignoring users when it’s time to develop naming standards and other system policies is a classic Worst Practice, and a good way to ensure continued chaos. And don’t forget that upper management will need to be involved when a little “gentle” persuasion is required.
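
The payoff from that kind of consistency is one-line administration. A tiny sketch, with a hypothetical account layout and an assumed tape device:

:LISTF @[email protected],2
:FILE T;DEV=TAPE
:STORE @[email protected];*T;SHOW

One wildcard finds, or backs up, every production job stream on the system, with no chasing after groups called TEMP.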

Scott Hirsh is former chairman of the SIG-SYSMAN Special Interest Group.

Posted by Ron Seybold at 06:51 PM in Hidden Value, Homesteading | Permalink | Comments (0)