December 07, 2018

Memory and Disk Rules for Performance

Concentration
NewsWire Classic

By Jeff Kubler

You need to get management support for your efforts to keep your systems performing at their best. Memory and disk are two components of your performance picture under MPE/iX. Main Memory is the scratch pad for all the work the CPU performs. Every item of data that the CPU needs to calculate on or update must first be brought into Main Memory.

CPU used to manage Main Memory: The CPU must manage memory. It cycles through the memory pages, marking some as Overlay Candidates (meaning new data from disk may be placed there), noting that others are in continued use, and swapping still others out to virtual, or what is called transient, storage. Swapping to disk occurs when data is in continued use but a higher-priority process needs room for its data. To accommodate the higher-priority process, the Memory Manager swaps the lower-priority process's memory out to disk. The more of this activity the Memory Manager performs, the more CPU it consumes. Therefore the percentage of CPU used to manage memory is the measurement we use.

Page Faults per Second: A Page Fault occurs each time a memory object is not found in memory. The threshold for the number of Page Faults per second that can be incurred before a memory problem is indicated varies with the size and the power of the CPU. Larger machines can handle more Page Faults per second while a smaller box will encounter problems with far fewer.

An exceptional number of Page Faults should never be used as the sole indicator of memory problems; when observed, it should be tested against the Memory Manager percentage. If both agree, you have a memory shortage. I have observed some strange behavior with Page Faults, so the rate cannot stand alone as an indicator of memory shortage.

The number of Page Faults per second and the amount of CPU needed to manage memory are always evaluated in conjunction with each other. That is to say, a high Page Fault rate is not considered a problem if the Memory Manager percentage is not above 4 percent.

The Disk Environment is usually referred to as Secondary Storage. This is where all the data needed for system use is stored. Since Main Memory is not large enough to store all of the data that will be needed by all the processes, there must be a location for this larger pool of data. In the MPE/iX environment a great attempt was made to limit the impact of the Disk Environment so that it could not be the bottleneck that it once was in the Classic environment. Even though the Disk Environment does not have the significance it once had, this area can still be a bottleneck. As the CPU speeds increase, bottlenecks will become more significant.

Several different factors can affect the Disk Environment. One of these is data locality, which comes in two forms: data locality within IMAGE datasets, and data locality across the disk itself.

Data locality across Disk: This refers to the location of the separate pieces of files (called extents). When files are placed on the disk, they can be placed in contiguous sectors, or they can be scattered across non-contiguous locations, even on many different disks. Files that are not in contiguous locations are said to be fragmented. The advantage of contiguous placement is greater efficiency in retrieving data: when a file is read, the head movement of the disk drive is minimal. The head moves to the location and the retrieval begins.

As a disk fills up, the system cannot find one contiguous location in which to build a new file. Therefore, the system breaks the file up into extents and places them wherever it can. A system reload will put files back into contiguous locations (usually starting at the location of the file's file label), or products such as Lund Performance Solutions' De-Frag/X can be used to put the files back into contiguous locations.

Operating systems allocate disk space in chunks as they create and expand files and transient disk space (swap areas, etc.). When files are purged, these chunks are released for reuse. Over time the disk space may end up fragmented into many small pieces, which can degrade both the performance and the reliability of the system.

To observe and correct fragmentation on MPE, you can use the De-Frag/X product from Lund Performance Solutions or the Contigvol command of Volutil. The latter is stable and reliable, but requires multiple passes to get the best results.
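
For the Volutil route, here is a minimal sketch of a session. The volume name is an illustrative assumption; substitute a member of your own volume set, and repeat the command over several passes until the free space stops improving:

:VOLUTIL
volutil: CONTIGVOL MPEXL_SYSTEM_VOLUME_SET:MEMBER2
volutil: EXIT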

Data locality within IMAGE datasets is the other area of major concern. There are two different types of datasets to be concerned with: detail datasets and master (manual or automatic) datasets.

The Detail Datasets: this type of set holds the day-to-day data input. Detail sets begin with nothing in them. When a record is added, 1 is added to something called the high-water mark, a number that tells how many records have been in the set, and the record is placed in the set.

The problem is that IMAGE automatically reuses space that is given up when a record is deleted. This space is often called the delete chain. New records are placed in the most recent location available on the delete chain. This means that new records do not share physical locality with the rest of the set, and may be far removed from the records they logically belong with.

The ideal state for a detail dataset is one where the detail entries are sorted by the key field. This allows the data to be retrieved in the fewest I/Os, making efficient use of the MPE system's prefetching of data. When this is not the case, we can measure the dataset's inefficiency with something called the Elongation Factor. This is simply a measure of how many more I/Os the user must perform to retrieve the desired data: if a chain of entries that could be read in three I/Os is scattered so that it takes 30, the elongation factor is 10.

The Master Datasets are organized around unique identifiers (key field values). There are two types of master sets: manual masters and automatic masters. Manual masters have user-entered master entries, while automatic masters have entries placed in them automatically to accommodate access to detail records. The issue of importance to performance here is something called the hashing algorithm, the method used by the database to calculate the location of the next record placed in the dataset. The intent is to distribute the master set's entries as evenly as possible.

The hashing algorithm uses the capacity of the set in its calculation. A poor capacity, or one that is not large enough, will result in an unevenly distributed dataset. A poor capacity is most easily described as one that is not a prime number. It raises the odds that when the hashing algorithm calculates a location, a record will already exist there. When this happens a secondary position must be calculated, and when secondaries are placed in another block within the dataset, another I/O must occur to retrieve the needed data. Since I/O to disk is the slowest type of access, we want to avoid this at all costs.
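
A quick CI sketch illustrates why the capacity matters. IMAGE's actual hashing varies by key type, but for binary keys it reduces to the key value mod the capacity, which these illustrative commands imitate:

setvar cap 100
setvar key 1000
while key <= 1500
   echo key !key lands in slot ![key mod cap + 1] of !cap
   setvar key key + 100
endwhile

With a capacity of 100, every key in this series collides into slot 1. Change cap to the prime 101, and the same keys spread across six different slots.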

Posted by Ron Seybold at 02:10 PM in Hidden Value, Homesteading | Permalink | Comments (0)

Get e-mail notice when the NewsWire blog gets a new entry. Just say "Blog Me" in a message to [email protected].

November 16, 2018

Fine-tune: 3000 support rescues, MPE/iX version matrix, network printer software

Rescued-boat-people
Steve Douglass of United Technologies Aerospace Systems writes, "We have an A-Class 400-100 machine that would only stay up about an hour before it autobooted. This machine was simply used for archived data lookup from an old ERP system. After trying simple fixes like reseating memory and checking connections we still had the same problem."

"We had no support agreement, and no one wanted to pay for a third-party support company to perform a diagnosis and fix, so we powered the system off. Of late there is interest in resurrecting this machine, and someone may be willing to foot the bill. We've researched and found Pivital Solutions and the Ideal Computer Services Group. Are there other recommendations?

John Clogg reports

We currently use Sherlock Services and are happy with the support they provide. I have also used Ideal Services and can recommend them with confidence.

Jim Maher of Saratoga Computers adds

We still service all of the HP 1000, 3000 and e3000 systems. Call anytime.

We replaced a printer recently and we can't get the new one to play nice with the 3000.  It's a LaserJet M608. When sending output to it, it prints a page or two and hangs. The spool file remains in a "print" state. The only way to reset it is to do a STOPSPOOL followed by a couple of ABORTIOs. The next time I start the spooler, the same thing happens, regardless of what I'm printing. What things should I check?

Tracy Johnson says

Try adding SNMP_SUPPORTED = FALSE (or TRUE). You have a 50/50 chance either way. Sometimes you just have recalcitrant printers that won't cooperate with the HP 3000. Consider getting ESPUL from Richard Corn, or Minisoft's licensed version called Netprint.

Jim English adds

We use Netprint and eFormz from Minisoft. The eFormz is installed on a Windows server. Not all of our printers go through Netprint, just the ones that print forms or barcodes. We recently installed a newer HP printer and had the same issue you did. I set it up in Netprint and eFormz and it works great now.

Netprint by itself may solve your issue. I set up the printer in eFormz to print receipt travelers, which may have barcodes on them.

Is there a support matrix document that shows the HP 3000 boxes and what versions of MPE they can run? I'm trying to find all the 3000 boxes that support MPE/iX 6.0.

Donna Hofmeister reports

All 9x8, 9x7 and 99x boxes support 6.0. No A-Class or N-Class 3000s support 6.0.

Posted by Ron Seybold at 06:30 AM in Hidden Value, Homesteading | Permalink | Comments (0)

November 09, 2018

Fine-Tune: Test for disasters in any season

Test-siren
NewsWire Classic

Editor's Note: In October of 2001 the world was working in the aftermath of the 9/11 attacks. Our Worst Practices columnist Scott Hirsh wrote this advice about the need to test for disasters. Another crisis was going to rise up for 3000 owners just a few weeks after this article appeared, this one triggered by HP. Regardless of where your datacenter is focused, it's always a good practice to test.

This Is Not a Test

By Scott Hirsh

For those of us in the United States entrusted with a company’s information resources, the events of September 11 changed everything. Before, our business continuity and disaster recovery plans were primarily concerned with so-called “acts of God.” But we must now plan for the most improbable human acts imaginable. Who among us, prior to September 11, had a plan that took into account multiple high-rise office buildings being destroyed within minutes of each other? As you read this, the insurance industry is revising its assumptions. Likewise, we must now reconsider our approach to managing and protecting the assets for which we are responsible. Never before has the probability of actually needing to execute our recovery plans been so great.

As of this writing there have already been numerous business continuity and disaster recovery articles in the computer press. By now we understand the distinction between keeping the business going – not just IT, but also the whole business – and recovering after some (hopefully minor) interruption. And we’ve covered the issue of risk, where all the trade-offs and costs are negotiated. This whole topic was explored anew in the last few months, but it is still worthwhile to emphasize some early lessons of the attacks, from which we are still recovering.

It Had Better Work

Worst Practice 1: Trying to Fake It — I was visiting a friend’s datacenter recently, where I was told about a recent audit. This friend’s company spent the whole time trying to fake all the audit criteria: disaster recovery preparedness, security, audit trails, etc. At the risk of sounding like your parents, whom does this behavior really hurt? An audit is an ideal opportunity to validate all the necessary hard work required to run a professional datacenter. And should you ever be subjected to attack, electronic or otherwise, you know that your datacenter will survive.

If you didn’t get it before, you’d better get it now: Faking it is unacceptable. Chances are, at some point you will be required to do a real, honest-to-goodness recovery. And if you think you’re safe just because there may not be very many hijacked planes running into buildings such as yours, think again. The threats to your datacenter are diverse and numerous. And, by the way, violent weather, earthquakes and other natural disasters are still there too.

Worst Practice 2: Not Testing — Once you’re serious about continuity and recovery, not only will you plan, but you’ll test that plan often. There are lots of reasons to test your recovery capability often. Among them are: the ability to react quickly in a crisis; catching changes in your environment since your last test; accommodating changes to staff since your last test. A real recovery is a terrible time to do discovery.

Worst Practice 3: Not Documenting — One of the biggest problems with disasters is no warning. That’s why so many tests are a waste of time. Anyone can recover when you know exactly when and how. The truly prepared can recover when caught by surprise. Since you won’t get any warning – except, perhaps, with some natural disasters – you’ll want to have current, updated procedures. Since you’ll probably be on vacation (or wish you were) when disaster strikes, make sure the recovery procedures are off-site and available. If you’re the only one who knows what to do, even if you never take a day off there still won’t be enough of you to go around at crunch time.

Increasing the Odds of Recovery

Worst Practice 4: Taking Too Long — At this point in technology, there are two main ways to deal with a disaster: fail-over and reconstruction. With fail-over, you are replicating data between your main site and a recovery site. These sites can be relatively near each other – across town or perhaps in an adjoining state – or far away. This kind of remote clustering, if you will, is what the largest and most critical institutions use, and the cost is considerable. However, the cost of not doing it is considerably more.

Reconstruction is more about recovery than continuity. I am guessing that the vast majority of e3000 shops base their recovery plans on recalling tapes from a vault (e.g., Iron Mountain) to a recovery site, then restoring their data either to a bare machine or one on which only MPE has been installed. This was certainly true for my own operation, as my management always deemed this less expensive method “adequate.”

But that was then. Today, the amount of data that must be reloaded is so massive that the time to recover renders this method all but worthless. True, your plan can call for a critical subset of data to be restored (not the entire data warehouse). But even current data can now stretch into the terabytes, once you include the applications, utilities, etc.

So the point here is to make sure your recovery methodology is practical from a business standpoint, as well as a technical standpoint. You don’t want to be in the position of estimating “just three more days” before you’re up and running.

Worst Practice 5: Not Recovering a Complete Environment — As the state of the art advances, some technology is left behind. We’ll keep it succinct here: If you need to keep an old technology alive, you may need to provide some or all of the solution yourself. Don’t expect the recovery site to stock or maintain every peripheral ever made just because you have one esoteric requirement. And don’t forget to keep backup copies of any obsolete software packages as well.

Another aspect to this issue, recently discovered at a customer site, is the fact that diverse platforms are now highly integrated. It’s not enough just to recover the e3000. The non-e3000 systems that share data feeds must also be recovered. And don’t forget any outside data sources either. Again, if you’re faking it, you can declare victory when you’ve reconstructed an e3000 at the recovery site. In reality, that only counts if the e3000 system can support the business on its own without any external feeds.

Worst Practice 6: Ignoring the Human Factor — Even the best plans don’t execute themselves. Keep in mind who will be doing what and how things will get done if key individuals are unable to perform their tasks. As we know, families come first, which is proper: so we mustn’t lose sight of our humanity in times of crisis. Any recovery is hard work. That counts double when there are casualties.

Reassess Your Assumptions

Worst Practice 7: A Defeatist Attitude — If you’ve been subjected to the “fake it” mentality, you’re probably demoralized. After all, who among us just wants to go through the motions? Well, it’s now a whole new world, and you have a really good shot at doing things right. But you need to forcefully make your case to those who didn’t take contingency planning seriously in the past. By the time you read this there may be stories about companies that unfortunately couldn’t recover from the September 11 attacks. We can emerge from this atrocity stronger if we do some honest introspection. Every rational businessperson should now be willing to do proper planning. If you can get over the bad practices of the past, you can position yourself and your business to be survivors.

Worst Practice 8: Datacenter Placement — As much as I enjoyed the view from my 29th floor datacenter, it’s pretty obvious now that datacenters don’t belong in certain places – high-rise buildings among them. Besides the obvious prohibitive cost of floor space, there are safety and security issues not obvious until recent events.

I have visited many co-location facilities in the past year, and they all had several things in common:

1. They were in the low-rent district.

2. They were very difficult to find, as they were essentially unmarked.

3. They were very secure (at least relative to downtown datacenters), both physically and electronically.

4. They were redundant up the wazoo.

If this does not describe your datacenter, then perhaps it’s time to consider relocation. Let’s face it, even if there are good reasons why your datacenter needs to be right downtown, I’ll bet your recovery site is in the middle of nowhere. That should tell you something.

Hope for the Best

We’re currently in reactive mode. We’ve now seen one type of unimaginable act, using airliners as missiles. For those unlucky enough to be on the front lines of that atrocity, there was no way to plan for that series of events. And it’s likely that the next event will also be difficult to imagine, and hence plan for. So even the best plans require a great deal of luck, as even the best plan is useless if there is widespread devastation beyond your control. We should be honest about those aspects of business continuity and recovery that are within our control. We must be truly prepared. But we can still hope that we never need to actually use those plans. Not like we did after September 11. At least that’s the hope.

Posted by Ron Seybold at 06:14 PM in Hidden Value, Homesteading | Permalink | Comments (0)

November 02, 2018

Fine-Tune: Ensure Logical Data Consistency

Database_design_concepts
NewsWire Classic

The MPE/iX Transaction Manager for IMAGE does not guarantee logical consistency of your data. How do you ensure logical consistency? Use DBXBEGIN and DBXEND calls around all the DBPUT, DBUPDATE and DBDELETE calls that you make for your logical transaction. Yes, the definition of a logical transaction is up to the programmer.

There can be a lot of confusion about logical consistency, mostly because IMAGE kept adding logging and recovery features over its years of development. Gavin Scott gives a clear explanation of the state of affairs.

It’s amazing how much superstition exists surrounding this kind of stuff, and how many unnecessary rituals and sacrifices are performed daily to appease the mythical pantheon of data integrity gods. Real broken chains are supposed to be impossible to achieve with IMAGE on MPE/iX, no matter what application programs do, or how they are aborted, or how many times the system crashes!

The Transaction Manager provides absolute protection against internal database inconsistencies, as long as there are no bugs in the system and as long as the hardware is not corrupting data. No action or configuration is required on the part of the user.

Logical inconsistencies (order detail without an associated order header record, for example) can easily be created by aborting an application that’s in the middle of performing a database update that spans multiple records. Of course, IMAGE doesn’t care whether your data is logically correct or not, that’s the job of application programmers.

Using DBBEGIN/DBEND will have no effect whatsoever on logical integrity, unless you actually run DBRECOV to roll forward or roll back the database to a consistent point every time you abort a program or suffer any other failure.

By using DBXBEGIN/DBXEND XM style transactions, you can extend IMAGE’s guarantee of physical integrity to the logical integrity of your database. The system will ensure that no matter what happens, either all changes inside a DBX transaction will be applied, or none of them will be. Of course, it’s still possible to use this feature incorrectly (locking strategies are non-trivial as you need to lock the data that you read as well as that which you intend to write in many cases).

HP introduced a feature, far back in the MPE V days, called Intrinsic-Level Recovery (ILR). ILR can still be enabled for a database. This was sort of a mini-XM that forced updates to disk each time an intrinsic call completed, in order to ensure structural integrity of the database in the face of system failures.

I believe that on MPE/iX, enabling ILR for a database does something really nasty like forcing an XM post after every update intrinsic call, which is a serious performance problem. ILR is no longer required on MPE/iX as XM will ensure integrity without it. With ILR you might be guaranteed that every committed transaction will survive a system abort, whereas without it XM might end up having to roll back the last fraction of a second’s worth of transactions. For almost any application this difference is negligible. Do not turn ILR on!
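
Checking and changing ILR is a DBUTIL operation. Here is a sketch, assuming a database named ORDERS:

:RUN DBUTIL.PUB.SYS
>>SHOW ORDERS FLAGS
>>DISABLE ORDERS FOR ILR
>>EXIT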

There are more complexities if your application performs transactions that affect multiple databases or databases and non-database files. It’s possible to do multi-database IMAGE transactions, but only if the databases reside on the same volume set, I believe.

Posted by Ron Seybold at 01:44 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 26, 2018

Command file tests 3000s for holidays

Holiday-Calendar-Pages
Holiday season is coming up. It's already upon us all at the grocery stores, where merchandising managers have cartons of Thanksgiving decorations waiting their turn. The Halloween stuff has to clear away first.

Community contributor Dave Powell has improved upon a command file created by Tracy Pierce to deliver a streamlined way to tell an HP 3000 about upcoming holidays. Datetest tells whether a day is a holiday. "I finally needed something like that," Powell says, "but I wanted the following main changes:

1. Boolean function syntax, so I could say :if holiday() then instead of

:xeq datetest
:if WhichVariableName = DontRememberWhatValue then

and also because I just think user-functions are cool.

2. Much easier to add or disable specific holidays according to site-specific policies or even other countries’ rules. (Then disable Veterans Day, Presidents Day and MLK Day, because my company doesn’t take them.)

3. Make it easy to add special one-off holidays like the day before/after Christmas at the last minute when the company announces them.

Along the way, I also added midnight-protection and partial input date-checking, and made it more readable, at least to me."

Powell, who's contributed plenty of command files to the community through the HP 3000 newsgroup, says that most of the fun came in the day-of-week calculation.

I didn’t understand that part of Tracy’s script, or trust myself to adapt it without messing up, so I found a second method and used both, with a warning if the results didn’t agree. Surprise, surprise, they disagree about 12/25/2100, although they agree on dates I tested within the expected lifespan of MPE. So I shoveled in a third formula and found a day-of-week calculator spreadsheet, both of which agree with the second method. So anyone who uses Tracy’s original command file and plans to still run it in 2100 might need to make a change.

He offered what he called a preliminary version of the new datetest, which has been checked by Allegro's Steve Cooper.


option nolist
parm CCYYMMDD    =   ""
if   bound (HOL_ERRORS)    or    bound (HOL_DAY)
    deletevar   HOL_@
endif
setvar   HOL_ERRORS  0
if   "!CCYYMMDD"     =   ""
    setvar  HOL_CYMD    HPYYYYMMDD
    setvar  HOL_DAY     !HPDAY
    if  HOL_CYMD        <>  HPYYYYMMDD
#            if the date has changed, we just hit midnite and the
#            day-of-week we just set might be the new day; in this
#            case set the date & day-of-week again, and we should
#            be ok (unless the following 2 commands take 24 hours :)
        setvar  HOL_CYMD    HPYYYYMMDD
        setvar  HOL_DAY     !HPDAY
    endif
else
    setvar  HOL_CYMD    "!CCYYMMDD"
    if  not numeric (HOL_CYMD)
        echo **date parm, if entered, must be numeric**
        setvar  HOL_ERRORS  HOL_ERRORS + 1
    endif
    if  len (HOL_CYMD)   <>  8
        echo **date parm must be exactly 8 digits, unless omitted**
        setvar  HOL_ERRORS  HOL_ERRORS + 1
    elseif  numeric (HOL_CYMD)
        if  rht (HOL_CYMD, 2) > "31"
            echo **last 2 digits of date parm can't be more than 31**
            setvar  HOL_ERRORS  HOL_ERRORS + 1
        elseif  rht (HOL_CYMD, 2) = "00"
            echo **last 2 digits of date parm can't be "00"**
            setvar  HOL_ERRORS  HOL_ERRORS + 1
        endif
        if  str (HOL_CYMD, 5, 2) > "12"
            echo **bytes 5 & 6 of date parm can't be more than 12**
            setvar  HOL_ERRORS  HOL_ERRORS + 1
        elseif  str (HOL_CYMD, 5, 2) = "00"
            echo **characters 5 & 6 of date parm can't be "00"**
            setvar  HOL_ERRORS  HOL_ERRORS + 1
        endif
    endif
    if  HOL_ERRORS      >   0
        echo **exiting because the date-parm was not a valid**
        echo **8-digit date in yyyymmdd format **
        return FALSE
    endif
endif

#    -------------------------------------------------------
#    do not casually modify above here
#
#    Take any special / unofficial holidays here
#    OK to replace any dates that are past with the date of a
#    holiday the company just announced (Jewish new year,
#    days before / after Christmas & New Years, etc, etc)

if   HOL_CYMD="20080929"  or  HOL_CYMD="20081008" &
or   HOL_CYMD="20081226"  or  HOL_CYMD="20090102"
    echo It's a special company holiday :)
    return  TRUE
endif

#    do not casually modify below here
#    -------------------------------------------------------

setvar   HOL_YYYY    str (HOL_CYMD, 1, 4)
setvar   HOL_MM      str (HOL_CYMD, 5, 2)
setvar   HOL_DD      str (HOL_CYMD, 7, 2)

#
#    Set day of week, unless already set because processing "today"
#
if   not     bound (HOL_DAY)
#    1st, the method in the original "datetest" command file
    setvar  HOL_DAY str("000031059090120151181212243273304334", &
            !HOL_MM * 3 - 2, 3)
    setvar  HOL_DAY     !HOL_DAY + !HOL_DD
    IF  !HOL_MM > 2    and   ( !HOL_YYYY / 4 * 4 = !HOL_YYYY )
        setvar  HOL_DAY      HOL_DAY + 1
    ENDIF
    setvar  HOL_YWK     !HOL_YYYY - 1
    setvar  HOL_DAY     !HOL_DAY + ( !HOL_YWK / 400 ) * 146097
    setvar  HOL_YWK     !HOL_YWK  mod  400
    setvar  HOL_DAY     !HOL_DAY - ( !HOL_YWK / 100 ) * 36524
    setvar  HOL_YWK     !HOL_YWK mod 100
    setvar  HOL_DAY     !HOL_DAY + ( !HOL_YWK / 4 ) * 1461
    setvar  HOL_YWK     !HOL_YWK mod 4
    setvar  HOL_DAY     !HOL_DAY + ( !HOL_YWK * 365 )
    setvar  HOL_DAY     ( HOL_DAY mod 7 ) + 1
    deletevar HOL_YWK

#    Next, the method posted to the 3000-l by Mike Hornsby 06/04/2004
#    except, add 1 at the end because his was 0-6 and we need
#    1-7.
    setvar  HOL_XYR !HOL_YYYY-((12-!HOL_MM)/10)
    setvar  HOL_XMONTH !HOL_MM+(((12-!HOL_MM)/10)*12)
    setvar  HOL_XDAY !HOL_DD+(!HOL_XMONTH*2)+(((!HOL_XMONTH+1)*6)/10)
    setvar  HOL_XLEAP_YR (HOL_XYR/4) - (HOL_XYR/100) + (HOL_XYR/400)
    setvar  HOL_XDAY (HOL_XDAY+HOL_XYR+HOL_XLEAP_YR+1) mod 7  +  1

#    Next, day-of-week with my adaptation of a "Zeller" formula
#    off the internet.
    if  HOL_MM      <   "03"
        setvar  HOL_ZMONTH  !HOL_MM  +  12
        setvar  HOL_ZYEAR   !HOL_YYYY   -   1
    else
        setvar  HOL_ZMONTH  !HOL_MM
        setvar  HOL_ZYEAR   !HOL_YYYY
    endif
    setvar  HOL_ZDAY    ( &
        ((13 * HOL_ZMONTH + 3) / 5)  +  !HOL_DD  +  HOL_ZYEAR &
    +   (HOL_ZYEAR/4) - (HOL_ZYEAR/100) + (HOL_ZYEAR/400) &
    +   1 )     mod 7   +   1

#    Now, see if the day-of-week calcs agree
    if  HOL_DAY     <>  HOL_XDAY &
    or  HOL_DAY     <>  HOL_ZDAY &
    or  HOL_ZDAY    <>  HOL_XDAY
        setvar  HOL_ERRORS  HOL_ERRORS + 1
        echo **day-of-week error**
        echo    HOL_DAY   =   !HOL_DAY
        echo    HOL_XDAY  =   !HOL_XDAY
        echo    HOL_ZDAY  =   !HOL_ZDAY
    endif
    setvar  HOL_DAY     HOL_ZDAY
    deletevar   HOL_X@,  HOL_Z@
ENDIF

#
#    Now check for specific regular holidays, month-by-month.
if   HOL_MM  =   "01"
    if  HOL_DD  =   "01"
        echo It's New Years Day
        return  TRUE
    endif
    if  ( !HOL_DAY=2  and  !HOL_DD>=15  and  !HOL_DD<=21 )
        echo (It's Martin Luther King day - but do we get it?)
#        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "02"
    if  (!HOL_DAY=2  and  !HOL_DD>=15  and  !HOL_DD<=21)
        echo (It's President's Day - but do we get it?)
#        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "05"
    if  (!HOL_DAY=2  and  !HOL_DD>=25  and  !HOL_DD<=31)
        echo It's Memorial Day
        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "07"
    if  HOL_DD  =   "04"
        echo It's July 4th
        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "09"
    if  ( !HOL_DAY=2  and  !HOL_DD>=1  and  !HOL_DD<=7 )
        echo It's Labor Day
        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "11"
    if  HOL_DD  =   "11"
        echo (it's Veterans Day - but do we get it ?)
#        return  TRUE
    endif
    if  ( !HOL_DAY=5  and  !HOL_DD>=22  and  !HOL_DD<=28 )
        echo It's Thanksgiving
        return  TRUE
    endif
    if  ( !HOL_DAY=6  and  !HOL_DD>=23  and  !HOL_DD<=29 )
        echo It's the day after Thanksgiving
        return  TRUE
    endif
    return  FALSE
elseif   HOL_MM  =   "12"
    if  HOL_DD  =   "25"
        echo It's Christmas
        return  TRUE
    endif
    return  FALSE
endif
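
As a usage sketch, a nightly scheduling job could call the command file as a function (the job file name here is hypothetical; omit the date parameter to test today):

if datetest("20181122") then
    echo Holiday, skipping the nightly stream
else
    stream nightjob
endif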

Posted by Ron Seybold at 05:57 AM in Hidden Value, Homesteading | Permalink | Comments (0)

October 19, 2018

Fine-Tune: Get the right time for a battery

CMOS-clock-battery
Two weeks from now the world will manage the loss of an hour, as Daylight Saving Time ends. The HP 3000 does time shifting of its system clock automatically, thanks to patches HP built during 2007. But what about the internal clock of a computer that might be 20 years old? Components fail after a while.

The 3000's internal time is preserved using a small battery, according to the experts out on the 3000 newsgroup. This came to light in a discussion about fixing a clock gone slow. A few MPE/iX commands and a trip to Radio Shack can maintain a 3000's sense of time.

"I thought the internal clock could not be altered," said Paul English. "Our server was powered off for many months, and maybe the CMOS battery went flat." The result was that English's 3000 showed Greenwich Mean Time as being four years off reality. CTIME reported for his server:

* Greenwich Mean Time : THU, JUN 17, 2004, 11:30 AM   *
* GMT/MPE offset      : +-19670:30:00                 *
* MPE System Time     : THU, SEP 10, 2009,  2:00 PM   *

Yup, that's a bad battery, said Pro 3k consultant Mark Ranft. "It is cheap at a specialty battery store," he said, "and can be replaced easily, if you have some hardware skills and a grounding strap." Radio Shack offers the needed battery.

But you can also alter the 3000's clock which tracks GMT, he added.

"The internal clock can be set or reset at bootup (the method varies depending on the hardware), or by using the MPE SETCLOCK date=xx/xx/xx;time;NOW command, in conjuction with SETCLOCK ;CANCEL.  Follow these by the SHOWCLKS command. It usually takes me a couple of attempts to get it, but you should be able to straighten this out without even having to reboot."

A few customers warned that utility software will sometimes fail to start up if a bad battery has pulled the internal clock too far off the system clock. Tracy Johnson explained:

Collateral damage may include some third party software going non-operational. I have at least one software package whose license goes bad when the offset gets too large (think years).  When I fix the offset to a reasonable number (within a day or two), then the software works again.

Posted by Ron Seybold at 08:01 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 12, 2018

Friday Fine-Tune: Speeding up backups

Spinning-wheels
We have a DLT tape drive. Lately it wants to take 6-7 hours to do a backup instead of its usual two or less.  But not every night,  and not on the same night every week.  I have been putting in new tapes now, but it still occurs randomly. I have cleaned it. I can restore from the tapes no problem. It doesn’t appear to be fighting some nightly process for CPU cycles. Any ideas on what gives?

Giles Schipper replies

Something that may be causing extended backup time is excessive IO retries, as the result of deteriorating tapes or tape drive.

One way to know is to add the ;STATISTICS option to your STORE command. This will show you the number of IO retries as well as the actual IO rate and actual volume of data output.
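
A minimal sketch of a full backup with statistics enabled, assuming a tape device class of TAPE:

FILE T;DEV=TAPE
STORE @.@.@;*T;SHOW;STATISTICS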

Another possibility is that your machine is experiencing other physical problems, resulting in excessive logging activity and abnormal CPU interrupt activity. That activity depletes your system resources, extending backup times.

Check out the following files in the following Posix directories:

/var/stm/logs/os/*
/var/stm/logs/sys/*

If they are very large, you indeed may have a hardware problem — one that is not "breaking" your machine, but simply "bending" it.

Posted by Ron Seybold at 07:25 PM in Hidden Value | Permalink | Comments (0)

September 21, 2018

Fine Tune: Storing in Parallel and to Tapes

Does the MPE/iX Store-to-Disc option allow for a ‘parallel store,’ analogous to a parallel store to tape? For example, when a parallel store to tape is performed, the store writes to two or more tape drives at the same time. Is there a parallel store-to-disc option that allows for the store to write to two or more disc files at the same time (as opposed to running multiple store-to-disc jobs)?

Gavin Scott and Joe Taylor reply

Yes, the same syntax for parallel stores works for disk files as well as tape files. I really don’t know if you would get any benefit from this, but if you went to the trouble of building your STD files on specific disks, then it might be worthwhile.
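
Here is a sketch of the parallel store-to-disc form, assuming BACKUP1 and BACKUP2 are disc files you have built on the drives you want, and that the vertical bar separates parallel devices as it does for tape:

FILE D1=BACKUP1;DEV=DISC
FILE D2=BACKUP2;DEV=DISC
STORE @.@.@;*D1|*D2;SHOW;STATISTICS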

What is the recommended life or max usage of DLT tapes?

Half a million passes is the commonly used number for DLT III. One thing to remember is that when they talk about the number of passes (500,000 passes), it does not mean number of tape mounts.

For SuperDLT tapes, the tape is divided into 448 physical tracks of 8 channels each giving 56 logical tracks. This means that when you write a SuperDLT tape completely you will have just completed 56 passes. If you read the tape completely, you will have done another 56 passes.

The DLT IV tapes (DLT7000/8000) have a smaller number of physical and logical tracks, but the principle is the same. The number of passes for DLT IIIXT and DLT IV tapes is 1,000,000. The shelf life is 30 years for the DLT IIIXT and DLT IV tapes, and 20 years for the DLT III.

Our DDS drive gets cleaned regularly. Our tapes in rotation are fairly old, too. However, we are receiving this error even when we use brand new tapes. 

STORE ENCOUNTERED MEDIA WRITE ERROR ON LDEV 7 (S/R 1454)

The new tapes are Fuji media, not HP like our old ones.

John Burke replies:

Replace that drive. DDS drives are notorious for failing. Also, the drive cannot tell whether or not you are using branded tapes. I’ve used Fuji DDS tapes and have found them to be just as good as HP-branded tapes (note that HP did not actually manufacture the tapes). I have also gotten into the habit of replacing DDS tapes after about 25 uses. When compared to the value of a backup, this is a small price to pay.

Posted by Ron Seybold at 07:52 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 14, 2018

Use Command Interpreter to program fast

NewsWire Classic

By Ken Robertson

An overworked, understaffed data processing department is all too common in today’s ever belt-tightening, down-sizing and de-staffing companies.

Running-shoesAn ad-hoc request may come to the harried data processing manager. She may throw her hands up in despair and say, “It can’t be done. Not within the time frame that you need it in.” Of course, every computer-literate person knows deep down in his heart that every programming request can be fulfilled, if the programmer has enough hours to code, debug, test, document and implement the new program. The informed DP manager knows that programming the Command Interpreter (CI) can sometimes reduce that time, changing the “impossible deadline” into something more achievable.

Getting Data Into and Out of Files

So you want to keep some data around for a while? Use a file! Well, you knew that already, I’ll bet. What you probably didn’t know is that you can get data into and out of files fairly easily, using IO re-direction and the print command. IO re-direction allows input or output to be directed to a file instead of to your terminal. IO re-direction uses the symbols ">", ">>" and "<". Use ">" to re-direct output to a temporary file. (You can make the file permanent if you use a file command.) Use ">>" to append output to the file. Finally, use "<" to re-direct input from a file:

echo Value 96 > myfile
echo This is the second line >> myfile
input my_var < myfile
setvar mynum_var str("!my_var",7,2)
setvar mynum_var_2 !mynum_var - (6 * 9 )
echo The answer to the meaning of life, the universe
echo and everything is !mynum_var_2.

After executing the above command file, the file Myfile will contain two lines, “Value 96” and “This is the second line.” (Without quotes, of course.) The Input command uses IO re-direction to read the first record of the file, and assigns the value to the variable my_var. The first Setvar extracts the number from the middle of the string, and the next line uses that value in an important calculation (96 minus 54 leaves 42).

How can you assign the data in the second and consequent lines of a file to variables? You use the Print command to select the record that you want from the file, sending the output to a new file:

print myfile;start=2;end=2 > myfile2

You can then use the Input command to extract the string from the second file.
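
Continuing the example, the second record now sits in Myfile2, and Input can pick it up:

input my_var2 < myfile2
echo The second line was: !my_var2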

Rolling Your Own System Variables

It’s easy enough to create a static file of Setvar commands that gets invoked at logon time, and it’s not difficult to modify the file programmatically. For example, let’s say that you would like to remember a particular variable from session to session, such as the name of your favorite printer. You can name the file that contains the Setvars, Mygvars. It will contain the line: setvar my_printer “biglaser”

The value of this variable may change during your session, but you may want to keep it for the next time that you log on. To do this, you must replace your normal logoff procedure (the Bye or Exit command) with a command file that saves the variable in a file, and then logs you off.

byebye
purge mygvars > $null
file mygvars;save
echo setvar my_printer "!my_printer" > *mygvars
bye

Whenever you type byebye, the setvar command is written to Mygvars and you are then logged off. The default close disposition of an IO re-direction file is TEMP, which is why you have to specify a file equation. Because you are never certain that this file exists beforehand, doing a Purge ensures that it does not.

Posted by Ron Seybold at 07:14 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 07, 2018

Queue up those 3000 jobs with MPE tools

NewsWire Classic

By Shawn Gordon

A powerful feature of MPE is the concept of user-defined job queues. You can use these JOBQ commands to exert granular job control that is tightly coupled with MPE/iX. HP first introduced the commands in the 6.0 release.

For example, you only want one datacomm job to log on at a time, but there are 100 that need to run. At the same time you need to let users run their reports, and you want to allow only two compile jobs to run at a time. Normally you would set your job limit down to 1, then manually shuffle job priorities around and let jobs go. In the new multiple job queue controlled environment, you can define a DATACOMM job queue whose limit was 1, an ENDUSER job queue whose limit was 6 (for example), and a COMPILE job queue whose limit was 2. You could also set a total job limit of 20 to accommodate your other jobs that may need to run.

Three commands accommodate the job queue feature:

NEWJOBQ qname [;limit=n]
PURGEJOBQ qname
LISTJOBQ

The commands LIMIT, ALTJOB, JOB and STREAM all include the parameter ;JOBQ=.

As an example, I am going to create a new job queue called SHOWTIME that has a job limit of 1. You will notice the job card of the sample job has a JOBQ parameter at the end to specify what queue it is to execute in.

Alternatively I could have said STREAM SHOWTIME.JCL;JOBQ=SHOWTIME to put it into my job queue. Here’s the coding to do this:

NEWJOBQ SHOWTIME;LIMIT=1

!JOB SHOWTIME,MANAGER.SYS,PUB;JOBQ=SHOWTIME
!
!SETVAR HPAUTOCONT TRUE
!
!SHOWTIME
!
!SHOWCLOCK
!
!SHOWME
!
!SHOWVAR HPDATE@
!SHOWVAR HPTIME@
!
!ECHO !HPDATEF
!ECHO !HPTIMEF
!
!PAUSE 300
!
!EOJ

I just streamed five copies of the job, and using the LISTJOBQ command I am able to see the default system defined job queue HPSYSJQ. I haven’t been able to find out why it indicates a limit of 3500, since my current job limit was 30. [Editor’s Note: Gavin Scott reports that “All job queues have a LIMIT that is separate from the one true system LIMIT. This includes the default HPSYSJQ. The 3500 default is a number large enough that you should never run into the case where the existence of this second, un-obvious, limit on normal jobs affects you.”]

You can see my SHOWTIME job queue with a limit of 1, with one executing and five total jobs, so four are currently in a wait state. This is obvious in the SHOWJOB command below.

listjobq

JOBQ      LIMIT     EXEC  TOTAL

HPSYSJQ   3500      12    12
SHOWTIME  1         1     5

SHOWJOB JOB=@J

JOBNUM  STATE IPRI JIN  JLIST    INTRODUCED  JOB NAME

#J2     EXEC        10S LP       TUE  7:09A  NP92JOB,MGR.MINISOFT
#J3     EXEC        10R LP       TUE  7:09A  BACKG,MANAGER.VESOFT
#J4     EXEC        10S LP       TUE  7:09A  WTRSH,MGR.WTRSH
#J5     EXEC        10S LP       TUE  7:09A  MSJOB,MGR.MINISOFT
#J6     EXEC        10S LP       TUE  7:09A  MASTEROP,MANAGER.SYS
#J7     EXEC        10S LP       TUE  7:09A  VCSSERV,MGR.DIAMOND
#J8     EXEC        10S LP       TUE  7:09A  VCSCHED,MGR.DIAMOND
#J9     EXEC        10S LP       TUE  7:09A  JINETD,MANAGER.SYS
#J10    EXEC        10S LP       TUE  7:09A  JWHSERVR,MANAGER.SYS
#J12    EXEC        10S LP       TUE  7:25A  GUI3000J,MANAGER.SYS
#J19    EXEC        10S LP       TUE  8:08A  BROLMSGJ,JOBS.REVIEW
#J130   EXEC        10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J131   WAIT:1   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J132   WAIT:2   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J133   WAIT:3   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J134   WAIT:4   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS

16 JOBS (DISPLAYED):
   0 INTRO
                4 WAIT; INCL 0 DEFERRED
                12 EXEC; INCL 0 SESSIONS
                0 SUSP
JOBFENCE= 6; JLIMIT= 30; SLIMIT= 60

Now if I want to increase the job limit for my SHOWTIME job queue, or move one of the waiting jobs into another queue, I can use the following commands

limit +1;jobq=showtime
altjob #j131;jobq=hpsysjq

You will probably notice that there are a number of nice enhancements to ALTJOB and LIMIT in support of the job queues, having uses outside of the job queues. For example, LIMIT now allows you to use a plus or minus value to increase or decrease the number, so you don’t have to use an absolute value. It is common to up the limit by one to allow another job to execute, but previously you had to check the current job limit, change it, then change it back. At least now you can just do +1 to let the job launch.

On the ALTJOB command, you can now specify HIPRI to cause a job to start up immediately, without having to play with limits to let it go. You can also alter the output device of the job. I did find during my tests that altering a job into the system default HPSYSJQ didn’t seem to let the job release, even when that queue had open slots. However, if you sent it to a user-defined job queue that had room left for another job to execute, it would launch immediately.

There is another side benefit of job queues, and that is ensuring that never more than one version of a job logs on. For example, if you have some background job running and you cannot have a second copy running, but there is nothing that prevents it, you could create a job queue for it with a limit of 1 that would keep any extra copies from launching.

This is just one example of an extended use of the feature. If you try to purge a job queue that is currently in use, you will receive this message:

Cannot purge job queue as there are jobs
running/waiting in that queue. (CIERR 12251)

If you try to stream a job into a queue that does not exist you will receive the message

JOBQ parameter expected. (CIERR 12255)
Spooler internal error occurred. (CIERR 4522)

The job will be streamed regardless — however, it won’t start executing, because there is no queue for it to execute in. The major problem is that the job streams into a WAIT state. At this point you can’t abort it, you can’t create the queue it was intended for and have it work, and you can’t alter it into the system job queue, because of the HPSYSJQ behavior described earlier. Finally, you can try to create a new queue and alter the job into it; LISTJOBQ will show it as a job for that queue, but it will never start executing. The only way to get rid of the job is to shut down the system and do a START NORECOVERY.

Posted by Ron Seybold at 06:49 PM in Hidden Value, Homesteading | Permalink | Comments (0)

August 31, 2018

SFTP and the points where transfers may fail

RFC-transfer-card-cover
Earlier in August a 3000 manager who relies on the Stromasys virtualized 3000 was searching for failures. Well, he was asking about the causes for failures. He wanted to know more about failures of SFTP transfers on his MPE/iX system. (We'd call it a 3000 but there's no more HP iron there at Ray Legault's shop). He gave the rundown on the problems with MPE/iX.

We send about 40 files each day, most of these in the early morning. Sometimes we would have zero to five connection failures each morning. I noticed that these failures seem to occur when two SFTP jobs ran at the same minute. I then added a "JOBQ=FINLOG" to the job card of every SFTP job I had and set the job limit to 1. This was two weeks ago and we have not had a failure yet.
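
A sketch of that serializing setup, with a hypothetical job file name:

NEWJOBQ FINLOG;LIMIT=1
STREAM SFTPJOB;JOBQ=FINLOG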

Brian Edminster, who still hosts open source software for MPE/iX, checked in to offer an answer to why those SFTP jobs were failing.

I'd be willing to bet that Ray's issue at Boeing with SFTP connect failures is due to the Entropy Generator running dry. Connections take lots of entropy data — and the one that comes 'out of the box' with the SFTP client doesn't generate very much without some modifications.

If you need to make more than one connection a minute (job limit 1), this modification will likely become necessary. Let me know if you'd like some pointers on how to do this. It will require some revisions to the SFTP software. The Entropy Gathering Daemon which Mark Klein's SFTP port uses is written in Perl. It is not terribly difficult to modify to include new data sources to "stir into the pool" that is drawn from by the SFTP client.

Edminster's MPE-Opensource.org website has an SFTP quickstart bundle of all packages required to install OpenSSH on MPE/iX including SFTP, scp, and keygen.

Posted by Ron Seybold at 08:28 PM in Hidden Value, Homesteading | Permalink | Comments (0)

August 24, 2018

Kept Promises for Open Source on MPE/iX

Opensource
Open source software developed a reputation for keeping HP 3000s online and productive, even in the face of industry requirement changes and new government regulations. Applied Technologies founder Brian Edminster has shared reports of a 3000 installation processing Point of Sale transactions, a customer which faced new PCI compliance demands. He was tasked with finding a solution to the new credit card compliance rules late in one December — with a January deadline.

“What we were struggling with was not that uncommon,” he explained. “The solution of choice was a version of the package OpenSSH, an open source implementation of a secure shell.”

OpenSSH offers publicly exchanged authentication, encrypted communication for secure file transfers, a secure shell command line, port forwarding. “It’s amazing how much you get," Edminster said, "and it’s available for many operating systems.” He's got a website devoted to the open source tools for the 3000.

At first, none of those operating system implementations included MPE/iX. OpenSSH requires a shell for the MPE/iX version; it doesn’t run at the MPE command line. But it’s been ported using OpenSSL for the HP 3000 and Perl/iX, both available from Edminster's MPE open source website.  Perl, another open source tool, “was designed for portability across platforms, and it works nicely,” he said.

OpenSSH protects from “man in the middle” security attacks by using DNS resolution, another open source utility wired into MPE/iX. Edminster recommended “the definitive guide to OpenSSH, commonly known as ‘the snail book’ from O’Reilly Press, Second Edition.”

That 3000 site where Edminster was working on POS security requirements had enabled DNS resolution across its enterprise — so Edminster was able to use a handy MPE/iX script called DNSCHECK. It’s a beautiful piece of scripting that checks, step by step, all the things necessary for name resolution to work on an HP 3000.

OpenSSH uses cryptographic software to pad out blocks of data being transferred. The HP 3000’s random number generation routines are “not so good” for this, Edminster explained. Random number routines must have a much longer cycle length between repeats than MPE/iX provides. MPE has no random number generation built into its kernel, unlike other operating environments.

The solution is “the Entropy Gathering Daemon, which is already packaged up by Ken Hirsh with his port of OpenSSH,” Edminster reported.

Posted by Ron Seybold at 02:14 PM in Hidden Value, Homesteading, Web Resources | Permalink | Comments (0)

August 20, 2018

Following Job Lines in Emulated 3000 Life

Queueing
The Stromasys Charon software is a fact of life in the homesteading community by this year, after almost six years of field service. Lately the emulator users have been offering insights on how they're using their servers.

It's a lot like any HP 3000 has been used for the last 44 years, in some ways. Transferring files. Queueing up jobs. A few of the emulators shared their advisories not long ago.

Ray Legault at Boeing talked about his experiences with file transfers, especially an SFTP client and the SFTP "Connection refused" errors. As the Charon developers like to say, if the MPE/iX software behaves the same on the emulator as it does on 3000 hardware, even if MPE registers an error, then Charon is doing its faithful emulation job.

"We are running on a Stromasys Charon A500-200 and a A500-100 virtual machine which executes on a HP ProLiant DL 380 Gen8 3.59 GHZ CPU, with 6 cores and 64 gig of memory," Legault said.

We send about 40 files each day, most of these in the early morning. Sometimes we would have zero to five connection failures each morning. I noticed that these failures seem to occur when two SFTP jobs ran at the same minute. I then added a "JOBQ=FINLOG" to the job card of every SFTP job I had and set the job limit to 1. This was two weeks ago and we have not had a failure yet.

Another emulator user, Tony Summers of Smith & Williamson in the UK, shared queueing advice and a massive job checker (HOWMANY) that's working well for him.

"Even though we've migrated to an MPE emulator," Summers said, "we use job queues all the time so that jobs that need to run 24/7 don't bed-block the system job queue."
The alternative we've also used is to create a UDC or command file that limits the number of instances of any job - an example is below (which partly uses a link to the Posix/Unix shell via the SH command).

If you are looking at the SFTP failures, have you checked that the FTP server is configured to allow multiple connections?

USER DEFINED COMMAND FILE: HOWMANY.CMD
parm OK_NUMBER_of_JOBS=""

# HOWMANY
#
# HOWMANY is a command file to determine how many Jobs or Sessions are
# running with the same logon attributes as the calling Job or Session.
#
# An optional parameter can be passed to set the allowed number of running
# Jobs or Sessions with the same logon attributes.
#
# Example 1: Passing the number of 'allowed' Jobs / Sessions.
#
#    :HOWMANY 1
#
# If a job logs on and issues the HOWMANY command above, HOWMANY will check
# how many Jobs are running with the same logon attributes. The '1' tells
# HOWMANY that only '1' job should be running. Therefore, if HOWMANY
# determines that more than '1' is running, it will cause the calling Job
# to log off.
#
# Example 2:
#
#    :HOWMANY
#
# On its own, HOWMANY will return a variable to the calling Job or Session
# called HOWMANY_THIS_USER that will be set to the number of EXECuting
# Jobs or Sessions with the same logon attributes.
#
# Example 3:
#
# If you want to see how many Jobs or Sessions are running for another
# User Id (not the calling Job or Session), then you can pass this as a
# parameter...
#
#    :HOWMANY T990
# Or
#    :HOWMANY "T990,ALL.SWIMS"    <--Quotes required
#
# You cannot log another Job or Session off with this command

setvar HOWMANY_USER  "!HPJOBNAME,!HPUSER.!HPACCOUNT"
setvar HOWMANY_USER2 "!HPJOBNAME,!HPUSER.!HPACCOUNT"

setvar HOWMANY_INPUT "!OK_NUMBER_of_JOBS"

if HOWMANY_INPUT <> "" then
  if HOWMANY_INPUT > "A" then
     setvar HOWMANY_USER ups("!HOWMANY_INPUT")
     if pos(",","!HOWMANY_USER") = 0 then
        setvar HOWMANY_USER2 "!HOWMANY_USER,@.@"
     else
        setvar HOWMANY_USER2 "!HOWMANY_USER"
     endif
     setvar HOWMANY_ALLOWED 9999
  else
     setvar HOWMANY_ALLOWED !HOWMANY_INPUT
  endif
else
  # Set to unlimited
  setvar HOWMANY_ALLOWED 9999
endif

continue
purge HWMN@,temp;noconfirm >$null
build HWMNFILE;REC=-1,,B,ASCII;temp
file  HWMNFILE,oldtemp;dev=disc

showjob job=!HOWMANY_USER2;EXEC >HWMNFILE
#  next line links to the unix / posix shell
SH grep '!HOWMANY_USER' $MPE/_$MPE_JOBNUM/tmp/HWMNFILE.!HPGROUP.!HPACCOUNT >HWMNFLE2

setvar HOWMANY_THIS_USER ![finfo("HWMNFLE2","EOF")]

if HPJOBTYPE = "S" then
  if HOWMANY_THIS_USER = 1 then
     setvar HM_IS_ARE "is"
     setvar HM_TYPE "SESSION"
  else
     setvar HM_IS_ARE "are"
     setvar HM_TYPE "SESSIONS"
  endif
else
  if HOWMANY_THIS_USER = 1 then
     setvar HM_IS_ARE "is"
     setvar HM_TYPE "JOB"
  else
     setvar HM_IS_ARE "are"
     setvar HM_TYPE "JOBS"
  endif
endif

echo There !HM_IS_ARE !HOWMANY_THIS_USER !HM_TYPE running for UserId: !HOWMANY_USER2

if OK_NUMBER_of_JOBS <> "" then
  if HOWMANY_THIS_USER > HOWMANY_ALLOWED then
     echo *****************************************************************
     echo Too many !HM_TYPE with this User Id running. Will now log you off
     echo *****************************************************************
     tellop HOWMANY is logging !HOWMANY_THIS_USER off
     if HPJOBTYPE = "S" then
        echo Will now log your Session off
        EOJ
     else
        echo This Job will now log off
        UDCEOJ
        EOJ
     endif
  endif
endif

purge HWMN@,temp;noconfirm > $null
deletevar HM_TYPE, HM_IS_ARE, HOWMANY_ALLOWED
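A sketch of a job that guards itself with this command file (the logon and program names are hypothetical):

!JOB SFTPJOB,MGR.FINANCE
!HOWMANY 1
!COMMENT If another job with this logon is already executing,
!COMMENT HOWMANY has already logged this one off by now.
!RUN SFTPPROG.PUB.FINANCE
!EOJ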

Posted by Ron Seybold at 08:17 PM in Hidden Value, Homesteading | Permalink | Comments (1)

August 17, 2018

Nike Arrays 101

Just a few weeks ago, a 3000 manager using an A-Class server checked in on how he might connect the SC-10 arrays from Hewlett-Packard to his A500. As a West Coast service provider carries the manager toward that hardware (it can be done), it seems like a good time to review the use of storage arrays with MPE/iX systems.

Our founding net.digest editor John Burke covered this ground in the years after HP announced it was cutting off its 3000 operations. While the HP label is still anathema to some, the hardware prices are sometimes too compelling. Here's Nike Arrays 101, advice still worthy on the day you're moving around arrays connected to a 3000.

By John Burke
Newswire Classic

Many 3000 homesteaders are picking up used HP Nike Model 20 disk arrays. The interest comes from the fact that there is a glut of these devices on the market — meaning they are inexpensive — and they work with older models of HP 3000s. However, there is a lot of misinformation floating around about how and when to use them. For example, one company posted the following to 3000-L:

We’re upgrading from a Model 10 to a Model 20 Nike array. I’m in the middle of deciding whether to keep it in hardware RAID configuration or to switch to MPE/iX mirroring, since I can now do it on the system volume set. It wasn’t in place when the system was first bought, so we stayed with the Nike hardware RAID. We’re considering the performance issue of keeping it Nike hardware RAID versus the safety of MPE Mirroring. You can use the 2nd Fast-Wide card on the array when using MPE mirroring, but you can’t when using Model 20 hardware RAID.

So, with hardware RAID, you have to consider the single point of failure of the controller card. If we ‘split the bus’ on the array mechanism into two separate groups of drives, and then connect a separate controller to the other half of the bus, you can’t have the hardware mirrored drive on the other controller. It must be on the same path as the ‘master’ drive because MPE sees them as a single device.

Using software mirroring you can do this because both drives are independently configured in MPE. Software mirroring adds overhead to the CPU, but it’s a tradeoff you have to decide to make. We are evaluating the options, looking for the best (in our situation) combination of efficiency, performance, fault tolerance and cost.

First of all, as a number of people pointed out, Mirrored Disk/iX does not support mirroring of the System Volume Set – never did and never will. Secondly, you most certainly can use a second FWSCSI card with a Model 20 attached to an HP 3000.

Bob J. elaborated on the second controller. 

All of the drives are accessible from either controller but of course via different addresses. Your installer should set the DEFAULT ownership of drives to each controller. To improve throughput, each controller should share the load. Only one controller is necessary to address all of the drives, but where MPE falls short is not having a mechanism for auto failover of a failing controller.

In other words, sysgen reconfiguration would be necessary to run on a single controller after SP failure in a dual SP configuration. You could have alternate configurations stored on your system to cover both cases of a single failing controller but the best solution is to get it fixed when it breaks. The best news is that SP failures are not very common.

There is a mechanism in MPE for ‘failover’ called HAFO - High Availability FailOver. Unfortunately for the original poster, it is only supported with XP and VA arrays, and not on Nikes or AutoRAIDs (because it does not work with those).

Andrew Popay provided some personal experience.

We have seven Nike SP20 arrays, totaling 140 discs spread across all the arrays, using a combination of RAID 1 (for performance) and RAID 5 (for capacity). We use both SPs on all arrays, with six arrays used over three systems (two per system). One of our systems has two arrays daisy-chained. The only failures we have suffered on any of the arrays have been due to a disc mechanism failing.

We never find any issues with the hardware raiding; in fact, as a lot of people have mentioned, hardware raiding is much more preferred to software raiding. Software raiding has several issues, system volume, performance, ease of use, etc. Hardware raiding is far more resilient.

As for anyone concerned about single points of failure, I would not worry too much about the Nike arrays; I would say they are almost bullet-proof. For those who require a 24x7 system and can’t afford any downtime whatsoever, maybe they should consider upgrading to an N-Class, with a VA or XP. Bottom line: SP20s are sound arrays on the HP 3000s, easy to configure, set up and maintain.

Posted by Ron Seybold at 11:39 PM in Hidden Value, Homesteading | Permalink | Comments (0)

August 10, 2018

HPCALENDAR joins 3000 intrinsics hits

Newswire Classic

Twenty years ago HP took steps forward, into the realm beyond 2028, when it released a set of COBOL-related MPE/iX intrinsics. The community is now looking into the next decade and seeing a possibility of hurdling the Dec. 31, 2027 date handling roadblock. In this Inside COBOL column from the late 1990s, Shawn Gordon took readers on a quick tour of the new intrinsics — new to 1998, at least — that would make the 3000 easier to program for the future. He even wrote a sample program employing the improved date handling.

In 2018 the information might seem more history lesson than operational instruction guide. But when a long-running mission critical app needs repairs, knowing the full set of date capabilities might help. Gordon even mentions that using the official intrinsics will help maintain programs written 20 years earlier. Enough time has passed by now that any new programs at the time of the article would be 20 years old.

3000 managers have always had a sharp focus on coding for long life of applications. 

By Shawn Gordon

Since Year 2000 is rapidly approaching, I'll review the date intrinsics that HP gave us in MPE/iX 5.5 starting with PowerPatch 4.

As I've done a lot of Y2K consulting, it seems everyone has written their own date routines, and most I have seen will break at Y2K. My goal in my consulting was to implement an HP-supplied solution, making it easier to support YYMMDD as well as YYYYMMDD date functions during the conversion process.

My only negative comment about these intrinsics is that I wish they had been created with the introduction of the Spectrum series of HP 3000s (PA-RISC systems). I could have used them then, too.

Six new intrinsics are available. All of the parameters for all new intrinsics are now 32-bit. This means they will work for as long as anyone reading this will ever care. I feel it’s important to standardize on these new HP-supplied intrinsics. They will make it a lot safer than trying to maintain some piece of code that was probably written 20 years ago. With code that old, it’s likely that nobody remembers how it works.

Here’s the lineup of intrinsics:

1. HPDATECONVERT: converts dates from one supported format to another 
2. HPDATEFORMAT: converts a date into a display type (I usually use this instead of HPDATECONVERT)
3. HPDATEDIFF: returns the number of days between two given dates 
4. HPDATEOFFSET: returns a date that is plus or minus the number of days from the source date
5. HPDATEVALIDATE: verifies that the date conforms to a supported date format
6. A new 32-bit HPCALENDAR format (HPCALENDAR, HPFMTCALENDAR).

There are a couple of things you should be sure to do correctly when using these intrinsics. HP’s documentation shows that some parameters on some intrinsics need to be passed by value (see my examples in Figure 1 with DATE-CODE, DAYS-DIFF and DATE-CUTOFF). You do this by putting backslashes (\) around the variable name.

The other thing that can be confusing is the DATE-CUTOFF. This defines a “split” year. Any two-digit year below this value is resolved to the next century. In other words, if the value of DATE-CUTOFF is 50 and you are using a 2-digit year, then 00..49 will be resolved as 2000..2049, and those in the range of 50..99 will be 1950..1999.

If you use a value of -1, then the intrinsic will pick up the value of the predefined system variable HPSPLITYEAR. This method lets you control the value outside of your program, so I use a DATE-CUTOFF of -1 to stay modular.

The other thing to note is DATE-CODE, which indicates the style of the date that you are working with. I am using 15 because it works with both YYMMDD and YYYYMMDD format.

I’m including some code examples below for the variable declarations, as well as results of running the program MYDATE which uses the functions. 

01 DATE-CODE            PIC S9(9)  COMP VALUE 15.
01 DATE-RESULT          PIC S9(9)  COMP VALUE 0.
01 DATE-STATUS.
  03 S-INFO            PIC S9(4)  COMP VALUE 0.
  03 S-SUBSYS          PIC S9(4)  COMP VALUE 0.
01 DATE-CUTOFF          PIC S9(9)  COMP VALUE -1.
01 FORMAT-LEN           PIC S9(9)  COMP VALUE 20.
01 FROM-DATE            PIC 9(8)   COMP.
01 THRU-DATE            PIC 9(8)   COMP.
01 DAYS-DIFF            PIC S9(9)  COMP.
01 FORMAT-TYPE          PIC X(20).
01 HOLD-FORMAT          PIC X(20).
01 FORMAT-DATE          PIC X(20).

   CALL INTRINSIC "HPDATEFORMAT" USING \DATE-CODE\,
                                       FROM-DATE,
                                       HOLD-FORMAT,
                                       FORMAT-DATE,
                                       FORMAT-LEN,
                                       DATE-STATUS,
                                       \DATE-CUTOFF\.
      IF S-INFO <> 0
         DISPLAY "Error in HPDATEFORMAT".

      CALL INTRINSIC "HPDATEDIFF" USING \DATE-CODE\,
                                        FROM-DATE,
                                        THRU-DATE,
                                        DUMMY-VAL,
                                        DATE-STATUS,
                                        \DATE-CUTOFF\.
      IF S-INFO <> 0
        DISPLAY "Error in HPDATEDIFF".

       CALL INTRINSIC "HPDATEOFFSET" USING \DATE-CODE\,
                                            FROM-DATE,
                                           \DAYS-DIFF\,
                                           THRU-DATE,
                                           DATE-STATUS,
                                           \DATE-CUTOFF\.
      IF S-INFO <> 0
        DISPLAY "Error in HPDATEOFFSET".

        CALL INTRINSIC "HPDATEVALIDATE" USING \DATE-CODE\,
                                              FROM-DATE,
                                              \DATE-CUTOFF\
                                       GIVING DATE-RESULT.
      IF S-INFO <> 0
        DISPLAY "Error in HPDATEVALIDATE".


RUN MYDATE

Enter date in YYMMDD or YYYYMMDD format: 19980317
Enter date format string: MM/DD/YY
Formatted date is 03/17/98
Julian date is 01998076

Enter From date: 19980101
Enter Thru date: 19980501
Number of days = +000000120

Enter start date: 19980801
Enter day offset: -31
New date is 19980701

Posted by Ron Seybold at 12:35 PM in Hidden Value, Homesteading | Permalink | Comments (0)

July 27, 2018

Worst Practices: Shouldn't Happen to a Dog

By Scott Hirsh

There is a saying in Washington about Washington: “If you want a friend, get a dog.” Ha! We system managers should be so lucky. We can’t even be our own best friend.

It’s sad but true: we system managers won’t cut ourselves any slack. We repeatedly put ourselves in jeopardy, often making the same mistakes time after time. We even break all the rules we impose on others. Don’t believe me? See if you recognize any of these examples.

1. Hand crafted system management

Ah yes, the good old days. Peace, love and tear gas (I never inhaled). But here’s a news flash, sunshine: for system managers, the ’60s are dead. Predictable, repeatable tasks can and should be automated. If you can script it, you can schedule it. And if you can schedule it, you can automate it. So what are you waiting for? Do you like (take your pick): streaming jobs by hand; adjusting fences and priorities by hand; reading $STDLISTs; staring at the console waiting for that one important message? For this you went to college?

And yet, we (or our management) come up with lots of lame excuses for running a stone-age operation. Can’t afford the automation products, don’t trust automation, can’t trap every error, blah blah blah. Those excuses may fly when you’re small, but suddenly you have more systems, bigger systems and manual management turns your shop into burn-out central. Now there’s turnover costs, downtime costs, opportunity costs.

Oh, and by the way, it’s much more expensive to implement automated management in a large, busy environment than it is to grow automated management from a smaller environment. Perhaps some of us are just adrenaline junkies, or we fear not being needed. Get over it and automate already.

2. The disappearing act

A close personal friend of mine — okay, it was me — once made a change to Security/3000’s SECURCON file, then left for an all-day meeting about 40 miles away. Guess what? None of the application users could log on after my change. Way back then, my pager almost vibrated off my belt from that one. And it made for some interesting meetings when I got back.

I have seen lots of cases where a system manager made a configuration change, installed a patch, or fussed with SYSSTART or UDCs, then immediately went home. Big mistake. If you’re lucky, you live near your data center and can zip right back to repair the carnage that was discovered right away. If you’re not lucky, first you don’t discover your mistake until the worst possible moment — say, around the heaviest usage period the next day — and then you’re forced to take the system down to fix the problem. Ouch.

3. A lack of planning on my part does constitute an emergency on your part

A variation on No. 1. We are the eternal optimists. No matter how invasive the procedure, everything will work out perfectly, right? How many PowerPatches must we install before we realize we must leave adequate time for testing the patched system and perhaps back that sucka out? No really, this time HP (or your favorite vendor) has learned from past mistakes and has a bullet-proof update. No need to leave a cushion for collateral damage. Right.

Every decent system administration book offers the same advice: Don’t do anything you can’t undo. Make a backup copy of whatever you’re changing. Keep track of the steps you followed. Be prepared to back out whatever you’re doing. Because that contingency time can inflate your update schedule by hours, it’s unlikely you can safely make a system change at any time other than weekends or holidays. 

4. I’ve got a secret

You make changes but don’t tell anyone about them. Let’s be charitable and say your changes worked as planned. Unfortunately, nobody knew you were going to make the change. I have seen a change as innocuous as modifying the system prompt have unintended consequences (Reflection scripts looked for the old prompt and now wouldn’t work). The term “system” implies interrelationships. Anything we do has a ripple effect. When we don’t tell others that we’re about to make a change — “they wouldn’t let me do it if I told them!” — we don’t do ourselves any favors. I would love to hear other war stories under this category (hint, hint).

5. Trust no one

This probably explains all the peripherals you’ve bought that don’t work with your HP 3000. But isn’t the HP 3000 the most open system in the universe? A disk drive is a disk drive, right? The vendor told me the printer would work (and it costs much less than that HP printer). We do love our work, don’t we? And we do get excited by all the possibilities of the technology.

But sometimes — most times? — when the opportunity looks too good to be true, it is. And what a hassle it is when we’re stuck with a device, bought and paid for, that we must get to work with our system. Now. Because we’re out of space. Because the CFO doesn’t like spending $25K for a big paperweight.

Another aspect of this issue arises with replacement parts. No names please, but I have seen systems with non-certified disk drives. Sure they work — until there’s a power failure. The customer didn’t know they had this exposure because their maintenance company didn’t think it was worth mentioning. Do your homework, and watch out for little green men with maintenance kits.

And last, but not least, is taking “expert” information at face value. My first experience on the HP rack (running a Series 70) was with an HP SE who told me how to shortcut an OS update. Sounded good, I could use the extra time because I was updating on a Wednesday night (see No. 3). Before I knew it, I was staring at this message on the console: “Volume table destroyed, must reload.”

After that, I dropped SE support, figuring I was quite capable of destroying my system without high priced assistance. If you don’t feel confident about what you’ve been told, post to the 3000-L and see what your peers have to say.

6. The odd couple

For every system management Oscar Madison, leaving old files around to clog up and slow down his system or creating his own collection of foo, temp, K or Q files, there is a Felix Unger counterpart out there, obsessively tidying up. Both personality types have been known to shoot themselves in the foot.

The slobs make their lives miserable by never archiving files, which eventually bites them when they run out of space and the backup takes ever-longer. They also suffer from having multiple versions of all kinds of things on disk, running the risk of executing the wrong version or accessing the wrong file. And of course there are performance and security penalties for a messy system.

But the fastidious system manager also has issues. For one thing, being too diligent about cleaning up can result in missing files. Here is a case where automation can be a negative. Jobs that run every so often, archiving files that haven’t been accessed for a certain amount of time, can wind up archiving a file just before you need it.

Or, in my case, I once archived a file in the VESOFT account that hadn’t been accessed in years, only to discover it was some kind of special file that had to be there, even though it was never accessed (go figure).

Yes, it’s still good to be conscientious about keeping your system tidy. Just don’t overdo it.

You deserve a break today

If we can just step back and catch ourselves in dysfunctional behavior, we can start giving ourselves a break. We should not need to carry a pager, cell phone and laptop with us on vacation — for those brave enough to take a vacation, that is. We should not spend most of our time while out on the road on our phones, explaining how to recover our systems or where critical files are hidden. We should not expect to get raises when we spend so much of our professional time performing tasks that an entry-level employee can handle. By cleaning up our acts, we can stop reacting to self-inflicted busy work, which will free up time for more important tasks — like reading the NewsWire.

Posted by Ron Seybold at 07:58 PM in Hidden Value | Permalink | Comments (0)

July 13, 2018

Fine-Tune: Resetting your LDEV 21 Console

I have a 959 system at my site and there are times when I can't get the remote console port on LDEV 21 to work. How do I troubleshoot this problem and reset the console port? 

1. Is the port configured and available?

a) Check to be sure the system recognizes the port

:showdev 21

LDEV     AVAIL
     21     AVAIL

b) Is the SYSGEN configuration okay? 

:sysgen
sysgen> io
io> ld 21

LDEV:21  DEVNAME:  OUTDEV:21  MODE:  JAID
**ID: A1703-60003-CONSOLE-TERMINAL 
RSIZE:        40   DEVTYPE: TERM
**PATH: 56/56.1   MPETYPE: 16   MPESUBTYPE:  0
CLASS: TERM

c) Is the User Port configured in NMMGR?

:nmmgr
then ...

OPEN CONF, DTS, USER PORT

Logical Device [21  ]  (1 - 1800)
Line Speed [2400  ]  (300, 1200, 9600, or 19200 bps)
Modem Type [1] (0-NONE, 1-US, 2-European, 3 - V22.bis)
Parity [NONE] (None, Even, Odd, 0's, or 1's)

2. Is the access port enabled, configured correctly and unlocked? On the local console type in CTRL-B to get the CM> prompt. The REMOTE settings are displayed at the bottom of the console screen.

a) Check/Change the configuration

cm> CA
Bit rate:               2400 bits/sec
Protocol:               Bell
System identification:

b) Enable Remote

cm> ER

c) Unlock Remote and raise the DTR signal on the modem

cm> U

d) Go back to command mode (:).

cm> CO

3. If you still cannot dial into the remote console, there are two utilities in sysdiag you can try. Modmutil will do a self-test on the modem, and consolan will reset the port.

a) To test the modem:

:sysdiag
dui> modmutil
mu> diag
diag> autotest
diag> exit
mu> exit

b) To reset the modem

:sysdiag
dui> CONSOLAN pdev=nn/nn section=2(23)

Continue? YES
Reset local/remote?  REMOTE

Posted by Ron Seybold at 05:08 PM in Hidden Value, Homesteading | Permalink | Comments (0)

July 06, 2018

Using MPE/iX to send SFTP files

I have a script that uses FTP to send files to a site which we open by IP address. We've been asked to change to SFTP (port 22) and use the DNS name instead of an IP address, and I don't believe the 3000 supports that. Does it? If so, how?

Allegro's Donna Hofmeister replies:

I'm not sure you want to do SFTP on port 22. That's the SSH port. SFTP is meant to use port 115. Have a look at one of our white papers on how to do SFTP on MPE. [Ed. note: port 115 belongs to the older Simple File Transfer Protocol; the SSH-based SFTP in wide use today does run its sessions over the SSH port, 22.]

If you are going to use DNS, you must have your 3000 configured for that. It's easily done. 

However, if you've never done anything on your 3000 to make it act like a real computer (oh -- that's right, it is a real computer and fully capable of using DNS), this can turn into a can o'worms.

To configure for 'DNS lite' it's probably simplest to do the following

1. copy hostsamp.net to hosts.net

2. edit hosts.net to make sure it has

127.0.0.1 loopback
1.2.3.4   name    <--- where 1.2.3.4 and name are corrected to the system you want to connect to

3. copy the NSSWSAMP.net to nsswitch.net

4. edit nsswitch.net to have this line:

hosts: files [SUCCESS=return NOTFOUND=continue]

With this done, the 3000 sorta kinda acts like it's using DNS (because it's looking in the hosts file for how to translate 'name' into '1.2.3.4').
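Spelled out as CI commands, steps 1 and 3 are a pair of copies (the sample files ship in NET.SYS on most releases; adjust if yours differ):

:COPY HOSTSAMP.NET.SYS, HOSTS.NET.SYS
:COPY NSSWSAMP.NET.SYS, NSSWITCH.NET.SYS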

Tony Summers provides a caveat:

One warning: the upgrade from FTP to sFTP (or SSH FTP, etc.) can involve more change to your scripts than you expect. What we do for FTP (originally on the HP 3000, and now on our HP-UX server) is build a text file with the commands (the sample below is edited):

cat FTPT0070
open ftpserver.site.co.uk
user USERNAME PASSWORD
ascii
get /export/002_iccm_extract_1161.csv ICR21161

quit

The file is then presented to the FTP client. On the HP 3000 it was something like....

RUN FTP.ARPA.SYS < FTPT0070 > FTPS0070  

Then both the output file, FTPS0070, and any JCWs set by the FTP program were inspected to test the success of the FTP session.

cat FTPS0070

Connected to xxxxxx.co.uk

220 Welcome to FTP service - xxxx.
331 Please specify the password.
230 Login successful.
200 Switching to ASCII mode.
200 PORT command successful. Consider using PASV.
550 Failed to open file.
221 Goodbye.

In particular, the 3-digit status codes were analyzed, looking for error codes like "550." If you do something similar in your FTP scripts, then all I can say is welcome to a very different world.
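To automate that inspection, here is a minimal CI sketch in the spirit of the scripts above. The file names and the 550-only test are assumptions, and the grep reaches MPE's POSIX shell the same way the HOWMANY file does (the HFS path assumes the files sit in your logon group):

continue
purge FTPERRS;noconfirm >$null
SH grep ' 550 ' /!HPACCOUNT/!HPGROUP/FTPS0070 > FTPERRS
if finfo("FTPERRS","EOF") > 0 then
   echo A 550 status came back: see FTPS0070 for the session log
   setvar FTP_FAILED TRUE
endif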

Karsten Brøndum adds:

Here's a completely different approach. 

Depending on your skills in Java, there is a nice LGPL package called ftp4j that I have used a couple of times. (By the way, ftp4j will do both SFTP and FTP.) I've found it way easier than fiddling with text files containing commands, especially when it comes to error handling.

Posted by Ron Seybold at 04:15 PM in Hidden Value | Permalink | Comments (0)

June 15, 2018

Heartbeat at the center of CPU boost

Newswire Classic

By Gilles Schipper

The LDEV 1 activity light on the 3000 showed abnormally heavy traffic, and we noticed very sluggish response time, even though only the console was signed on and no batch jobs were executing. Having no idea what the problem was — and absent any tools such as Glance to shine a light on the situation — we began to revert to the previous configuration, software and hardware.

Only a week later, with some analysis of NM log files, were we able to establish what was going on. The performance problem was related to the 3000's transceivers: the SQE (signal quality error) heartbeat was disabled on all of them. The result was that the CPU was being inundated with an overwhelming number of IO requests to log each missing-heartbeat event in the NM log file.

This unnecessary and voluminous IO was enough to bring the system to its knees — even absent any other activity. In today's HP 3000 environment, this serious CPU wastage problem can be overlooked, because faster CPUs render it relatively less noticeable. But I would venture to guess that a lot of this "wasted IO" is affecting a large number of HP 3000s out there.

Fortunately, there is a very simple way to recognize whether the problem exists, and also a simple cure. To determine if you have this problem, simply type the following command and look at the reply that follows:

:listf [email protected],2

ACCOUNT=  SYS         GROUP=  PUB

FILENAME  CODE  ------------LOGICAL RECORD-------  ----SPACE----
                  SIZE  TYP    EOF    LIMIT R/B  SECTORS #X MX

H000000A*           1W  FB     5      66010   1      256  1  *
H000000B*           1W  FB     0      66010   1        0  0  *
H0909A5A*           1W  FB     5      66010   1      256  1  *
H0909A5B*           1W  FB     0      66010   1        0  0  *
H13ECEEA*           1W  FB     5      66010   1      256  1  *
H13ECEEB*           1W  FB     0      66010   1        0  0  *
H15F669A            1W  FB     5      66010   1      256  1  *
H15F669B            1W  FB     0      66010   1        0  0  *
HASTAT    NMPRG   128W  FB     347      347   1      352  1  8
HAUTIL    NMPRG   128W  FB     424      424   1      432  1  8
HP32209B  PROG    128W  FB     15        15   1       16  1  1

Notice the OPEN files (the ones with the asterisk suffixing the file name) that are 1W in size. There are two such files associated with each configured DTC, the file name starting with the letter H, followed by six characters that represent the last six characters of the DTC MAC address, followed by the letter A or B. The EOF for these files should be 5 and 0 for the respective "A" and "B" files, as in the listing above.

Otherwise your CPU is being subjected to high-volume unnecessary IO, requiring CPU attention. The solution is to simply enable the SQE heartbeat for each transceiver attached to each DTC. This is done via a small white jumper switch that you should see at the side of each transceiver.

Voila, you've just achieved a significant no-cost CPU upgrade.

There is also another method of eliminating this excessive CPU overhead that involves using NMMGR to uncheck as many logging events as you can for each DTC, revalidating and rebooting.

But the SQE-heartbeat enable method is a surer bet.

Posted by Ron Seybold at 08:40 PM in Hidden Value | Permalink | Comments (0)

June 08, 2018

Fine-Tune: Making the 3000's ports report

I have a port in an HP 3000 and I want to know the application that is currently using that port. Is there any command that can show me the applications accessing a particular port?

Kevin Miller replied:

:sockinfo.net.sys

Enter ‘c’ for ‘call sockets.’ Listeners are shown in port order.

The port for telnet on our 3000 is set to a different value than 23, but it is set to 23 on our HP Unix server. When I try to telnet from the 3000 to HP-UX I get the following message: Trying... telnet: Unable to connect to remote host. If I switch the port for telnet to 23 on the 3000, it works great.

My question is: Can I run telnet on two different ports on either box so that I can maintain my non-standard port on the 3000, but still allow telnet to run between the two boxes? If not, is there another way to make this work?

Jeff Kell replied:

Just ‘telnet your.3000.name nnn’ where ‘nnn’ is your ‘nonstandard’ port.

How do I point network printer configurations to specific ports on (external) multi-port JetDirect (or equivalent) boxes?

Gilles Schipper replied:

You need to add the tcp_port_number option, in NPCONFIG, as follows:

(network_address = 128.250.232.40 tcp_port_number = 9100) # for port 1
(network_address = 128.250.232.40 tcp_port_number = 9101) # for port 2
(network_address = 128.250.232.40 tcp_port_number = 9102) # for port 3

(Please note that everything on each line after and including the “#” represents a comment.)

My HP 3000 is set up for full access to the Internet. The telnet connection works fine, but I also see that VT-MGR also works. I know that inetdsec is used for restricting access for ip, http, ftp and so on. Is there something in NMMGR to restrict VT-MGR access, or do you use inetdsec for that also?

Chris Bartram replied:

Just an OPTION LOGON UDC that checks the CI variables set for the IP address and hostname of the originator.

We’ve got a DLT4000 tape drive I’d like to connect to a Series 957 and use them for database, incremental, and full backups. Can I simply hook a DLT4000 drive to the SE-SCSI port on the MFIO card, set its SCSI address, and add the device as an HPC1521B?

Gilles Schipper replied:

It should be no problem at all. The DLT4000 SE-SCSI device can also be utilized as a boot device on the 957. Use the device ID DLT4000 rather than HPC1521B, and test to ensure good restore performance. Resort to the HPC1521B device ID only as a workaround if the restore speed is not satisfactory.

Posted by Ron Seybold at 08:52 PM in Hidden Value, Homesteading | Permalink | Comments (0)

June 01, 2018

Recovering a 3000 password: some ideas

I have an administrator who decided to change passwords on MANAGER.SYS. Now what's supposed to be the new password isn't working. Maybe he mis-keyed it, or just mis-remembered it. Any suggestions, other than a blindfold and cigarette, or starting down the migration path?

The GOD program, a part of MPEX, has SM capability — so it will allow you to do a LISTUSER MANAGER.SYS;PASS=

If your operator can log onto operator.sys:

file xt=mytape;dev=disc
file syslist=$stdlist
store command.pub;*xt;directory;show

Then, using your favorite editor or other utility, search the stored-to-disc file for the string: "ALTUSER MANAGER  SYS"

You will notice PAS=<the pwd>, which is your clue.

It's said that a logon to the MGR.TELSUP account can unlock the passwords. The Telsup account usually has SM capability, if it wasn't changed.

Posted by Ron Seybold at 09:57 PM in Hidden Value, Migration | Permalink | Comments (0)

May 25, 2018

Fine-Tune: Locking databases into lookups

Editor's Note: Monday is a holiday to commemorate Memorial Day, so we're celebrating here with time away from the keyboard. We'll be back with a new report May 30.

We’re migrating from our 3000 legacy applications to an ERP system hosted on another environment. Management has decreed the HP 3000 apps must still be available for lookups, but nobody should be able to enter new data or modify existing data. Should I do the simplest thing and change all of the databases so that the write class list is empty?

Doug Werth replies:

One way to do this is to write a program in the language of your choice that does a DBOPEN followed by a DBLOCK of each database (this will require MR capability). Then the program goes into an infinite loop calling the PAUSE intrinsic. Any program that tries to update the database will fail to achieve a lock, rendering the databases read-only. Programs that call conditional locks will come back immediately with a failed lock. Unconditional locks will hang.

This has been a very successful solution I have used on systems where a duplicate copy of the databases is kept for reporting and/or shadowing using IMAGE log files.
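A minimal COBOL sketch of that lock-holder, with the database name and pause interval as stand-ins. It assumes a logon as the database creator (so a lone semicolon serves as the password, per the note elsewhere in this series), and that the compiler converts the by-value PAUSE argument to the REAL the intrinsic expects:

 01 BASE-NAME   PIC X(10)      VALUE "  MYDB ".
 01 BASE-PASS   PIC X(2)       VALUE "; ".
 01 MODE1       PIC S9(4) COMP VALUE 1.
 01 WAIT-SECS   PIC S9(9) COMP VALUE 60.
 01 DB-STATUS.
    03 DB-COND  PIC S9(4) COMP VALUE 0.
    03 DB-REST  PIC S9(4) COMP OCCURS 9 TIMES.

 PROCEDURE DIVISION.
 HOLD-THE-LOCK.
*    Mode-1 open: modify access with enforced locking.
     CALL "DBOPEN" USING BASE-NAME, BASE-PASS, MODE1, DB-STATUS.
     IF DB-COND NOT = 0 DISPLAY "DBOPEN failed" STOP RUN.
*    Mode-1 DBLOCK: unconditional lock of the entire database.
     CALL "DBLOCK" USING BASE-NAME, BASE-NAME, MODE1, DB-STATUS.
     IF DB-COND NOT = 0 DISPLAY "DBLOCK failed" STOP RUN.
*    Sleep forever; the lock is held until this job is aborted.
     PERFORM UNTIL DB-COND NOT = 0
         CALL INTRINSIC "PAUSE" USING \WAIT-SECS\
     END-PERFORM.

Stream it in a job that never ends; aborting the job drops the lock and reopens the base for updates. Locking several bases from one program is where the MR capability Werth mentions comes in.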

Steve Dirickson agrees with the poster of the question:

Since very few developers write their apps to check the subsystem write flag that you can set with DBUTIL, changing the classes is your best bet. Make sure you do so by changing the current M/W classes to R/R so the existing passwords will still work for DBOPEN, and only actual put/update/delete operations will fail.

The Big Picture: If protection is required for the database, that protection should reside in the database if at all possible. As mentioned, this is easy with IMAGE.

I am putting a new disk drive into my 3000 configuration, one that doesn't have an HP label on it. It's 500GB and a great value. What patches should I be sure to have installed on MPE/iX 7.5?

MPEMXT1
Large Disk: FSCHECK SYNCACCOUNTING fix for HFS files

MPEMXT3
Large Disk: limit maximum SCSI disk size to a half-terabyte

MPEMXT4
Large Disk: SSM changes for disk space allocation and accounting

MPEMXT7
Large Disk: Discfree changes to correct sector counts

MPEMXU3
Large Disk: REPORT "FORMAT=LONG" enhancement. MPEMXU3 includes patch MPEMXT2 which is another Large Disk/Files patch. MPEMXT2 provides changes to the ALTACCT, NEWACCT, ALTGROUP, NEWGROUP commands.

MPEMXU6
Large Disk: CATALOG.PUB.SYS changes for CIERR messages

MPEMXU7
Large Disk: CICAT and CICATERR.PUB.SYS changes

The critical one is MPEMXT3: by capping SCSI disk size at the supported half-terabyte, it protects you from the other problems.

Posted by Ron Seybold at 06:09 PM in Hidden Value, Homesteading, Migration | Permalink | Comments (0)

May 18, 2018

Fine-Tune: Setting up a 3000 as file server

I would like to set up an HP 3000 as a file server. In one of my accounts I want to have a share for my 100 users pointing to a separate directory in this account. The homes section in smb.conf normally points to the home group of the user, which is the same for all of them and is not helpful. Is there another way of solving the problem, or must I configure more than the 100 shares?

Mark Wonsil replies:

I saw a clever little trick in Unix that should work on MPE:

[%U]
path = /ACCT/SHARES/%U

This creates a share name that is the same as the username and then it points the files to a directory under the SHARES group.

How do I set my prompt in the startup script?

John Burke replies:

Here’s what I do for my prompt:
SETVAR HPPROMPT,"<SASHA: "+&
"!!HPJOBNAME,!!HPUSER.!!HPACCOUNT,!!HPGROUP> "+&
"!!HPDATEF !!HPTIMEF <!!HPCWD>"+CHR(13)+CHR(10)+"[!!HPCMDNUM]:"

This yields, for example,
<SASHA: JPB,MGR.SYSADMIN,PUB> THU, FEB 20, 2003 11:15 PM </SYSADMIN/PUB>
[7]:

A disk drive has failed on a user volume. How can I determine the accounts and groups on that user volume?

John Clogg replies:

Try REPORT @.@;ONVS=<volset>

Jeff Woods adds:

In addition to the suggestion to use “:REPORT @.@;ONVS=volset” (which may fail because it’s actually trying to look at the group entries on the volume set) you can do a “:LISTGROUP @.@” and scan the listing for groups where HOMEVS is your uservolumesetname. The advantage of LISTGROUP is that it uses only the directory entries on the system volume set. You may want to redirect the output of LISTGROUP to a file and then search that rather than trying to scan the listing directly.
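A sketch of that redirect-and-scan, with the file and volume-set names as stand-ins; the grep runs through the POSIX shell (the HFS path assumes a PUB.SYS logon), and you will still want to open LGLIST to see which group each matching line belongs to:

:LISTGROUP @.@ > LGLIST
:SH grep "PROD_SET" /SYS/PUB/LGLIST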

Posted by Ron Seybold at 04:43 PM in Hidden Value | Permalink | Comments (0)

May 11, 2018

How to Create Cause and Effect on MPE

HP 3000s took a big step forward with the introduction of a fresh intrinsic in 1995. Intrinsics are a wonderful thing to power HP 3000 development and enhancement. There was a time when file information was hard to procure on a 3000, and JOBINFO came into full flower with MPE/iX 5.0, back in 1994. "The high point in MPE software was the JOBINFO intrinsic," said Olav Kappert, an MPE pro who could measure well: his 3000 experience began in 1979. JOBINFO sits just about at the end of the 456-page MPE/iX Intrinsics Manual published in '94.

Fast-forward 24 years later and people still ask about how they can add features to an application. The Obtaining File Information section of an MPE KSAM manual holds an answer to what seems like an advanced problem. That KSAM manual sits in one of several Web corners for MPE manuals, a link on Team NA Consulting's page. Here's an example of a question where INFO intrinsics can play cause and effect.

I'm still using our HP 3000, and I have access to the HP COBOL compiler. We haven't migrated and aren't intending to. How can I use the characteristics of an input file as HPFOPEN parameters to create an output file? I want that output file to be an exact replica of the input file. I want to do this without knowing anything about the input file until it is opened by the COBOL program. 

I've tried using FFILEINFO and FLABELINFO to capture the characteristics of the input file, once I've opened it. After I get the opens/reads/writes working, I want to be able to alter the capacity of the output file.

Francois Desrochers said, "How about calling FFILEINFO on the input file to retrieve all the attributes you may need? Then apply them to the output file HPFOPEN call."

Donna Hofmeister added 

Have a look at the Using KSAM XL and KSAM 64 manual (Ed. note: link courtesy of Team NA Consulting). Chapters 3 and 4 seem to cover the areas you have questions about. Listfile,5 seems to be a rightly nifty thing.

But rather than beat yourself silly trying to devise a pure COBOL solution, you might be well advised to augment what you're doing with some CI scripts that you call from your program.

In a lively tech discussion on the 3000-L list, Olav Kappert added, 

Since you want to do this without knowing anything about the input file until it is opened by the COBOL program, the only way is to use one of the MPE intrinsics to determine all the characteristics of the file in question. Then issue a BUILD command after parsing that information.

Michael Anderson added details on how the 3000's CI scripting can build upon the fundamentals of file information and COBOL.

I like Donna's plan. This is a strategy that will also help whenever you want similar functionality on a non-MPE platform. Also, although COBOL is very capable, an external script might be a better tool. You don't always need a hammer.

This is hypothetical, to try to make a point. From your MPE CI prompt, type HELP FINFO. You should be able to set some variables (SETVAR FILEA "XXX"), and using FINFO add some more variables. Then from COBOL, using HPCIGETVAR, string together a BUILD command (with a bigger LIMIT, maybe) and call "HPCICOMMAND". You could string the build command together into a single variable, so COBOL only needs to call HPCIGETVAR once.

You can also write a script to do everything you want, and call HPCICOMMAND to run the script, pass it parms. It's pretty cool, and it makes your COBOL application more portable. (Same program, different script).

For example: On MPE I once wrote (using COBOL) a small utility to call DBINFO, extract all the metadata from any IMAGE database, and then create and write a new KSAM COPYLIB, ending up with all the COBOL copylib modules needed for all datasets of any database, including call statements and working storage. My point to all this: I used CI scripting to create and write to the copylib.

I actually used ECHO to write the copylib ksam file from a CI script. Now, seeing how I work more on HP-UX and Linux, plus OpenCOBOL and Eloquence, I should be able to compile this same program on Linux with minimal modifications, only changing the external script.

I use this method to access SQL databases, and much more, using OpenCOBOL and the Tcl/Tk developer exchange (a repository for Tcl, a flexible language with a small core that can be adapted in many ways). This way I can run the same program and the same script almost anywhere: Windows, Mac, or Unix.
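As a sketch of the script side of that division of labor, here is a hypothetical CLONEF command file that rebuilds an output file from an input file's own attributes. The FINFO item names are per HELP FINFO (verify them on your release), and it assumes a fixed-record ASCII file:

PARM SRCFILE NEWFILE NEWLIMIT=""
# CLONEF - build NEWFILE with the record size and limit of SRCFILE
setvar _cf_rec finfo("!SRCFILE","RECSIZE")
setvar _cf_lim finfo("!SRCFILE","LIMIT")
if "!NEWLIMIT" <> "" then
   # the caller asked for a different capacity
   setvar _cf_lim !NEWLIMIT
endif
# RECSIZE comes back in bytes; a negative REC= value tells BUILD
# the size is in bytes rather than words
build !NEWFILE;rec=-!_cf_rec,,f,ascii;disc=!_cf_lim
deletevar _cf_@

A COBOL program can then reach it with a single HPCICOMMAND call, passing a string such as "CLONEF OLDFILE NEWFILE 20000".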

Eric Sand, another veteran of the 3000, commented that this kind of challenge really shows off the range of possibility for solving development problems. "You can create almost any cause and effect in MPE that you can imagine," he said. "Reading about your concern gave me a little rush, as I mentally organized what I wanted to do to address your issue."

Posted by Ron Seybold at 08:24 PM in Hidden Value | Permalink | Comments (0)

May 04, 2018

How Details and Masters Get the Job Done

A Hidden Value question was posed about how manual and automatic masters work in TurboIMAGE. Roy Brown gave a fine tutorial on how these features do their jobs for MPE and the 3000 -- as well as how a detail dataset might have zero key fields.

Manual masters can contain data which you define, like Detail sets can, along with a single Key field. Automatic masters contain only the Key field. In both cases, there can be only one record for a given key value in a Master dataset.

A Detail dataset contains data fields plus zero, one, or many key fields. There can be as many records as you like for a given key value, and these form a chain accessible from the Master record key value. This chain may be sorted, or it may just be in chronological order of adding the records.

Brown explained that "where there are keys, referential integrity demands that there are no Detail record entries with a key field that is not found in either a Manual or Automatic master, both Key name and Key value. So a Detail data set with Key fields that are not present in a Master record would be a sign of a seriously corrupted database."

However, I doubt this is the case, and when you do a QUERY FORM command, you will see which fields in Detail datasets are Keys, which fields are used to establish Sort orders, and which fields are data pure and simple.

From the Key name, you can determine which Master set links the keys.

As I said above, it is possible to have a Detail dataset with no keys, but these usually contain only a very few records, since direct access to them without keys is cumbersome, and you would otherwise have to trawl right through one to find any given entry.

So a Detail dataset with thousands of unconnected entries would be very unlikely.

The FORM output will allow you to check how the Detail dataset that you think might have unconnected entries is actually linked in.

Brown's explanation flowed from the following question and answer in that Hidden Value article.

I want to generate a listing of data sets, data item names, and their relationships from my TurboIMAGE database. One detail data set has thousands of entries which do not appear to be connected to any master, and I cannot remember the difference between manual and automatic masters.

Francois Desrochers replied to use Query's FORM command.

RUN QUERY.PUB.SYS
B=dbname
PASSWORD = >> password
MODE = >> 5
FORM

Manual masters: programs have to explicitly add entries before you can add related entries in detail sets. Programs have to explicitly delete entries when there are no related detail entries left. In other words, you have to do master dataset maintenance.

Automatic masters: entries are automatically created when a related detail set entry is created. Entries in the master are automatically removed when the last related detail entry is deleted. IMAGE takes care of the maintenance.

Consultant Ron Horner added, "The difference between a manual master and auto master is the following:

1. You have to add records to the manual master that contain the key data for any detail datasets that are linked to the master.

2. When working with automatic masters, you don't have to write data to them at all. IMAGE takes care of populating the master.
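In schema terms, the three flavors look like this hypothetical DBSCHEMA fragment: CUST-M is a manual master, ORDER-A an automatic one, and ORDERS a detail chained to both.

BEGIN DATA BASE DEMO;
PASSWORDS:
  10 READER;
ITEMS:
  CUST-NO,    X8;
  ORDER-NO,   X10;
  ORDER-DATE, X8;
SETS:
  NAME: CUST-M, MANUAL;        << your program maintains these entries >>
  ENTRY: CUST-NO(1);
  CAPACITY: 211;

  NAME: ORDER-A, AUTOMATIC;    << IMAGE maintains these entries >>
  ENTRY: ORDER-NO(1);
  CAPACITY: 307;

  NAME: ORDERS, DETAIL;        << one chain per master key value >>
  ENTRY: ORDER-NO(ORDER-A),
         CUST-NO(CUST-M),
         ORDER-DATE;
  CAPACITY: 1000;
END.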

Krikor Gullekian also noted that "With QUERY you can check the databases as long as you know the password." [Ed. note: This password advice is true, except when you're the database owner. No password is required then, only a semicolon.]

Posted by Ron Seybold at 08:40 PM in Hidden Value, Homesteading | Permalink | Comments (0)

April 27, 2018

Fine-Tune Friday: DDS diagnosis and tips

We have a tape device that is not responding; that is, we put the tape in, but it is not coming online. I also see that a user is logged into the system using the LDEV assigned to the tape drive. SHOWDEV TAPE also does not list the device.

Gilles Schipper replies:

I’ve seen this before for DDS drives, Probably during your most recent reboot, there was a (possibly temporary) malfunction with your tape drive’s power supply such that its existence was not recognized during the boot up process. That would normally result in a “device unavailable” condition and the subsequent disabling of that logical device number.

I have noticed instances where that LDEV number is actually made available to the logon device number pool (for subsequent assignment for logon session device numbers). Long story short, the solution appears to be a power cycle followed by a START NORECOVERY reboot.

After shutting down and powering off the CPU and all devices, run ODE to ensure all devices are recognized before START NORECOVERY. Failure to recognize the device at that point should lead to further investigation of the power supply, SCSI device number setting, or other hardware malfunction. If this situation happens frequently, I would first suspect a problem with the power supply of that device.

Get rid of that internal DDS tape drive

By John Burke

People complain of problems with internal DDS tape drives in systems located in remote areas with little onsite expertise, problems that lead to frequent drive replacements and downtime. It reminds me of the old vaudeville joke where the patient comes to the doctor with a complaint, “Doc, it hurts when I do this.” The doctor replies, “Then don’t do that.”

HP 3000 gurus have cautioned for years that people should not use internal tape or disk drives in 9x7, 9x8 or 9x9 production systems. The most likely failure is a tape drive and the next most likely failure is a disk drive. Everything else in the system cabinet could easily run for a decade without needing service or replacing. [Editor's note: John's advice came in 2004, so a decade-plus is definitely bonus time.] When an internal tape or disk drive fails you are looking at serious downtime while the case is opened and the drive is replaced. A common urban legend says that the primary boot device (LDEV 1) and the secondary boot device (usually LDEV 7) must be internal. Not true.

Bite the bullet now. Remove, or at least disconnect (both power and data cables) all internal drives. At the least, replace the internal DDS drive with an external DDS3 or DDS4 drive. In the case of the DDS drive, you will not even need to make any configuration changes if you set the SCSI ID to 0 on the external drive.

Usually, the internal DDS drive is at SCSI ID 0 (for a 9x7, this is 52.0.0; for a 9x8, this is 56/52.0.0; and, for a 9x9, it is something like 10/4/20.0.0). If you do not want to open the case even to disconnect the drives, you can probably set the SCSI ID on the external DDS drive to 1 since this is usually not used. On 9x9s, SCSI ID 2 is used for the CD-ROM. Disk drive addresses will vary with the system, but even if you replace the internal disk drives with external JBOD, you are still ahead of the game. Remember, if you have to change SCSI IDs, you will have to change your SYSGEN configuration and your boot device paths.

Someone also asked whether, if you change the boot path, you should immediately create a new SLT. Technically, the answer is “No,” since the SLT contains no information about boot paths. However, if you have not created an SLT since the device was added (and why not?), then by all means create a new SLT. It should also be noted that DDS drives are notorious for not being able to read tapes created on other DDS drives.

So, if you do not think you have time to create a new SLT, at least use CHECKSLT to verify you can read your existing SLT on your new drive. If you cannot read your existing SLT, then make time to create a new one. Your standard procedures should include regularly creating an SLT and checking it.

Posted by Ron Seybold at 04:26 PM in Hidden Value, Homesteading | Permalink | Comments (0)

April 20, 2018

What Does HP's Disc Brand Mean?

By John Burke

After reading Jim Hawkins’ reply to my SCSI is SCSI article, I was reminded about HP’s 4Gb disk drive fiasco. These branded drives had a nasty habit of failing after being powered off after they’d been running for a while. The problems were not limited to the HP 3000 versions, either.

At one point we got so frustrated we just replaced all 4Gb drives with the much more reliable 9Gb drives. I never blamed HP for these failures, or the failures of the 4Gb drives on my HP 3000 — even though all were purchased from HP, and had HP stamped all over them. The failures were the fault of the manufacturer, and no amount of certification testing would likely have shown the problem. But the failures made me wonder: What does HP certification and HP branding mean?

In Hawkins’ reply, he puts great emphasis on the statement that “In the SCSI peripheral market, Industry Standard is really defined as ‘works on a PC.’ Unfortunately, the requirements for single-user PCs are not always in alignment with those of multi-user servers.” Maybe inside HP the desktops look different, but I have never seen a company use SCSI peripherals as a standard for desktop Wintel systems.

At my last employer, we had approximately 1,200 desktops, and not a single one had a SCSI disk drive. SCSI disks are used primarily in the multi-user server market, not the desktop market. While Hawkins says some interesting things in the rest of his article, these two sentences tend to prejudice the reader against everything else he says.

Unfortunately, Hawkins’ best argument came out in private correspondence: “Putting newer disks inside a 9x7, 9x8 or 9x9 may overtax the power supply and/or ‘cook’ your CPU or memory.” However, most of us outside HP have been advising against using internal drives in production machines for many years because of the obvious maintenance headaches. It still amazes me how many people believe you have to have at least one internal drive in an HP 3000.

The debate seems to highlight at least four questions.

1. Does HP certification of a disk drive have value? And, if so, how much? The work that HP does to certify disk drives for the HP 3000 clearly has value. It is up to the customer to decide the worth and he will decide with his checkbook. This certification is an area where HP has historically done a poor job in communicating value to its customers. Hawkins’ information should have been made more public years ago.

2. Does the listing of a drive in IODFAULT.PUB.SYS imply its certification by HP? I think it is reasonable for a customer to look at IODFAULT, pick out a “supported” drive, and think he can buy it from whoever will give him the combination of price and service that meets his needs. For anything but the newer systems, you only need two generic drives listed in IODFAULT (one SE and one FWD). So what is a customer to make of the drives listed in IODFAULT.PUB.SYS?

3. Does HP branding imply HP-specific firmware? So why then do the HP drives almost always report the original manufacturer’s model number instead of HP’s? Again, this leads to the customer assuming he can buy a STxxxxxx and it is the same as the drive he buys from HP.

4. Are all HP-branded drives equal? I had an HP-branded drive that I pulled from a server that I got happily spinning in my 9x7 at one time. And, yes, of course the duty cycle in this 3000 was extremely light. But I am also relatively sure the HP part number (as opposed to drive, whose reported model number is in IODFAULT) was never sold as compatible with the HP 3000. It sounds like this HP-branded drive is just as risky as any non-HP branded drive.

It was never my intention in the original article to bash HP, or the fine people who continue to be associated with the 3000 division. Perhaps I should have reworded the title to read, “If you feel abandoned by your vendor, then take comfort in the fact that in most cases, SCSI is SCSI.” But that doesn’t exactly roll off the old tongue. Yet, in reality, this is what most of HP’s argument has been about: those few cases when “SCSI is SCSI” is not true. It should also be noted that my original article was aimed at those HP 3000 sites planning to homestead for some period of time.

After considering Hawkins’ response to my original article and numerous private messages, my position can now be stated like this: With the exception of the “hot” drive issue, any name-brand manufacturer SCSI drive you can electrically connect to your HP 3000 will likely work. If it survives your own testing (mount it as a separate user volume, and bang on it for a while before moving it into production), then you should have little to worry about.

Posted by Ron Seybold at 03:18 PM in Hidden Value, Homesteading | Permalink | Comments (0)

April 13, 2018

Fine-Tune: Net config file care and feeding

I’m replacing my Model 10 array with a Model 20 on MPEXL_SYSTEM_VOLUME_SET, so it'll require a reinstall. What’s the best way to reinstate my network config files? Just restore NMCONFIG and NPCONFIG? I'm hoping I can use my old CSLT to re-add all my old non-Nike drives and mod the product IDs in Sysgen—or do I have to add them manually after using the factory SLT?

Gilles Schipper replies:

Do the following steps:
- use your CSLT to install onto LDEV 1
- modify your I/O to reflect the new/changed config
- reboot
- use VOLUTIL to add non-LDEV1 volumes appropriately
- restore the directory or directories from backup
- perform a system reload from the full backup, using the KEEP, CREATE, OLDDATE, PARTDB and SHOW=OFFLINE options on the RESTORE command
- reboot again

No need for separate restores of specific files.
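Spelled out, that reload step is a single RESTORE, with *T pointing at the full backup tape:

:FILE T;DEV=TAPE
:RESTORE *T;@.@.@;KEEP;CREATE;OLDDATE;PARTDB;SHOW=OFFLINE

KEEP leaves the files the CSLT install already laid down in place, and OLDDATE preserves the original file timestamps.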

Making backups while network services are running

Advice from James Hofmeister

The most common problem with performing backups in the past was that network configuration files were held open for READ/WRITE when the network was up. 3000 sites found they had no backup copy of the network configuration file NMCONFIG.pub.sys when it was time to install (reload) from backup tapes. I tested this on 7.0 building a CSLT and storing @.pub.sys, @.mpexl.sys, @.net.sys, @.arpa.sys on the same tape, and verified all of the network files including the configuration files were backed up.
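Hofmeister's test amounts to something like this (the tape file equation is an assumption):

:FILE T;DEV=TAPE
:STORE @.PUB.SYS,@.MPEXL.SYS,@.NET.SYS,@.ARPA.SYS;*T;SHOW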

Another problem on older systems was that NETCP.net.sys went missing in action following an install (reload). After it was recovered and restored from another source, another system reboot was required to initiate NETCP. NETCP is now included on SLTs.

Will the network function normally while backups are in progress? The answer to this is Your Mileage Will Vary. The building of a CSLT and the STORE process consume significant CPU, memory and IO resources.

From a networking perspective, TCP/IP networks are not guaranteed to maintain network connections in the event of severe system performance degradation. An acceptable level of CPU and IO performance is required to support TCP's ability to acknowledge the packets it has received (if a packet is not acknowledged, it will be retransmitted as per the remote host's configuration).

Also, an acceptable level of system bus performance is required to support the network hardware DMA to system memory -- if the bus is busy during a DMA attempt, the frame is dropped (a store from disk to tape or from disk to disk consumes significant system bus bandwidth).

Posted by Ron Seybold at 07:38 PM in Hidden Value | Permalink | Comments (0)

March 26, 2018

Upgrade your hardware to homestead longer

Keeping storage devices fresh is a key step in maintaining a datacenter that uses HP's 3000 hardware. Newer 3000s give you more options. Our net.digest columnist John Burke shared advice that's still good today while planning the future for 3000s that will remain online for some time to come. Maybe not until 2028, but for a while.

If you can, replace your older machines with the A-Class or N-Class models. Yes, the A-Class and some N-Class systems suffer from CPU throttling. (That's HP’s term. Some outside HP prefer CPU crippling.) However, even with the CPU throttling, most users will see significant improvement simply by moving to the A-Class or N-Class.

Both the A-Class and N-Class systems use the PCI bus. PCI cards are available for the A- and N-Class for SE-SCSI, FW-SCSI and Ultra-3 SCSI (LVD). You can slap in many a drive manufactured today, made by any vendor. SCSI is SCSI. Furthermore, with MPE/iX 7.5, PCI fiber channel adaptors are also supported, further expanding your choices.

If you are going to homestead on the older systems, or expect to use the older systems for a number of years to come, you have several options for storage solutions. For your SE-SCSI adaptors, you can use the new technology-old interface 18Gb and 36Gb Seagate drives. For your FW-SCSI (HVD) adaptors, since no one makes HVD drives anymore, you have to use a conversion solution. [You could of course replace your FW-SCSI adaptors with SE-SCSI adaptors, but this would reduce capacity and throughput.]

One possibility is to use an LVD-HVD converter and hang a string of new LVD drives off each of your FW-SCSI adaptors. HP and other vendors have sold routers that allow you to connect from FW-SCSI adaptors to Fibre Channel resources such as SANs. It's one way to accomplish something essential: get rid of those dusty old HP 6000 enclosures, disasters just waiting to happen.

As for tape drives, move away from DDS and use DLT (4000/7000/8000) with DLT IV tapes. Whatever connectivity problems there are can be dealt with just like the disk drives. If you have an A-Class or N-Class machine, LTO and SuperDLT both use LVD connections. If you have a non-PCI machine, anything faster than a DLT 8000 is wasted anyway, because of the limits of the older 3000 IO architecture.

Posted by Ron Seybold at 08:08 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 23, 2018

Moving, Yes — Volumes to Another 3000

Newswire Classic

By John Burke

Here is a shortened version of the revised checklist for moving user volumes physically from one system to another without a RESTORE:

  • Get the new system up and running, even if it only has one disk drive

(if you've purchased additional new drives, do not configure them with VOLUTIL yet)

  • Analyze and document the configuration on the old system, making any necessary configuration changes on the new system and creating an SLT for the new system;
  • Backup and verify the system volume set and the user volumes separately (be sure to use the DIRECTORY option on all your STOREs);
  • VSCLOSE all the user volume sets on the old machine;
  • Move all the peripherals over to the new machine. On a START NORECOVERY, the user volumes should mount. The drives that were on the system volume set on the old system and any new drives added should now be configured in using VOLUTIL;
  • RESTORE the system volume set:

RESTORE *T;@.@.@;KEEP;OLDDATE;
SHOW=OFFLINE;FILES=n;DIRECTORY

  • RENAME the following three files if they exist to something else:

SYSSTART.PUB.SYS
NMCONFIG.PUB.SYS
COMMAND.PUB.SYS (udc configuration)

  • RESTORE the above three files from your tape with the DEV=1 option.

The OS requires that some files, for example SYSSTART, be on LDEV 1;
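
For example (the renamed file names below are placeholders, trimmed to MPE's eight-character limit):

RENAME SYSSTART.PUB.SYS, SYSSTRTX.PUB.SYS
RENAME NMCONFIG.PUB.SYS, NMCONFGX.PUB.SYS
RENAME COMMAND.PUB.SYS, COMMANDX.PUB.SYS
RESTORE *T;SYSSTART.PUB.SYS,NMCONFIG.PUB.SYS,COMMAND.PUB.SYS;DEV=1;SHOW;OLDDATE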

  • RUN NMMGR against NMCONFIG.PUB.SYS, then using NMMGR, change the path for the LANIC, if necessary, and make any other necessary NMMGR configuration changes.
  • Validate NETXPORT and DTS/LINK, which should automatically cross validate with SYSGEN.
  • START NORECOVERY

Posted by Ron Seybold at 03:10 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 16, 2018

Fine-tune Friday: SCSI Unleashed

Although disk technology has made sweeping improvements since HP's 3000 hardware was last built, SCSI devices are still being sold. The disk drives on the 15-year-old servers are the most likely point of hardware failure. Putting in new components such as the Seagate 73-GB U320 SCSI 10K hard drive starts with understanding the nature of the 3000's SCSI.

As our technical editor John Burke wrote, using a standard tech protocol means third parties like Seagate have products ready for use in HP's 3000 iron.

SCSI is SCSI

Extend the life of your HP 3000 with non-HP peripherals

By John Burke

This article will address two issues and examine some options that should help you run your HP 3000 for years to come. The first: that you need to use only HP-branded storage peripherals. The second: that because you have an old (say 9x7, 9x8 or even 9x9) system, you are stuck using both old technology and just plain old peripherals. Both are urban legends and both are demonstrably false.

There is nothing magical about HP-branded peripherals

Back in the dark ages when many of us got our first exposure to MPE and the HP 3000, when HP actually made disk drives, there was a reason for purchasing an HP disk drive: “sector atomicity.” 9x7s and earlier HP 3000s had a battery that maintained the state of memory for a limited time after loss of power. In my experience, this was usually between 30 minutes and an hour.

These systems, however, also depended on special firmware in HP-made HP-IB and SCSI drives (sector atomicity) to ensure data integrity during a power loss. If power was restored within the life of the internal battery, the system started right back up where it left off, issuing a “Recover from Powerfail” message with no loss of data. It made for a great demo.

Ah, but you say all your disk drives have an HP label on them? Don’t be fooled by labels. Someone else, usually Seagate, made them. HP may in some cases add firmware to the drives so they work with certain HP diagnostics, but other than that, they are plain old industry standard drives. Which means that if you are willing to forego HP diagnostics, you can purchase and use plain old industry standard disk drives and other peripherals with your HP 3000 system.

Connect just about anything to anything

SCSI stands for Small Computer System Interface. It comes in a variety of flavors with a bewildering set of names attached such as SCSI-2, SCSI-3, SE-SCSI, FW-SCSI, HVD, LVD SCSI, Ultra SCSI, Ultra2 SCSI, Ultra3 SCSI, Ultra4 SCSI, Ultra-160, Ultra-320, etc. Pretty intimidating stuff.

Don’t despair though. Pretty much any kind of SCSI device can be connected to any other with the appropriate intermediary hardware. Various high quality adaptors and cables can be obtained from Paralan (www.paralan.com) or Granite Digital (www.granitedigital.com).

So, SCSI really is SCSI. It is a well-known, well-understood, evolving standard that makes it very easy to integrate and use all sorts of similar devices. MPE and the HP 3000 are rather behind the times, however, in supporting specific SCSI standards. Support for LVD SCSI was added with the A- and N-Class systems—and with MPE/iX 7.5, these same systems would support Fibre Channel (FC). 

Let’s concentrate on the SE-SCSI and FW-SCSI interfaces, both seemingly older than dirt, and disk and tape storage devices. But first, suppose you replace an old drive in your system, where should you put it? The 9x7s, 9x8s and 9x9s all have internal drive cages of varying sizes. It is tempting to fill up these bays with newer drives and, if space is at a critical premium, go ahead.

However, if you can, heed the words of Gavin Scott.

I’d recommend putting the new drives in an external case rather than inside the system, since that gives you much more flexibility and eliminates any hassles associated with installing the drive inside the cabinet. It’s the same SCSI interface that you’d be plugging into, so apart from saving the money for the case and cable, there’s no functional difference. With the external case you can control the power of the drive separately, watch the blinking lights, move the drive from system to system (especially useful if you set it up as its own volume set), etc.

At sites such as Granite Digital you can buy any number of rack mount, desktop and tower enclosures for disk systems. Here is another urban legend: LDEV 1 must be an internal drive. False. Or, the boot tape device has to be internal. False. You cannot tell by the path whether a drive is internal or external, and the path is the only thing MPE knows (or cares) about the physical location of the drive.

Okay, there are some limits

Once you come to terms with the fact that you can use almost any SCSI disk drive in your HP 3000, dealing with SE SCSI is a piece of cake and a whole world of possibilities opens up. With the right cable or adapter (see Paralan or Granite Digital) you are in business.

But just because you can connect the latest LVD drive to your SE-SCSI adaptor, should you? Probably not, because you are still limited by the speed of the SE adaptor and so are just wasting your money. Now that you know you do not need the specific HP drives you once bought, you can pick up used or surplus drives ridiculously cheap. [Ed. note: the 73 GB drive at the top of the article is $129.]

Seagate created new technology drives with the old technology 50-pin SE-SCSI interface, the 18Gb model ST318418N and the 36Gb model ST336918N.

FW-SCSI is more problematic than SE-SCSI because no one even makes FW-SCSI (HVD) disk drives any more and you need more than just a simple cable or adapter to connect newer drives to an HVD adaptor. In fact, from the Paralan site, “HVD SCSI was rendered obsolete in the SPI-3 document of SCSI-3.”

So, what is one to do? Most systems with FW-SCSI adaptors need them for the increased throughput and capacity they provide over SE-SCSI. Paralan and others make HVD-LVD converters. The Paralan MH17 is a standalone converter that allows you to connect a string of LVD disk drives to an HP FW-SCSI adaptor. Pretty cool.

If you're on a Fibre Channel (FC) SAN environment and you would like to store your HP 3000 data on the SAN, then only the PCI-Bus A- and N-Class systems (under MPE/iX 7.5) support native Fibre Channel.

A quick word about configuring your new storage peripherals: Do not get confused by the seemingly endless list of peripherals in IODFAULT.PUB.SYS. And, do not worry if your particular disk or tape drive is not listed in IODFAULT.PUB.SYS. Part of the SCSI standard allows for the interrogation of the device for such things as ID, size, etc. DSTAT ALL shows the disk ID returned by the drive, not what you entered in SYSGEN.

When configuring in a new drive, just use an ID that is close. In fact, there is really no need for any more than two entries for disk drives in IODFAULT: one for SE drives and one for HVD drives, so as to automatically configure in the correct driver. The same is true for tape drives.
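
As a sketch of that in SYSGEN, with a hypothetical LDEV and path (the point is only that the ID maps to the right driver family):

SYSGEN
sysgen> IO
io> ADEV LDEV=21 ID=ST318418N PATH=56/52.3.0
io> HOLD
io> EXIT
sysgen> KEEP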

Summary

Disk drives and tape drives are the devices most likely to fail in your HP 3000 system. The good news is that you do not need to be stuck using old technology, nor are you limited to HP only peripherals. The bottom line is you have numerous options to satisfy your HP 3000 storage needs, both now and into the future.

Special thanks go to Denys Beauchemin, who contributed significant material to this article.

Posted by Ron Seybold at 07:41 PM in Hidden Value, Migration | Permalink | Comments (0)

March 09, 2018

Fine-Tune Friday: Account Management 101

Newswire Classic

By Scott Hirsh

As we board the train on our trip through HP 3000 System Management Hell, our first stop, Worst Practice #1, must be Unplanned Account Structure. By account structure I am referring to the organization of accounts, groups, files and users. I maintain that the worst of the worst practices is the failure to design an account structure, then put it into practice and stick with it. If instead you wing it, as most system managers seem to do, you ensure more work for yourself now and in the future. In other words, you are trapped in System Management Hell.

What’s the big deal about account structure? The account structure is the foundation of your system, from a management perspective. Account structure touches on a multitude of critical issues: security, capacity planning, performance, and disaster recovery, to name a few. On an HP 3000, with all of two levels to work with (account and group), planning is even more important than in a hierarchical structure where the additional levels allow one to get away with being sloppy (although strictly speaking, not planning your Unix account structure will ultimately catch up with you, too). In other words, since we have less to work with on MPE, making the most of what we have is compelling.

As system managers, when not dozing off in staff meetings, the vast majority of our time is spent on account structure-related activities: ensuring that files are safely stored in their proper locations, accessible only to authorized users; ensuring there is enough space to accommodate existing file growth as well as the addition of new files; and occasionally, even today, file placement or disk fragmentation can become a performance issue, so we must take note of that.

In the unlikely event of a problem, we must know where everything is and be able to find backup copies if necessary. Periodically we are asked (perhaps with no advance notice) to accommodate new accounts, groups, users and applications. We must respond quickly, but not recklessly, as this collection of files under our management is now ominously referred to as a “corporate asset.”

You wouldn’t build a house without a design and plans, you wouldn’t build an application without some kind of specifications, so why do we HP 3000 system managers ignore the need for some kind of consistent logic to the way we organize our systems?

A logical, adaptable, documented account structure is a huge time saver in many respects. As most of us now manage multiple systems, we have no time to waste chasing down lost files, working with convoluted file sets, struggling to keep access under control or reacting to full volume sets.

I once had a conversation with a co-worker who was an avid outdoorsman. He was discussing rock climbing and I asked him about exciting rock climbing experiences. His reply: “In rock climbing, anything exciting is bad.” I would say the same thing about system management. By getting your account structure under control, you build a solid system management foundation that translates into much more pleasant work.

If this were a “best practices” column, we would discuss the best ways to clean up your system’s account structure. But this is worst practices, so let’s look at the no-nos.

No naming standards, bad naming standards

Oscar Wilde once said, “Consistency is the last resort of the unimaginative.” Do you think he was referring to HP 3000 system management? If so, not much has changed since Oscar’s day.

• In one account the jobs are located in group JCL. In another account, group JOBS. The developers keep “special” jobs in a group you never heard of in the critical application account. And just to make things more interesting, all your so-called “production” jobs are kept in an account called JCL, containing all kinds of groups, including “TEMP.”

By having consistency across accounts I control, I can easily find what I need when I need it. If jobs are always in the same group across accounts, I can LISTF @.JCL.@, etc. Backups/recoveries are easier, updates are easier, training new operators is easier. Sure, consistency is boring, but we must resist the lure of adrenaline.

• I’m going out on a limb here, but my guess is that your UDCs, the few you have left, are in a different place in every account. Why is that? And your system UDC (singular) is located in the SYS account, right? Because it’s the SYStem UDC, of course! Maybe it’s not such a bad thing to have another, non-SYS account for globally accessible files. What’s the catch? The system UDC file needs to be in the system volume set, for obvious reasons (learned that one the hard way).

• An MPE file name consists of a whopping maximum of eight characters. That should make every character count, right? So why do jobs that live in a group called JCL or an account called JCL all start with the letter J? File that under the department of redundancy department.

• We manage the systems, so we make the rules, right? Wrong. If we want the rules followed, if we want the best rules possible, we must get input and buy-in from all the others who will be expected to honor our rules. Ignoring users when it’s time to develop naming standards and other system policies is a classic Worst Practice, and a good way to ensure continued chaos. And don’t forget that upper management will need to be involved when a little “gentle” persuasion is required.

Scott Hirsh is former chairman of the SIG-SYSMAN Special Interest Group.

Posted by Ron Seybold at 06:51 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 02, 2018

Fine-Tune Friday: One 3000 and Two Factors

People are sometimes surprised where HP 3000s continue to serve. Even in 2018, mission-critical systems are performing in some Fortune 500 companies. When the death knell sounds for their applications, the axe gets swung sometimes because of security. Two-Factor security authentication is a standard now, serving things like Google accounts, iCloud data, and corporate server access.

Eighteen years ago, one HP 3000 shop was doing two-factor. The work was being coded before smartphones existed. Two-factor was delivered using a security fob in most places. Andreas Schmidt worked for Computer Sciences Corporation, which served the needs of DuPont in Bad Homburg, Germany. CSC worked with RSA Security Dynamics to create an RSA Agent that connected a 3000 to an RSA Server.

Back in that day, authentication was done with fobs like the one above. Now it's a smart device sharing the key. Schmidt summarized the work done for what he calls "the chemical company" which CSC was serving.

Two-Factor Token Authentication is a state-of-the-art process to avoid static passwords. RSA Security Dynamics provides an MPE Agent for this purpose which worked perfectly for us with Security/3000, but also with basic MPE security. The technical approach is not simple, but manageable. The main problems may arise during the rollout because of human behavior in keeping known procedures and avoiding changes, especially for security. But to stay on HP 3000 into the future, the effort is worth it, especially for better security.

The project worked better when it relied on the Security/3000 software installed on the server hosting Order Fulfillment. Two-factor security was just gaining widespread traction when this 3000 utilized it. Schmidt acknowledged that the tech work was not simple, but was manageable. When a 3000 site is faced with the alternative of developing a replacement application away from MPE/iX, or selecting an app off the shelf like SAP, creating two-factor is within the limits of possibility. Plus, it may not be as expensive as scrapping an MPE application.

Schmidt's article covers an Agent Solution created by CSC. Even 18 years ago, remaining on the 3000 was an issue worth exploring. When many outside firms access a 3000, two factor can be key.

DuPont wanted two-factor tested on its NT systems, plus the 3000.

NT and MPE were selected as pilots: NT because of the large number of servers running that environment; and MPE because of the thinking that this platform might be different from all others and more difficult to implement. However, the company also recognized the importance of running its 3000-based Order Fulfillment Process with a lot of different outside partners.

RSA’s first attempt to develop an agent for MPE was very simple: A token had to become configured for a combination of MPE-USER-ID.MPE-ACCOUNT. This combination could not be reused on another token. It was not possible to use wildcards or to add SESSION-IDs or MPE-GROUP to have a complete logon string. Because of the MPE characteristic to share logons (on all levels of capabilities) this version of the agent was not what we were looking for. (More drastically: This agent could not function for the MPE platform).

The second attempt was much better: everything was changed to the chemical company’s already-existing Security/3000 setup. Now Security/3000 invokes the RSA Agent to contact the RSA Server. It transmits either the SESSION-ID or the MPE-USER-ID as the name of the token. If the token is known and allowed to access the HP 3000, the agent asks the user for the current tokencode plus PIN.

This agent also functions without Security/3000 by adding some lines to the System’s Logon UDC. This drops some additional functions in combination with Security/3000, like verifying a user profile in any case (SESSION-ID,MPE-USER-ID.MPE-ACCOUNT is defined as allowed logon in Security/3000, all others will be refused before starting anything), but it will work.

The project report details show this could be installed even before two-factor took a wide foothold in IT. Schmidt doesn't share the code in his article because it was custom work for a dedicated customer. But the process is worth a look, even if only to prove that custom code brings a 3000 into security compliance.

"One thing is essential," Schmidt wrote. "The RSA Agent for MPE does not replace the MPE password process like it does for Unix or NT. It is activated first when the HELLO string has been entered and the MPE password hurdle has been passed (Account, User, and/or Group Password) and (as an option) the basic check within Security/3000 for profile existence is passed. Now any other logon UDC functions are invoked, and this activates the RSA Agent.

"Having Security/3000 in place is a good idea to replace the session passwords (if any) by supplying the tokencode.

"Not having session names in place, the RSA Agent will add an additional password. I do not recommend eliminating the MPE password — it’s still a fence around your system and is needed for batch security (depending on the streaming security you have in place)."
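
As a hypothetical shape for that logon hook (RSAAGENT is a placeholder name, not the actual CSC program), the UDC might read:

SECLOGON
OPTION LOGON, NOBREAK
COMMENT invoke the RSA agent, which prompts for tokencode plus PIN
CONTINUE
RUN RSAAGENT.PUB.SYS
IF JCW <> 0 THEN
   ECHO Token authentication failed -- logging off.
   BYE
ENDIF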

Complete details are in the NewsWire website's archived Technical Articles. Go forth and secure, if preserving an application is a better choice than locating an app replacement.

Posted by Ron Seybold at 06:52 PM in Hidden Value, Migration | Permalink | Comments (1)

February 23, 2018

Friday Fine-tune: Check on LDEV availability

Is there a way to have an HP 3000 jobstream check to see if a tape drive (LDEV) is available? I am not seeing an HP system variable that seems to list the status. I can see via a SHOWDEV that the device is available or not. I just need a jobstream to be able to do the same.

Roy Brown replies

We use a utility, apectrl.pub.orbit, that we found in our ORBIT account alongside Backup+. We use this, in a command file run as a jobstream, to check that tapes are mounted ready for our nightly backups, and, in the backup job itself, to eject them when complete.

Tom Hula says,

It's been awhile since I've messed with the 3000. I have a utility that I received from Terry Tipton many years ago that does that checking. It is called CHKTAPE. So if the tape drive is dev 7, I have CHKTAPE 7 in the jobstream and then check for the results in CHKTAPE_RESULT. We are looking for a value of 0, but here are all the results:

0 - Tape is unowned, online, at BOT and writeable
1 - Tape is unowned, online, at BOT and write protected
2 - An error occurred.  Probably an invalid device number
3 - Device is not a tape drive
4 - Device is owned by another process
5 - Device is owned by the system
6 - Tape is not online
7 - No tape in device

Terry has a reminder that the program must reside in a group with PM capability. I have been using it on all my backups ever since, without any problems. Let me know if you are interested in getting a copy of this utility.

Alan Yeo adds,

We use the little ONLINE utility from Allegro to put a tape on-line, or back on-line; we use it to put the tape automatically back on-line to do a verify after the store.

Donna Hofmeister replies

Here's a scripted solution to the question. MPE has the best scripting language of any OS. Thanks, Jeff (Vance)!

The following will return CI variables that can be easily used in a job:
parm tape=0,entry=main

if "!entry" = "main"
 if "!tape" = "0" or "!tape" = "?"
   echo ![basename(hpfile)] [tape_ldev_num]
   echo              req.
   echo ![basename(hpfile)] is designed to check the desired tape
   echo   device to see if a write-enabled tape is mounted and
   echo   available for use.
   echo
   echo The boolean variable _TAPE_READY will be returned.
   echo The string  variable _TAPE_LABEL will be returned, if available.
   return
 endif
 if numeric("!tape")
   setvar _ct_tape !tape
 else
   echo ERROR: The parameter "!tape" is not numeric
   return
 endif
 file sdtemp;rec=-80,,f,ascii;temp
 if finfo("*sdtemp","exists")
   purge sdtemp,temp
 endif
 errclear
 continue
 showdev !_ct_tape > *sdtemp
 if hpcierr <> 0
   echo !hpcierr   !hpcierrmsg
   escape !hpcierr
 endif
 setvar _ct_tape_ready false
 setvar _ct_tape_label ''
 setvar _tape_ready false
 setvar _tape_label ''
 xeq !hpfile !_ct_tape,process < *sdtemp
 if _ct_tape_ready
   setvar _tape_ready true
 endif
 if _ct_tape_label > ''
   setvar _tape_label _ct_tape_label
 endif
 deletevar _ct_@
 purge sdtemp,temp
 reset sdtemp
elseif "!entry" = "process"
 setvar _ct_eof finfo(hpstdin,'eof')
 while setvar(_ct_eof,_ct_eof-1) >= 0
   input _ct_rec
   if numeric(word(_ct_rec))
     if pos("AVAIL    (W)",_ct_rec) > 0
       setvar _ct_tape_ready true
     endif
     setvar _ct_tape_label   repl(str(_ct_rec,43,14),' ','')
   endif
 endwhile
endif

In a job, it might look something like this:

!chktape 7
!if not _TAPE_READY
! tellop No Tape is mounted
! setjcw jcw=fatal
!endif

Posted by Ron Seybold at 09:03 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 16, 2018

Friday Fine-tune: Deleting Bad System Disks

As HP 3000s age their disks go bad, the fate of any component with moving parts. Even after replacing a faulty drive there are a few software steps to perform. Wyell Grunwald explains his problem after replacing a failed system bootup disk:

Our disk was a MEMBER in MPEXL_SYSTEM_VOLUME_SET. I am trying to delete the disk off the system.  Upon startup, the 3000 says LDEV 4 is not available.  When going into SYSGEN, then IO, then DDEV 4, it gives me a warning that it is part of the system volume set — and cannot be deleted. How do I get rid of this disk?

Gilles Schipper of support provider GSA said that INSTALL is something to watch while resetting 3000 system disks.

Sounds like your install did not leave you with only a single MPEXL_SYSTEM_VOLUME_SET disk. Could it be that you have more than one system volume after INSTALL because other, non-LDEV 1 volumes were added with the AVOL command of SYSGEN — instead of the more traditional way of adding system volumes via the VOLUTIL utility?

You can check as follows:

SYSGEN
IO
LVOL

If the resulting output shows more than one volume, that's the answer.

He offers a repair solution as well.

The solution would be as follows:

1. Reboot with:

START NORECOVERY SINGLE-DISC SINGLE-USER

2. With SYSGEN, perform a DVOL for all non-LDEV1 volumes

3. HOLD, then KEEP CONFIG.SYS

4. Create new SLT.

5. Perform INSTALL from newly-created SLT.

6. Add any non-LDEV1 system volumes with VOLUTIL. This will avoid such problems in future.
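
In VOLUTIL terms, step 6 would look something like this (member name hypothetical, using LDEV 4 from the question):

VOLUTIL
volutil: NEWVOL MPEXL_SYSTEM_VOLUME_SET:MEMBER2 4 100 100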

If you do see only one system volume with the LVOL command, the only thing I can think of is that VOLUTIL was used to add LDEV 4 to the MPEXL_SYSTEM_VOLUME_SET after the install.

Posted by Ron Seybold at 07:43 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 09, 2018

Using a PURGEGROUP to clean volumes up

Is using PURGEGROUP a way to clean up where groups reside on multiple volume sets? I want to remove groups from Volumesets that aren't considered HOMEVS.

Tracy Johnson says

The syntax for a command on group PUB.CCC might read

PURGEGROUP PUB.CCC;ONVS=USER_SETSW

Denys Beauchemin says

The ALTGROUP, PURGEGROUP and NEWGROUP commands act on the specified volumeset after the ONVS keyword.

The HOMEVS keyword is used to specify in which volumeset the group is supposed to reside and where the files will be found in a LISTF or FOPEN.

If you have the same group.account existing on multiple volumesets and they have files in them, the entries on volumesets—other than what is in HOMEVS for that group—are invisible. If you want to get to them, you need to point the group's HOMEVS to that volumeset and then you can get to the files.
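
A sketch of that re-pointing, reusing the group and volume set named above:

ALTGROUP PUB.CCC;HOMEVS=USER_SETSW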

If there are no files in the group.account of other volumesets, it's not a big deal.

Keven Miller says

You could review the volume scripts that were once available on HP's Jazz server.

Take care in using the PURGEGROUP command. You can have files existing under the same group name on separate volumes—which makes mounting that group a problem. So make sure the group on the volume is clean of files you might want before the PURGEGROUP.

HP's documentation for the PURGEGROUP command has a similar warning.

In the two pages devoted to PURGEGROUP, the manual says,

Do not attempt to purge the PUB group of the SYS account. The public group of the system account, PUB.SYS, cannot be completely purged. If you specify this group in the group name parameter, all non-system and inactive files are purged, which seriously impairs the proper functioning of the entire system.

Posted by Ron Seybold at 04:09 PM in Hidden Value, Homesteading | Permalink | Comments (0)

February 02, 2018

Simplifying MPE/iX Patch Management

NewsWire Classic

By John Burke

Even though Patch/iX and Stage/iX were introduced with MPE/iX in 1996, these HP tools are poorly understood and generally under-used. Both are tremendous productivity tools when compared with previous techniques for applying patches to MPE/iX. Prior to the introduction of Patch/iX and Stage/iX, system managers did their patching with AUTOPAT, and you had to allow for at least a half-day of downtime. Plus, in relying on tape, you were relying on a notoriously flaky medium where all sorts of things could go wrong and create “the weekend in Hell.”

Patch/iX moves prep time out of downtime, cutting downtime in half because you create the CSLT (or staging area) during production time. Stage/iX reduces downtime to as little as 15-20 minutes by eliminating tape altogether and, furthermore, makes recovery from a bad patch as simple as a reboot.

This article is based upon the Patch Management sessions I have presented at Solutions Symposia. The complete set of 142 slides (over 100 screen shots) and 20 pages of handouts are downloadable in a file from www.burke-consulting.com. The complete presentation takes you step-by-step through the application of a PowerPatch using Patch/iX with a CSLT and the application of a downloaded reactive patch using Patch/iX and Stage/iX. Included is an example of using Stage/iX to recover from a bad patch.

Why should you care about Patch Management? Studies and surveys suggest that 50-80 percent of all HP 3000s will still be running by 2006-2009 – even HP now agrees. Keeping a system running smoothly includes knowing how to efficiently and successfully apply patches to the system. HP has committed to supplying bug fix patches to MPE/iX and its subsystems through 2006, including two PowerPatches per year. [Ed. note: The process continued through 2008.] There may also be new functionality added, either to support new devices or in response to the System Improvement Ballot (SIB).

Patch/iX is a tool for managing patches. It can be used to apply reactive patches, a PowerPatch, or an add-on SUBSYS tape with a PowerPatch. Patch/iX is actually a bundle including the PATCHIX program and a number of auxiliary files that are OS release dependent. Patch/iX allows you to:

• Qualify all patches in a set of patches;

• Install reactive patches at the same time as a PowerPatch;

• Selectively apply patches from a PowerPatch tape; and,

• Create the CSLT (or staging area for Stage/iX) while users are still on the system; i.e., when it is convenient for you without incurring downtime.

Stage/iX is an OS facility for applying and managing the application of patches. Stage/iX reduces system downtime for applying patches to the length of time required for a reboot and provides an easy and reliable method for backing out patches. Stage/iX includes an interface to Patch/iX that creates the “staging area” and two utilities:

• STAGEMAN, which allows you to manage all aspects of patch staging, including which version of the OS will be used for the next update; and,

• STAGEISL, an ISL utility available from the ISL prompt whenever the system is down. It contains a subset of STAGEMAN functionality that allows you to recover from most problems.

Steps in staging

The set of all operating system files, for example NL.PUB.SYS, etc., is considered the current Base OS. Stage/iX creates and manages staging areas, which are HFS directories that hold versions of files that are different from the Base. More than one staging area can exist at a time. Each staging area contains the difference, or delta, between the Base OS and a patched version of the OS.

When a staging area is activated on the next boot, the files in the staging area are moved into their natural locations while the Base versions of the files are saved in a Stage/iX archive HFS directory. To back out a patch, the reverse takes place and the system is restored to its original state.

Once you are satisfied with the new and patched OS, you can COMMIT the staging area to the Base, deleting the staging area directory and all archived Base files. Note that Stage/iX and Patch/iX allow new patches to be staged and applied in a cumulative fashion. In other words, if you create a new staging area while another staging area is active, the new staging area will contain all the changes between the Base and the active staging area plus all the new changes.

Whether or not you use Patch/iX and Stage/iX, the key to successful OS patching is preparation. Information is the key to preparation. The System Software Maintenance Manual (S2M2) for your particular release of MPE/iX is the bible for all patch management activities. It contains a checklist for each possible update and patch activity along with detailed sections corresponding to checklist items. A hardcopy version and a PDF version on CD usually ship with each major OS release.

A PowerPatch usually comes with some patch-specific documentation – make sure you have it available before you start.

Finally, before you ever sit down at the keyboard, create a Patch Book for the specific patch activity you will be attempting. You can do it with the hardcopy manual and a copy machine, but I prefer to use the PDF version, printing out the two-page checklist and each section that makes up the checklist to create my Patch Book.

How to apply patches

Before proceeding too far, check HPSWINFO.PUB.SYS to ensure the patch has not already been applied. [Note that Patch/iX will tell you if a patch, or even a superseding patch, has already been applied, but it only takes a moment to check HPSWINFO and it could save you some time.] Each patch has an eight-character ID. For example, consider TIXMXC3B. The first three characters indicate the subsystem; in this case TIX stands for TurboIMAGE. The next four characters are internal to HP’s patch management mechanism. The final character is the version identifier; in this case “B” indicates the second version of this patch.

First off, identify the proper checklist, in this case Checklist B, and create your Patch Book. Next, review the information about the patches at the ITRC; in particular, look for any patch dependencies.

You need to make sure you have the latest version of Patch/iX installed on your target system. This is critical to your success. All sorts of bad things can happen if you use an old or incomplete version of the Patch/iX bundle. To check the version of Patch/iX, sign on as “MANAGER.SYS,INSTALL” and type in PATCHIX VERSION. The program will respond with something like: Patch/iX Version B.01.09

Once you have the current Patch/iX and your patches, you are ready to run Patch/iX and create your staging area. There are four steps in any run of Patch/iX:

• “Select Activities,” where you define what type of patching activity you want to perform. You have the choice of adding a PowerPatch, adding reactive patches from tape, adding reactive patches from download or adding SUBSYS products.

• “View Patches” (optional): You can actually view information about all the patches that have been applied previously to your system. Note that this can easily number in the hundreds for a system that is kept reasonably up to date.

• “Qualify Patches,” where Patch/iX does a lot of work to determine which patches of the set you supply it with can and/or should be applied.

• Create the stage, the tape, or both that will be used to actually change the OS.

This is all done while normal production continues and places a minimal load on your system. Once you have created your stage with Patch/iX, you run STAGEMAN to activate your staging area with the SET command. The next time you boot your system (and this can be done remotely from your home at 3 AM Sunday morning if you like), your changes will take effect. Total downtime is the time it takes to do a SHUTDOWN followed by a START NORECOVERY.
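
In command terms, the activation is brief. The stage name below is hypothetical (LIST shows the real ones), and the exact command syntax is worth checking against the S2M2 manual:

STAGEMAN
stageman> LIST
stageman> SET STAGE=PATCH1
stageman> EXIT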

What if something goes wrong? If you have problems after successfully rebooting your system and you want to back out your patches, simply run STAGEMAN and use the SET command to make the Base the active stage, reboot your system and you are right back where you started. Suppose you cannot even boot the system successfully after setting the stage? Simply boot to the ISL prompt and use STAGEISL (see Fig. 3 below) to set the active stage to BASE, reboot and, again, you are back to where you started.

Figure 3: Using STAGEISL to recover from a SYSTEM ABORT due to a bad patch

Once you are satisfied with your changes after reasonable testing you can again run STAGEMAN and this time use the COMMIT command to make the active stage the Base and free up the disk space occupied by the old Base.
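
Again as a hedged sketch, that final step amounts to:

STAGEMAN
stageman> COMMIT
stageman> EXIT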

Posted by Ron Seybold at 10:41 PM in Hidden Value, Homesteading | Permalink | Comments (0)

January 26, 2018

Fine-tune: Policing logins, telnet on 3000s

How can I set up a time constraint to a particular login, or group of logins, onto the HP 3000?

If you do not have a security product, you could create a UDC using OPTION LOGON, which would check the system time (for example, < 6:00am OR > 7:00pm), then ECHO a warning to the user, and then issue BYE. You might want to include the OPTION NOBREAK as well.
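
A minimal sketch of such a UDC, using the CI's built-in HPHOUR variable for the example's window:

NIGHTLOCK
OPTION LOGON, NOBREAK
IF HPHOUR < 6 OR HPHOUR >= 19 THEN
   ECHO Logons are only permitted between 6:00 AM and 7:00 PM.
   BYE
ENDIF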

How can I restrict inbound telnet by IP address?

You can limit incoming telnets to your machine by using the INETDSEC.NET.SYS file. If you haven’t made use of this file previously, there’s a sample file — INSECSMP.NET.SYS — that you can copy to INETDSEC.NET.SYS and make changes from there. You will also need to link it with the Posix name using this command:

NEWLINK /usr/adm/inetd.sec, INETDSEC.NET.SYS

Details are in HP's Configuring and Managing MPE/iX Internet Services manual.
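
A sample entry, with hypothetical addresses (the full host-specifier syntax is spelled out in that manual):

# allow telnet only from hosts on one subnet
telnet allow 192.168.5.*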

How can I dynamically control hardware compression on my DDS drives?

The name of the command file is devctrl.mpexl.telesup. An example:

xeq devctrl.mpexl.telesup 38;compression=disable

The help command “help devctrl.mpexl.telesup” will display the parameters. 

The full syntax must be entered on a single line:

DEVCTRL.MPEXL.TELESUP dev=(ldev) eject=(enable/disable/nochange)
compression=(enable/disable/nochange) load=(online/offline/nochange)

Posted by Ron Seybold at 08:20 PM in Hidden Value | Permalink | Comments (0)

January 19, 2018

Friday Fine-tune: How to add disks and redesign HP 3000 volume sets

I am in serious need of some hardware and hardware setup redesign. Essentially, we have 30GB of disk all in the system volume set and want to add 20GB more and go to multiple volume sets. How do I do this?

This checklist can be used to add new disks and completely redesign the volume sets:

1. Perform two full system backups and verify each.
2. Create a new sysgen tape.
3. Check the new sysgen tape with CHECKSLT.
4. Copy @.@.SYS to a separate tape and verify.
5. Verify all disk drives configured and working properly.
6. Create a BULDACCT job for each new volume set with just the accounts destined for that volume set.
7. Verify that a current full BULDACCT exists on tape.
8. Shut down the system.
9. Restart the system.
10. From the ISL prompt, INSTALL.
11. In VOLUTIL, scratch all drives except for ldev 1.
12. In VOLUTIL, do "NEWVOL volset:member# ldev# 100 100" for each volume in the system volume set (other than ldev 1).
13. In VOLUTIL, do "NEWSET volset member# ldev# 100 100" for the master volume for each new set.
14. In VOLUTIL, do "NEWVOL volset:member# ldev# 100 100" for each volume in each new set.
15. Restore SYS account files with ;KEEP;SHOW;OLDDATE options.
16. Stream all BULDACCT jobs to create accounts structure.
17. Restore all files with ;KEEP;SHOW;OLDDATE options.
18. Spot-verify applications.
19. Once everything appears OK, run a BULDACCT.
20. Perform a full system backup.

In Step 15, watch out when you restore from a separate tape with @.@.SYS. You want to verify that tape. You'll like the feeling of knowing you can get at least the operating system back up without third-party backup software intervention. (In case you're still using a third-party tool for backups.) 
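
With a file equation, that Step 15-style restore from the separate tape looks something like:

FILE T;DEV=TAPE
RESTORE *T;@.@.SYS;KEEP;SHOW;OLDDATE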

A few rules of thumb:

1. Don't mix unlike-size drives on a single volume set. This is mostly an operational consideration. Avoid having the small volumes in the set fill up first with plenty of space left on the larger volumes.

2. Put the more critical user accounts on faster, newer disk drives.

3. Set up the volume sets in a business-logical manner. In other words, put accounts in a volume set with other, related accounts, if possible. Try to clearly isolate the volume sets along those boundaries.

4. If you're redesigning the disk environment and doing an INSTALL, be sure to have at least two verified backups.

5. Don't be afraid to have a volume set made up of drives configured on multiple controller paths. For example, you might have three single-ended IO controller cards (if it's an older 3000). On a few of your sets, you can have drives from each.

Posted by Ron Seybold at 06:49 PM in Hidden Value, Homesteading | Permalink | Comments (0)

January 12, 2018

Disaster Recovery Optimization Techniques

Newswire Classic

Editor's Note: The 3000s still in service continue to require disaster recovery processes and plans. Here's a primer on crafting what's needed.

By Gilles Schipper
Newswire Homesteading Editor

While working with a customer on the design and implementation of a disaster recovery (DR) plan for their large HP 3000 system, it became apparent that the mechanics of its implementation had room for improvement.

In this specific example, the customer has a production N-Class HP 3000 in its primary location and a backup HP 3000 Series 969 system in a secondary location several hundred miles removed from the primary.

The process of implementing the DR was more manual-intensive than it needed to be. As an aside, it was completed entirely from a remote location — thanks to the Internet, VPNs and the use of the HP Secure Web Console on the 969.

One of the most labor-intensive aspects of the DR exercise was to rebuild the IO configuration of the DR machine (the 969) from the full backup tape of the production N-Class machine, which included an integrated system load tape (SLT) as part of the backup.

The ability to integrate the SLT on the same tape as the full backup is very convenient. It results in a simplified recovery procedure as well as the assurance that the SLT to be used will be as current as possible.

When rebuilding a system from scratch from a SLT/Backup, if the target system (in this case the 969) differs in architecture from the source system (N-4000), it is usually necessary to modify all the device paths and device configuration specifications with SYSGEN, and then reboot the system, in order to even be able to utilize the tape drive of the target system to restore any files at all.

(This would be apart from the files restored during the INSTALL process — which does not require proper configuration of any IO component at all).

Some would argue that this system re-configuration needs to be completed only once since any future system rebuilds would require only a “data refresh” rather than a complete system re-INSTALL.

I say that this would be true only in very stable system environments where IO configurations — including network printer configurations — are static and where TurboIMAGE transaction logging is not utilized. Otherwise there could be unpleasant results and complications from using stale configurations in a real disaster recovery situation.

In any case, there really is no reason to take any chances, since the labor-intensive step of creating a proper DR target system configuration environment is achievable minus the labor-intensive part – or at least without repetition of the manual chore of re-configuring the target system each time the DR is exercised.

Unless both the production system and the DR system are architecturally similar (i.e. they belong to same HP 3000 family) the configuration of the target system (the DR machine) cloned from the source system (the production machine) will be non-trivial.

At a minimum, before data restore can begin on the DR machine, the path hierarchy of the tape drive associated with the backup tape must be re-created. Further, if the subsequent restore requires more than just the system disk, all the path components for all the disk drives must also be created.

In a real DR situation, this task can be daunting at best – particularly since it may be difficult in that event to access the appropriate documentation that describes the pertinent SYSGEN configuration requirements. It would be preferable to complete this configuration well in advance of the hope-it-never-happens event.

In fact, it is entirely possible to create an appropriate DR configuration environment that is (almost) completely integrated into one’s production environment.

SYSGEN IO requirements

In order to provision a potential DR HP 3000 system’s IO configuration requirements into an existing production HP 3000 SLT, it is only necessary to configure all of the DR path components into the existing production system’s IO configuration.

The fact that these paths do not exist on the production (source) system is immaterial — as long as you can withstand the menacing (although perfectly innocuous) console error messages that accompany a reboot of a system so configured.

There is also the small matter of actual device numbers — and that is why I included the “almost” when mentioning “completely integrated” earlier.

Clearly, it is not possible to have duplicate device numbers when configuring both production and DR devices into the production SYSGEN IO configuration. So, in order to distinguish between the two systems (one the real production, the other virtual DR), I simply add 100 (you can choose any number) to the device numbers associated with the virtual machine. Then when actually testing or invoking the DR process, it is a simple matter to change the device numbers in a batch job designed for that purpose.

Another batch job could be pre-built that would add the appropriate disk drives and volume sets to the system’s disk pool, using VOLUTIL. These batch jobs would be included in the full backup tape and could be restored almost immediately following the INSTALL by referencing :file tape;dev=107 (to use my example of adding 100 to the corresponding virtual device).

The command :restore *tape;{fileset};directory;olddate;keep;create;show (where {fileset} corresponds to the fileset that includes the appropriate device-number-change and VOLUTIL batch jobs) would bring them back. One could take this technique one step further in the case where the DR target machine is unknown.

In such a situation, you could create a SYSGEN IO configuration that includes path constructs for any possible virtual machine that you could think of and include them in the host configuration – adding 100 for devices associated with virtual machine 1, 200 for virtual machine 2, and so on.

Posted by Ron Seybold at 07:23 PM in Hidden Value | Permalink | Comments (0)

January 05, 2018

Friday Fine-tune: How to discover the creation date of a STORE tape

Newswire Classic

By John Burke

It is probably more and more likely that, as the years pass by, you will discover a STORE tape and wonder when it was created. Therefore it is a good idea to review how to do this. I started out writing “how to easily do this,” but realized there is nothing easy about it — since it is not well-documented and if you just want the creation date, you have to do a bit of a kludge to get it. Why not something better?

It turns out the ;LISTDIR option of RESTORE is the best you can do. But if you do not want a list of all the files on the tape, you need to feed the command the name of some dummy, non-existent file. ;LISTDIR will also display the command used to create the tape.

By the way, this only works with NMSTORE tapes. For example, when ;LISTDIR is used on a SYSDUMP tape that also stored files, you get something like this (note that even though you are using the RESTORE command, if it contains the ;LISTDIR option, nothing is actually restored):

:restore *t;dummy;listdir

>> TURBO-STORE/RESTORE VERSION C.65.19 B5151AA <<

RESTORE *t;dummy;LISTDIR
FRI, DEC 31, 2004, 3:22 PM
RESTORE SKIPPING SLT IN PROGRESS ON LDEV 7

MPE/iX MEDIA DIRECTORY
MEDIA NAME : STORE/RESTORE-HP/3000.MPEXL 
MEDIA VERSION : MPE/iX 08.50 FIXED ASCII
MEDIA NUMBER : 1

MEDIA CREATION DATE
WED, MAY 7, 2003, 7:06 AM

SYSGEN ^SLTZDUMP.INDIRECT;*SYSGTAPE;LDEV=7;
REELNUM=1;SLTDATE=52863;TIME=117839624

MEDIA CREATED WITH THE FOLLOWING OPTIONS
OPTION DIRECTORY
OPTION ONVS

Posted by Ron Seybold at 10:43 PM in Hidden Value, Homesteading | Permalink | Comments (0)

January 03, 2018

How to make a date that lasts on MPE/iX

Now that it's 2018, there's less than 10 years remaining before HP's intrinsic for date handling on MPE/iX loses its senses. CALENDAR's upcoming problems have fixes. There's a DIY method that in-house application developers can use to make dates in 2028 read correctly, too.

The key to this DIY repair is to intercept a formatting intrinsic for CALENDAR.

CALENDAR returns two numbers: a "year" from 0 to 127 and a "day of year" from 1 to 366.
 
FMTCALENDAR takes those two numbers and turns them into a string like Monday, January 1, 1900. It takes the "year" and adds it to 1900 and displays that. "In a sense," explains Allegro's Steve Cooper, "that's where things 'go wrong'."

If one intercepts FMTCALENDAR and replaces it with their own routine, it can say: if the "year" is 0 to 50, then add 2028 to it; otherwise, add 1900 as it always did. That would push the problem out another 50 years.
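
The arithmetic of such a replacement, sketched in CI terms (a real intercept would live in compiled code, and the variable names here are invented for illustration):

COMMENT year holds CALENDAR's 0-to-127 year-of-century value
SETVAR YEAR 0
IF YEAR <= 50 THEN
   SETVAR FULLYEAR 2028 + YEAR
ELSE
   SETVAR FULLYEAR 1900 + YEAR
ENDIF
COMMENT year 0 now renders as 2028 rather than 1900
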
This interception task might be above your organization's pay grade. If that's true, there are 3000-focused companies that can help with that work. These kinds of repairs to applications are the beginning of life-extension for MPE/iX systems. There might be more to adjust, so it's a good idea to get some help while the community still has options for support.

Posted by Ron Seybold at 11:48 AM in Hidden Value, Homesteading | Permalink | Comments (0)

December 29, 2017

Friday Fine-Tune: Moving DDS stores to disk

Editor's note: In the last two weeks 3000 owners have been asking about DDS tape storage migration and how to find 38-year-old systems. Here in the last working day for the year 2017, it seems like we're running in a time machine. Here's some help on moving old data to new media.

We're taking Monday off to celebrate the new year. Not many people figured the 3000 would have users working in that 15th year since HP stopped making the server. We'll be back Wednesday with a new story. Seems like anything can happen.

I want to restore some files from a DDS tape to a store-to-disc file. It's been a while, and I am not sure if this is something that can be done. I need some help with the syntax.

Alan Yeo says

I think you need to restore the files from the tape and then store them to disc, as the resulting disc file needs to build a header of the files it contains.

So after restore, the store to disc syntax is something like

!SETVAR BACKUP_FILE "nameoffileyouwantocreate"
!FILE BK=!BACKUP_FILE;DEV=DISC
!FILE SYSLIST=!BACKUP_FILE;DEV=LP
!STORE fileselectionstring;*BK;SHOW;PROGRESS=5

Keven Miller adds

There is also TAPECOPY that reads STORE tapes and creates an STD (Store to Disk) on disk -- provided the STORE is all on one tape. I have a copy of the program on my website. Look for TAPECOPY; it's a tar file.

At another location on my site you can see the text file document, a .wrq file for use with Reflection's Labels option, or the .std file, which is a store-to-disc file.

I also have Tapecpyv, an SPL version usable on both MPE/iX and MPE/V. This SPL one is the latest.

The syntax:

:FILE TAPEIN;dev=7
:TAPECPYV  "TD  MYSTD"

Reads the STORE tape on dev 7 into STD file MYSTD.

Posted by Ron Seybold at 05:22 PM in Hidden Value, Homesteading | Permalink | Comments (0)

December 15, 2017

Making a 3000 respond to networks, faster

I have a new HP 3000 A-500 installation that I can't Telnet to. Ping works both ways, but I get nothing with Reflection's Telnet. What do I need to check on the 3000 to get Telnet running?

Robert Schlosser says:

Two things come to mind: Check if the JINETD job is running [run it by streaming JINETD.NET.SYS]; and if the line "telnet 23/tcp" is in your SERVICES.NET.SYS file.
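
A quick console sequence for both checks (stream the job only if SHOWJOB doesn't already list it):

SHOWJOB JOB=@J
STREAM JINETD.NET.SYS
PRINT SERVICES.NET.SYS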

Donna Hofmeister adds:

You also need to have INETDCNF.NET configured.

There's a collection of 'samp' files in .NET that in most cases need to be copied to their 'real' file name in order to make TCP/INETD networking work.

Hofmeister, one of the community's more experienced hands with the standard Unix and Posix utilities built into MPE/iX and the HP 3000, explains.

The samp files are 

BPTABSMP -- bootptab (most people don’t use)
HOSTSAMP -- hosts
INCNFSMP -- inetd configuration
INSECSMP -- inetd security 
NETSAMP  -- reachable networks
NSSWSAMP -- nsswitch
PROTSAMP -- protocol
RSLVSAMP -- DNS resolving
SERVSAMP -- services

I believe each of the files also has a counterpart in /etc which is a link to the real file in .NET.SYS. If the real files are missing from .NET.SYS then many things (including Telnet and FTP) won’t work.

Our N-Class response times have slipped into unusable measurements. Linkcontrol only shows an issue with Recv dropped: addr on one path. Our enterprise network monitoring software sends a packet that the HP 3000 cannot handle. Do I need to shut down and restart JINETD, or restart the network, to have my TCP changes in NMMGR take effect?

Craig Lalley wonders:

How are your gateways defined? If you change the gateway

NSCONTROL ;UPDATE=INTERNET

then you could try deleting the wrong gateway and see if it helps. You may have a router broadcasting a wrong gateway.

Hofmeister says the problems might be in the physical layer:

Did you change NMMGR before or after the reboot? If after, you're going to want to reboot again. Your packet loss is disturbing. I'd be suspicious of a physical layer problem.

Problems in the physical layer can be addressed by replacing parts, Mark Landin advises.

  • Could be a bad network cable or connector. Replace them.
  • Could be a bad network switch port. Connect the system to another port (properly configured, of course).
  • Could be a bad NIC. Swap them in the 3000 and see if the problem moves with the card.

Hofmeister points back to TCP timer issues:

On PCI (A- and N-Class) systems with 100bt cards, you're more likely to see 'recv dropped: addr' counts due to the way the card handles (or not, actually) traffic routed for a different destination.

Typically these counts are nothing to be concerned about. What is concerning is the TCP statistics: retransmits are almost always a function of using the default (or otherwise messed-up) TCP timers.
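
Both sets of numbers she mentions are visible from the CI -- a sketch, assuming your release has LINKCONTROL's STATUS=ALL option and the NETTOOL.NET.SYS utility:

:LINKCONTROL @;STATUS=ALL
:RUN NETTOOL.NET.SYS

LINKCONTROL reports the per-link "Recv dropped" counters; NETTOOL's interactive screens include the TCP retransmission statistics.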

Posted by Ron Seybold at 08:23 PM in Hidden Value | Permalink | Comments (0)

December 01, 2017

Fine-tune Friday: ODE's 3000 diagnostics

One diagnostic super-program, ODE, holds a wide range of tests for HP's 3000 hardware. These testing programs got more important once HP mothballed its Predictive Support service for the HP 3000 in 2006. Predictive would dial into a 3000, poke around to see what might be ready to fail, then report to HP's support engineers. ODE's diagnostics are a manual way to perform the same task, or fix something that's broken.

However, ODE includes programs that require a password. Stan Sieler has inventoried what was available in MPE/iX and examined each program for whether it's unlocked for customer use. That was back in the days when 3000 owners were still HP support customers. Today the 3000 owners are customers of third party support firms like Pivital Solutions, or Sieler's own Allegro. The locked programs remain in that state, more than six years after HP shuttered its support operations.

ODE's options received a run-through from Sieler.
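
For context, ODE runs offline from the ISL prompt, not from MPE/iX. A typical session looks something like this sketch (the prompts can vary by firmware revision):

ISL> ODE
ODE> RUN DFDUTIL2

Each module then takes over with its own prompt, as the transcripts below show.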

Disk Firmware Download Utility 2 (DFDUTIL2)
Version B.02.21 (23rd Sep 2003)
No disks were found.

Note: Didn't seem to want a password. Since Seagate disks are so prevalent, one would expect some means of updating firmware on them ... if firmware updates exist.

DISKEXPT2 
Version B.00.23

Note: Needs a password

Note: although it doesn't "see" Seagate drives, you can configure them in and access them.

DISKUTIL2
Version B.00.22
No supported devices found on this system.

Note: doesn't "see" Seagate drives, and you can't configure them in.

NIKEARRY2
Version B.01.12

Needs a password

VADIAG2
Version B.01.07
Please wait while the system is scanned for Fibre Channel Adapters...
No Fibre Channel Adapters were found. The test cannot continue. Aborting.

(No password requested up to that point.)

WDIAG
Version A.01.53

Needs a password

WDIAG is the PCXW ODE-based diagnostic program. It tests the processor of the various PCXW-based systems in the offline environment. The program consists of 150 sections (numbered 1/150), organized into the following groups:

1. CPU data path tests, Sections 1/6 (6 sections)
2. BUS-INTERFACE tests, Sections 7/10 (4 sections)
3. CACHE tests, Sections 11/25 (15 sections)
4. TLB tests, Sections 26/34 (9 sections)
5. CPU instruction tests, Sections 35/86 (52 sections)
6. CPU extended tests, Sections 87/101  (15 sections)
7. Floating point tests, Sections 102/134 (33 sections)
8. Multiple processor tests, Sections 140/150 (11 sections)

IOTEST2 
Version B.00.35

PERFVER2
Version B.00.15

Posted by Ron Seybold at 06:09 PM in Hidden Value, Homesteading | Permalink | Comments (0)

November 03, 2017

Dealing with PCL in modern printer networks

HP 3000s generate Printer Command Language, the format syntax HP created for its line of laser printers. The 3000s were glad to get PCL abilities in their applications and utilities, but PCL is not for everybody. Multifunction devices not schooled in HP technology, such as those from Xerox, need a go-between to extend the 3000's printing.

The easiest and most complete solution to this challenge is Minisoft's NetPrint, written by 3000 output device guru Richard Corn. When we last reported on Corn's creation it was helping the Victor S. Barnes Company pass 3000 output to Ricoh multifunction printers.

But for a company that can't find $995 in its budget for that 3000-ready product, there's a commercial Windows alternative to integrate into your system designs. Charles Finley of Transformix explains that the path to print outside of PCL has multiple steps.

Finley says of the fundamentals:

1. You need to get the print output from the HP 3000 to some device that is external to the HP 3000.
2. You may need to intercept the PCL generated on the HP 3000 and format it for the intended device.

On the one hand, you can license Corn's product through Minisoft to manage all this. On the other, if you want to use what MPE provides, you need to intercept the stream with something that pretends to be an HP LaserJet.

In the second scenario, assuming you can connect the printers to Windows computers, you can use LPD and an interceptor of some kind. A commercial product we have used is RPM from Brooks Internet Software to accomplish the communication part of the process, plus some other PCL translator product to convert the PCL to whatever you need on the printer.

We had two projects in which, instead of the RPM product, we provided our own little interceptor (described at www.xformix.com/xprint) that does the same kind of thing as RPM. We have the Windows machine pretend that it is an HP PCL printer and configure the HP 3000 to print to it. We used other commercial software (two different products) to intercept the output intended for what it thinks is a LaserJet and format the print output so that it prints correctly.
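
On the MPE/iX side, pointing the spooler at such a pretend LaserJet is a network printer entry in NPCONFIG.PUB.SYS. A sketch follows; the device number, address, and port are placeholders, and the option names should be checked against the NPCONFIG documentation for your release:

901 (network_address = 192.168.1.50
     tcp_port_number = 9100)

Device 901 is then configured like any other network printer, and its spooled output flows to the interceptor listening on port 9100.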

I believe in each case the customers wanted to translate the PCL to PDF and do other stuff with it on the Windows computer before actually printing it. In one case, they wanted to store the PDF on the Windows computer and store reference data in a SQL Server database so that customers could selectively view and print the file at will.

Posted by Ron Seybold at 01:33 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 27, 2017

Advice on keys for 3000s, and KSAM files

When building a TurboIMAGE database, is it possible to have IMAGE automatically sort a segmented index for the key field?

Gilles Schipper says

No, but you can create TurboIMAGE b-tree index files, which allow generic and range searches on items that are indexed -- specifically, master dataset key items. Only master dataset key items can be associated with b-tree index files.

You can find out more starting at Chapter 11 of the TurboIMAGE manual.
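
Adding the indexes to an existing base is a DBUTIL job -- a sketch, with MYDB as a placeholder and assuming the ADDINDEX and BTREEMODE1 options of later TurboIMAGE releases:

:RUN DBUTIL.PUB.SYS
>ADDINDEX MYDB FOR ALL
>SET MYDB BTREEMODE1=ON
>EXIT

With BTREEMODE1 on, applications can pass generic keys to DBFIND without code changes.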

How can I reduce the size of my existing KSAM files? I have removed lots of records from the system and the KSAM files are consuming lots of magnetic real estate, even though there are few records left.

Chuck Trites says

Make a copy of the KSAM file. Then use the verify in KSAMUTIL to get the specs of the file. Purge the KSAMFIL and the KEYFILE if there is one. Build the KSAM file with the specs. FCOPY from the copy to the new KSAM file and you are done. It won't copy the deleted records to the new file.

Francois Desrochers explains

Do a LISTF,5 to get the current key definitions.

Build a temporary output file with all the same attributes:

:BUILD KSNEW;REC=-80,,F,ASCII;KSAMXL;KEY=(B,1,10)

Copy the records from the original file to the temporary file:

:FCOPY FROM=KSTEST;TO=KSNEW

Purge the original file and rename the temporary file:

:PURGE KSTEST
:RENAME KSNEW,KSTEST

Posted by Ron Seybold at 08:24 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 25, 2017

OpenSSL's gaps for 3000s surface again

HP did its best, considering what was left of the MPE/iX lab budget, to move the server into modern security protocols. Much of the work was done after the company announced it would end its 3000 business. The gaps in that work are still being talked about today.

A message on the 3000 newsgroup-mailing list noted that installing the SFTP package for the 3000 uncovers one gap in software. John Clogg at Cerro Wire said that "I successfully generated a key pair and loaded the public key on the server, but that didn't solve the 'No key exchange algorithm' problem. One posting I found seemed to suggest that the problem was an old version of the SSL library that did not support the encryption the server was trying to use." A note on enabling the 3000's OpenSSL from 2010 still wished for a library newer than what's left on MPE/iX.

The work that remains to be done before a 3000 can pass sensitive info via SFTP has been on a community wish list for many years. Backups over SFTP still await updates to the SSL library. At least the server has a way to preserve file characteristics: filecode, recsize, blockfactor, and type. Preserving these attributes means a file can be moved to any offsite storage that can communicate with the MPE/iX system. Posix on MPE/iX comes to the rescue.

In the heart of the financial industry in 2003, a modest-sized HP 3000 connected to more than 100 customers through a secure Internet proxy server. That encryption combination was emerging as HP went into its last quarter of sales for the system. But today's standards are miles ahead of those of 2003.

"The old OpenSSL library does not support the ciphers needed to meet current standards," Clogg said. "I was able to make the connection work because the FTP service provider has a configuration setting to enable "insecure old ciphers." Fortunately, this will work for our purposes, but it would be unacceptable if we were transferring banking, credit card or PII data."

The 3000's OpenSSL library is older than 1.0.1e, which another homesteader says is the cutoff for security that protects from the Heartbleed hacks and RSA key generation compromises.

James Byrne of Hart & Lyne said

The appropriate fix is to update the SFTP client software and associated OpenSSL libraries to versions which possess the high grade key exchange algorithms required by the sshd server. But given the stage of life the HP 3000 has entered, that may not be possible.

We handled a similar problem some time in the past by setting up a Linux host to act as an SFTP proxy. We connected the HP 3000 to the proxy via a cross-over cable to a NIC devoted solely to the HP 3000. Files were then securely transferred between the proxy and the HP 3000 via plain old FTP.
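
The 3000's half of that arrangement is ordinary FTP across the crossover link -- a sketch, with the address, user, and filename all placeholders:

:FTP.ARPA.SYS
ftp> open 192.168.10.2
ftp> user mpexfer
ftp> put DAILYRPT
ftp> quit

The Linux proxy then forwards the file onward over SFTP, with a current OpenSSL behind it.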

Clogg hoped that "Maybe some porting guru will do a port of the current SSH and SSL libraries someday. In the meantime, James' use of an intermediate server is probably the best solution."

Posted by Ron Seybold at 07:40 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 20, 2017

Fine-tune: Database passwords, slow clocks

We are trying to access a database on our old system using QUERY and it is asking for a password. I have done a LISTF ,-3 on the database, but there is no lockword listed (which I assumed would be the password). Where do I find the password assigned to a database?

John Burke replied

Assuming you do not have access to the original schema and you want to know what the password is, not just access the database, then sign on as the creator in the group with the database, run DBUTIL.PUB.SYS and issue the command SHOW databasename PASSWORDS.
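
In command form, Burke's advice looks like this sketch (MYDB and the logon are placeholders; the logon must be the base's creator):

:HELLO MGR.PROD,DBGROUP
:RUN DBUTIL.PUB.SYS
>SHOW MYDB PASSWORDS
>EXIT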

Mike Church and Joseph Dolliver added

If you just want to access the database, log on to the system as the database creator and, when asked for the password, enter a semicolon (“;”) and hit Return.

Why is my system clock running slow? Our HP 3000 loses about one minute per day.

Bob J. replies

One possibility was addressed by a firmware update. HP's text from a CPU firmware (41.33) update mentions:

“System clock (software maintained) loses time. The time loss occurs randomly and may result in large losses over a relatively short time period. Occurrences of the above problem have only been reported against the HP 3000 979KS/x00 (Mohawk) systems. Software applications that perform frequent calling of a PDC routine, PDC_CHASSIS, affect the amount of time lost by the system clock. Your hardware support company should be happy to update for you.”

[Editor's note: as this question was posed a few years ago, today's hardware support company will be an independent one. We've always recommended Pivital Solutions.]

Tongue firmly in cheek, Wirt Atmar noted

My first guess would be relativistic time dilation effects as viewed by an observer at a distance due to the fact that you’re now migrating off of the HP 3000 at an ever accelerating rate. My second guess, although it’s less likely, would be that your machine has found out that it’s about ready to be abandoned and is so depressed that it simply can no longer work at normal speed. We’ve certainly kept this information from our HP 3000s. There’s just no reason that they need to know this kind of thing at the moment.

And in the same vein, Bernie Sherrard added this, referring to HP's promised end of 3000 support on Dec. 31, 2006

Look at the bright side. At a loss of one minute per day, you won’t get to 12/31/2006 until 2 AM on 1/2/2007. So, you will get 26 hours of support beyond everyone else.

Posted by Ron Seybold at 08:57 PM in Hidden Value, Homesteading | Permalink | Comments (0)

October 16, 2017

Getting the Message Across for MPE/iX

Not long ago, the HP 3000 community was wondering about the limits of message files in the operating system. HP introduced the feature well back in the 20th Century, but only took Message Files into Native Mode with MPE/iX 5.0 -- a release every HP 3000 still operating today has long since reached. The message file, according to HP's documentation, is the heart of the 3000's file system InterProcess Communication.

Message files reside partly in memory and partly on disk. MPE XL uses the memory buffer part as much as possible, to achieve the best performance. The disc portion of the message file is used only as secondary storage in case the memory buffer part overflows. For many users of IPC, MPE XL never accesses the disc portion of the message file.

Yes, that says MPE XL up there. The facility has been around a long time.

What do you do with message files? A program could open a message file and write a data record every 2 seconds. The data record could be the dateline plus the 2-word return from the CLOCK intrinsic. In another example, a message file could be used to enable soft interrupts. The program might then open a log file to write progress messages from the interrupt handler.
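
A taste of that producer/consumer behavior is available right from the CI -- a rough sketch, assuming CI I/O redirection and the MSG filetype on a current release:

:BUILD MYMSG;REC=-80,,F,ASCII;MSG
:ECHO first record >> MYMSG
:ECHO second record >> MYMSG
:PRINT MYMSG

The MSG keyword makes MYMSG a message file. Reads are destructive, so the PRINT consumes the records in arrival order -- the same first-in, first-out handoff two cooperating processes would see.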

HP's examples of using message files are illustrated using Pascal/XL, so you know this is 3000-specific technology. You'd think they'd be little-used by now, but this month the developers on the 3000 mailing list were asking about limits for the number of message files. An early answer was 63, but Stan Sieler used a classic 3000-era method to discover it: testing.

The answer is 4083. Or, why testing counts.

I just successfully opened 4,083 new message files from one process. Since the max-files-per-process is 4095, I suspect I could probably have squeezed in a couple more, but my test program already had some files open.

That this programming facility is still in use seems to suggest it's got utility left. Multiple programs and processes use message files to communicate. HP explains in an extensive document, "Suppose that a large programming task is to be divided into two processes. One process will interface with the user. This process is referred to as the "supervisor" process. It does some processing tasks itself and offloads others to a "server" process. This process only handles requests from the supervisor and returns the results."

Posted by Ron Seybold at 07:29 PM in Hidden Value, Homesteading | Permalink | Comments (0)