
January 17, 2018

VerraDyne adds new 3000 migration savvy

The HP 3000 has journeyed on the migration path for more than 16 years. The journey's length hasn't kept the community from gaining new resources to give an MPE/iX datacenter a fresh home, though. VerraDyne takes a bow this year with an offer of skills and service rooted in 3000 transitions. The Transition Era isn't over yet, and Windows remains the most likely destination for the remaining journeys.

In-house application suites make up the biggest part of the homesteading HP 3000s. Business Development VP Bruce McRitchie said his MPE experience began in the era before MPE/XL, running the servers at McCloud-Bishop while other partners worked at System House during the 1980s.

In those days the transitions came off of Wang and DEC systems, he said, as well as changes for HP 3000 customers. The work in those days was more often called a conversion than a migration. In the years since, replacing an in-house solution with a package has been a common choice for migrations. Package replacements have their challenges, though. McRitchie reminds us that custom modifications can make replacement a weak choice, and often a business must change its operations to meet the capabilities of a package. There are sometimes data conversions, too.

In contrast, the VerraDyne migration solution is a native implementation in a target environment with no emulation, middleware, or any black-box approach. ADO or ODBC enables database access when a VerraDyne project is complete, usually anywhere from three months to a year from code turnover to return to client. Microsoft's .NET platform is a solution that's worked in prior migrations. But there have also been projects where COBOL II has been moved to Fujitsu or AcuCobol.

Matinelli was an organization with a unique challenge for a VerraDyne migration: HP 3000 Basic became VB.NET. Other clients have been the Medford Schools and BASF International (both to Fujitsu), plus the Jewelers Board of Trade and the Oklahoma Teacher Retirement System (both to AcuCobol). The experience set also runs to the Micro Focus COBOL product lineup.

Projects for 3000 migrations are bid by lines of code to be moved. Screens in VPlus are converted to WebForms or WinForms for Windows-bound migrations. For systems headed to a Linux or Unix platform, the forms are converted to screen sections or JavaServer pages.

The typically knotty problem of replacing HP intrinsics is handled by rewrites into COBOL or a language the migrating site chooses. MPE JCL is converted to Windows or Unix command scripts. If there's a scripting language a site prefers, VerraDyne can target that as well.

"Every site has some of that specialized MPE/iX functionality," McRitchie said. He added that a migrated system, whether landing on a Windows VB.NET or C# program base, or in software converted to Java for Unix, is delivered completely compatible. "Bug for bug compatible," he quipped, following the fundamental best practices for every migration: what's running successfully on a 3000 will run on the new migrated platform.

IMAGE migration practices move data to SQL Server, Oracle, or other databases selected by a site. Any third-party indexes used, such as Omnidex, are converted to database indexes or views. All relations between tables, like automatic or manual masters, are preserved. Conversion programs are included to convert IMAGE data from the HP 3000 to a selected database. VerraDyne provides source to migrate all IMAGE intrinsics, such as DBOPEN, DBUPDATE, or DBPUT.
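The shape of that intrinsic-to-SQL mapping can be sketched in a few lines. This is an illustrative stand-in, not VerraDyne's generated code; the table and column names, and the Python rendering, are invented for the example:

```python
# Hypothetical mapping from IMAGE intrinsic calls to the SQL statements
# that replace them after migration; templates and names are examples only.
IMAGE_TO_SQL = {
    "DBGET":    "SELECT * FROM {table} WHERE {key} = ?",
    "DBPUT":    "INSERT INTO {table} ({cols}) VALUES ({params})",
    "DBUPDATE": "UPDATE {table} SET {assignments} WHERE {key} = ?",
    "DBDELETE": "DELETE FROM {table} WHERE {key} = ?",
}

def sql_for(intrinsic: str, **parts: str) -> str:
    """Return the SQL template standing in for one IMAGE intrinsic call."""
    return IMAGE_TO_SQL[intrinsic].format(**parts)

print(sql_for("DBUPDATE", table="ORDERS", assignments="STATUS = ?", key="ORDER_NO"))
```

A real conversion also has to carry over IMAGE semantics that SQL lacks directly, such as chained DBGET modes against detail sets, which is part of why migrating the intrinsic source matters.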

Migrating in-house code, as sites always have done, lets a site retain the heavy investment made in highly customized applications. Preservation of resources is what's kept HP 3000s running well into their second decade of post-HP manufacture.

Migrations of 3000 in-house systems are likely to come from sites that have been running applications since before the 1990s. McRitchie and his consulting partners Tony Blinco and Masoud Entezari have experience that runs back to the days when 3000 disks were large enough to vibrate when they were busiest. "They'd rub the paint off each other," he said with a wistful chuckle, sounding very much like an expert seasoned on servers first booted several decades ago.

09:55 PM in Migration, Newsmakers | Permalink | Comments (0)

January 15, 2018

Emulation or iron meets Classic 3000 needs

A few weeks ago the 3000 community was polled for a legendary box. One of the most senior editions of Classic 3000s, a Series 42, came up on the Cypress Technology Wanted to Buy list. The 42 was the first 3000 to be adopted in widespread swaths of the business world. It's not easy to imagine what a serious computing manager would need from a Series 42, considering the server was introduced 35 years ago.

These Classic 3000s, the pre-RISC generation, sparked enough business to lead HP to create the Precision RISC architecture, first realized in its Unix servers. The HP 9000 hit the Hewlett-Packard customer base and 3000 owners more than a year before the 3000's RISC servers shipped. Without the success of the Classic 3000s, though, nobody could have bought such a replacement Unix server for MPE V. Applications drive platform decisions, and creating RISC had a sting embedded for the less-popular MPE: Unix apps and databases had more vendors.

That need for a Series 42 seems specific, as if there's a component inside that can fulfill a requirement. But if it's a need for an MPE V system, an emulator for the Classic 3000s continues to improve. Last week the volunteers who've created an MPE V simulator announced a new version. The seventh release of the HP 3000 Series III simulator is now available from the Computer History Simulation Project (SIMH) site.

The SIMH software will not replace a production HP 3000 that's still serving in the field, or even be able to step in for an archival 3000. That's a job for the Stromasys Charon HPA virtualized server. But the SIMH software includes a preconfigured MPE-V/R disc image. MPE V isn't a license-protected product like MPE/iX.

Some CIOs might wonder what any MPE system, running MPE V or MPE/iX, might provide to a datacenter in 2018. The answers are continuity and economy, elements that are especially evident in any emulated version of a 3000. Old iron is on the market at affordable prices. If a PA-RISC system can be sold for $1,200, though, it's interesting to consider what a 35-year-old server might fetch. Or who would even have one in working order to sell.

Software like Charon, and to a lesser extent SIMH, earns its consideration more easily than old iron. Virtualization is so embedded in IT plans that it's a bit of a ding to admit you don't virtualize somewhere.

The deep-in-the-mists tech of the ATC terminals is a big part of the new SIMH. The new capability shows off the limitations that make it obvious why PA-RISC 3000s are still genuine data processing solutions. Terminal IO uploads via Telnet using a Reflection terminal emulator are now over 100 times faster than in earlier releases of the simulator. As a reminder of the IT world's pace during the 1980s when MPE V was king, the upload time for a one-megabyte file has dropped from 69 minutes to 30 seconds.
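The speedup arithmetic is easy to check; a quick sketch, using Python purely as a calculator:

```python
# One-megabyte upload: 69 minutes before, 30 seconds in the new release
old_seconds = 69 * 60
new_seconds = 30

speedup = old_seconds / new_seconds
print(speedup)        # comfortably "over 100 times faster"

mb_per_minute = 1 / (new_seconds / 60)
print(mb_per_minute)  # megabytes per minute at the new rate
```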

That's still a 2-MB-a-minute pace. The Simulator's User Guide shows the ATC setup required for successful Reflection transfers. That's from the era when locating a Reflection client program on a 3000 was essential for moving data in many applications. It was also the era when HP was manufacturing its own disc drives in Boise, Idaho. The burn-in testing for those drives in that factory was powered by a massive row of Series 42s. My 1988 tour of Boise was the last time I saw a Series 42 in use.

07:16 PM in Homesteading, News Outta HP | Permalink | Comments (0)

January 12, 2018

Disaster Recovery Optimization Techniques

Newswire Classic

Editor's Note: The 3000s still in service continue to require disaster recovery processes and plans. Here's a primer on crafting what's needed.

By Gilles Schipper
Newswire Homesteading Editor

While working with a customer on the design and implementation of a disaster recovery (DR) plan for their large HP 3000 system, it became apparent that the mechanics of its implementation had room for improvement.

In this specific example, the customer has a production N-Class HP 3000 in its primary location and a backup HP 3000 Series 969 system in a secondary location several hundred miles removed from the primary.

The process of implementing the DR was more manual-intensive than it needed to be. As an aside, it was completed entirely from a remote location — thanks to the Internet, VPNs and the use of the HP Secure Web Console on the 969.

One of the most labor-intensive aspects of the DR exercise was to rebuild the IO configuration of the DR machine (the 969) from the full backup tape of the production N-Class machine, which included an integrated system load tape (SLT) as part of the backup.

The ability to integrate the SLT on the same tape as the full backup is very convenient. It results in a simplified recovery procedure as well as the assurance that the SLT to be used will be as current as possible.

When rebuilding a system from scratch from an SLT/backup, if the target system (in this case the 969) differs in architecture from the source system (N-4000), it is usually necessary to modify all the device paths and device configuration specifications with SYSGEN, and then reboot the system, in order to even be able to use the tape drive of the target system to restore any files at all.

(This would be apart from the files restored during the INSTALL process — which does not require proper configuration of any IO component at all).

Some would argue that this system re-configuration needs to be completed only once since any future system rebuilds would require only a “data refresh” rather than a complete system re-INSTALL.

I say that this would be true only in very stable system environments where IO configurations — including network printer configurations — are static and where TurboIMAGE transaction logging is not utilized. Otherwise there could be unpleasant results and complications from using stale configurations in a real disaster recovery situation.

In any case, there really is no reason to take any chances, since the labor-intensive step of creating a proper DR target system configuration environment is achievable minus the labor-intensive part – or at least without repetition of the manual chore of re-configuring the target system each time the DR is exercised.

Unless both the production system and the DR system are architecturally similar (i.e. they belong to same HP 3000 family) the configuration of the target system (the DR machine) cloned from the source system (the production machine) will be non-trivial.

At a minimum, before data restore can begin on the DR machine, the path hierarchy of the tape drive associated with the backup tape must be re-created. Further, if the subsequent restore requires more than just the system disk, all the path components for all the disk drives must also be created.

In a real DR situation, this task can be daunting at best — particularly since it may be difficult in that event to access the appropriate documentation that describes the pertinent SYSGEN configuration requirements. It would be preferable to complete this configuration well in advance of the hope-it-never-happens event.

In fact, it is entirely possible to create an appropriate DR configuration environment that is (almost) completely integrated into one’s production environment.

SYSGEN IO requirements

In order to provision a potential DR HP 3000 system's IO configuration requirements into an existing production HP 3000 SLT, it is only necessary to configure all of the DR path components into the existing production system's IO configuration.

The fact that these paths do not exist on the production (source) system is immaterial — as long as you can withstand the menacing, although perfectly innocuous, console error messages that accompany a reboot of a system so configured.

There is also the small matter of actual device numbers — and that is why I included the “almost” when mentioning “completely integrated” earlier.

Clearly, it is not possible to have duplicate device numbers when configuring both production and DR devices into the production SYSGEN IO configuration. So, in order to distinguish between the two systems (one the real production, the other the virtual DR), I simply add 100 (you can choose any number) to the device numbers associated with the virtual machine. Then when actually testing or invoking the DR process, it is a simple matter to change the device numbers in a batch job designed for that purpose.

Another batch job could be pre-built that would add the appropriate disk drives and volume sets to the system's disk pool, using VOLUTIL. These batch jobs would be included in the full backup tape and could be restored almost immediately following the INSTALL by referencing :file tape;dev=107 (to use my example of adding 100 to the corresponding virtual device).

The restore command would be :restore *tape;{fileset};directory;olddate;keep;create;show (where {fileset} corresponds to the fileset that includes the appropriate device-number-change and VOLUTIL batch jobs). One could take this technique one step further in the case where the DR target machine is unknown.

In such a situation, you could create a SYSGEN IO configuration that includes path constructs for any possible virtual machine that you could think of and include them in the host configuration – adding 100 for devices associated with virtual machine 1, 200 for virtual machine 2, and so on.
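The numbering convention is simple enough to express as a rule. Here is a sketch (Python for illustration; the 100-per-machine offset follows the article's example, and the device numbers are invented):

```python
def dr_devnum(production_dev: int, machine: int = 1) -> int:
    """Device number for DR virtual machine N: the production device
    number plus N * 100, per the convention described above."""
    return production_dev + 100 * machine

# Production tape drive 7 maps to dev 107 on virtual machine 1,
# and to dev 207 on virtual machine 2
print(dr_devnum(7))
print(dr_devnum(7, 2))
```

The batch job that renumbers devices during a DR test then only needs to apply the inverse: subtract the machine's offset to recover the production numbering.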

07:23 PM in Hidden Value | Permalink | Comments (0)

January 11, 2018

Rootstock acquires ERP vendor Kenandy

The world of cloud-based ERP got a rumble today when Rootstock acquired competitor Kenandy. The Support Group is a MANMAN ERP service firm with a Kenandy migration on its resume this year, after moving Disston Tools off MANMAN and onto Kenandy. Support Group president Terry Floyd said the combination of the two leading cloud ERP companies looks like good news for the market.

"They're scaling up to get new business," he said, after sending us the tip about the connection of the firms. He compared the acquisition to the period in the 1990s when Computer Associates absorbed ASK Computer and MANMAN.

"After CA bought MANMAN, they kept on putting out releases and putting money into the company," Floyd said. "Salesforce must be behind this acquisition in some way."

Kenandy and Rootstock's software is built upon Salesforce and its Force platform and toolsets. A thorough article on the Diginomica website says that the deal was a result of a set of opportunities around a mega-deal and a key leader for a new unit at Kenandy. The plans to combine forces for the vendors include keeping development in play for both Rootstock and Kenandy products.

The Diginomica reporting by Brian Sommer says that Kenandy has a significant number of software engineers and a strong financial executive. "It's the talent [at Kenandy] that makes the deal fortuitous," Sommer wrote, "as Rootstock was ramping up for a lot of expensive and time-consuming recruiting activity." Rootstock, by taking on the vendor with a product that's replaced a 3000 at a discrete manufacturer, "is of more consequence to Salesforce."

Vendors like the Support Group seem likely to benefit from the acquisition. By Sommer's reckoning, Salesforce might not have known which vendor among its network of ERP partners to call for manufacturing prospects. "Now one call will [send] the right response and product onto the prospect."

ERP mergers don't always have this level of synergy. When Oracle bought JD Edwards/Peoplesoft, there was friction and disconnect between the organizations. Floyd said that as a result of the Kenandy acquisition, "There may be new business for us." Companies like the Support Group supply the front-line experience to migrate 3000 manufacturers to a cloud platform.

Expertise like what's been shown from TSG makes cloud ERP an attractive step forward for MANMAN sites ready to make a move. Rootstock CEO Pat Garrehy said that well-developed best practices aid manufacturers migrating from legacy ERP.

“When it comes to cloud ERP implementations, customer success is often determined by how you implement, not just what you implement,” said Garrehy. “Our combined company is dedicated to making the transition from legacy ERP easier for our customers. We welcome Kenandy customers into the Rootstock fold.”

09:46 PM in Migration, Newsmakers | Permalink | Comments (0)

January 08, 2018

Searching and finding in MPE/iX with MPEX

It's a world where it's ever-harder to find files of value. This week a story aired on NPR about a hapless young man who mislaid a digital Bitcoin wallet. The currency that was worth pennies eight years ago when he bought it has soared into the $15,000 range. Alas, it's up to a Bitcoin owner to find their own money, since the blockchain currency has no means for recovery. Another owner in the UK a few years back, James Howells, lost millions on a hard drive he'd tossed out. A trip to the landfill to search for it didn't reward him, either.

Being able to locate what you need on your HP 3000 involves going beyond the limits of MPE/iX. Searches with the Vesoft utility MPEX deliver more results, and deliver them faster, than any native capability.

Terry Floyd of the Support Group suggested MPEX as a searching solution. "MPEX with wildcards and date parameters is what I use for search," he said, "for instance:"

%LISTF @xyz@.@(CREDATE>12/1/2017),3    


%PRINT @.@;SEARCH="Look for this"

Seeing MPEX come up as a solution for search reminded us of a great column from the Transition Era for the 3000. Steve Hammond wrote "Inside Vesoft" for us during that time, when 3000s continued to hold not just data for organizations, but production-grade data, too.

Gonna find her, gonna find her, Well-ll-ll, searching
Yeah I’m goin’ searching, Searching every which a-way, yeh yeh

— The Coasters, 1957

By Steve Hammond

I have to admit it — I’m a bit of a pack rat. It drives my wife crazy and I’ve gotten better, but I still hold onto some things for sentimental reasons. I still have the program from the first game I ever saw my beloved Baltimore Colts play. On my desk is the second foul ball I ever caught (the first is on display in the bookcase). I have a mint condition Issue 1 of the HP Communicator — dated June 15, 1975 (inherited when our e3000 system manager retired). It notes that all support of MPE-B terminated that month, and the Planning Committee chairman of the HP 3000 Users Group was a gentleman from Walnut Creek named Bill Gates (okay, not that Bill Gates).

My problem is even when I know I have something, I just can’t find it. I had an item the Baseball Hall of Fame was interested in; they had no ticket stub from the 1979 World Series, which I had — seventh game no less. But it took me over two years before I literally stumbled across it.

I wish I could add some sort of easy search capabilities to my massive collection of junk like we have in MPEX.

The most commonly used is the search option in the PRINT command. But there are a couple of other ways to search that I’ve used over the years for different reasons, and we’ll look at those too.

In the olden days, when passwords were embedded in job streams, when we changed passwords, we would have to find every job with the password in it. A long, tedious task that never found all of them. And yes, the ultimate answer was converting to STREAMX, but that’s a column for another month.

When MPEX added the PRINT;SEARCH command, life became much easier and we found many uses for it. As the versions of MPEX evolved, the command gained power. The simplest form is:

%PRINT @.JOBS.PROD; SEARCH="FILE"

This will search for any line with the word “FILE” in it — exactly “FILE”, not “file” or “File” or “fILE”, you get the picture. Easily solved with:

%PRINT @.JOBS.PROD; SEARCH=caseless “file”

%PRINT @.JOBS.PROD; SEARCH=CL “file”

Either of those will get you the word “file” in any form.

You can do boolean searches:

%PRINT @.JOBS.PROD; SEARCH="FILE" AND "TEMP"
%PRINT @.JOBS.PROD; SEARCH="FILE" OR "RUN"
%PRINT @.JOBS.PROD; SEARCH="RUN" AND NOT "PRODPGM"

The first returns any file that has a line or lines with both FILE and TEMP, the second looks for FILE or RUN and the third looks for files with any lines that contain RUN but do not have PRODPGM.

You can even delimit your searches — let’s say you have a tape drive that you call “DAT”. Well, doing the search

%PRINT @.JOBS.PROD; SEARCH="DAT"
will find the reference to DAT as the tape drive you’re interested in, but as a bonus feature it will also find “DATA”, “DATABASE”, etc. By using the DELIMIT option:

%PRINT @.JOBS.PROD; SEARCH=delimit "DAT"
you will find only occurrences of “DAT” with non-alphanumeric characters before and after it. Taking this a step further, you can right and left delimit your search:

%PRINT @.JOBS.PROD; SEARCH=ldelim "DB"
%PRINT @.JOBS.PROD; SEARCH=rdelim "DB"

The first will find files with lines containing DBSTORE but not PRODDB, and the second vice versa. You can even add the caseless option to the delimited option:

%PRINT @.JOBS.PROD; SEARCH=caseless rdelim “DB”

Caseless can be abbreviated “CL” and delim can be “D”.

But there’s another way you can do searches, which I found very useful — searches done out of a LISTF, building an indirect file for later use. You can’t do that with PRINT, because PRINT, well, prints the result, showing the occurrence of the string you are searching for. Let’s say we were changing the name of a database — DEVDB to PRODDB. All I really need to know is all the jobs that have a reference to “DEVDB” — I don’t need to see the context of the string, I just need to know the files. And I need to know the fully-qualified name of each file, and I want it in a file named INDFILE1.

This is where the file attributes FCONTAINS, FSEARCHSTRING, and FSEARCHEXP come into play. The second and last are similar because you have to state that there are greater than 0 occurrences in the file (or any value, for that matter), but otherwise they all work the same.

%LISTF @.JOBS.PROD(FCONTAINS("DEVDB")),6;INDFILE1

%LISTF @.JOBS.PROD(FSEARCHSTRING("DEVDB")>0),6;INDFILE1

%LISTF @.JOBS.PROD(FSEARCHEXP("DEVDB")>0),6;INDFILE1
As I said, FCONTAINS looks for the existence of that string in a file and basically FSEARCHSTRING does the same thing. But with FSEARCHSTRING, you can do:

%LISTF @.JOBS.PROD(FSEARCHSTRING("DEVDB")>=3),6;INDFILE1
which says you want to find a file that has any line with a minimum of three occurrences of “DEVDB”. Why would you want to do that? If you haven’t learned yet that you don’t ask that question, then we need to talk later.

The better question is why the “FSEARCHEXP” attribute? I’m glad you asked. This is the one of the three attributes that lets you do a caseless search:

%LISTF @.JOBS.PROD(FSEARCHEXP('"DEVDB"')>0),6;INDFILE1
Note the additional set of quotes in there. FSEARCHEXP also lets you do boolean searches, but that’s just getting way too complicated!

The final question is why did I do it this way? If I was going to change a string, I would do an overnight job that found the files that needed to be changed. I would put the output of that process into an indirect file in the LISTF,6 output format (fully qualified file name). I would then use the indirect file as input for the job or process to change the string.
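For readers outside MPE, the same three-step pattern (find the affected files, save the list, then drive the change from the list) can be sketched in Python. The DEVDB/PRODDB strings follow the column's example; this is an analog of the MPEX workflow, not a Vesoft tool:

```python
from pathlib import Path

def build_indirect_file(root: str, needle: str, listfile: str) -> list[str]:
    """Steps 1-2: find files containing `needle` and save their fully
    qualified paths, one per line (the analog of a LISTF,6 indirect file)."""
    hits = [str(p) for p in sorted(Path(root).rglob("*"))
            if p.is_file() and needle in p.read_text(errors="ignore")]
    Path(listfile).write_text("\n".join(hits) + "\n")
    return hits

def apply_change(listfile: str, old: str, new: str) -> None:
    """Step 3: use the saved list to drive the string change."""
    for line in Path(listfile).read_text().splitlines():
        if line:
            p = Path(line)
            p.write_text(p.read_text().replace(old, new))
```

Running the search as its own overnight step, then consuming the list later, keeps the expensive scan separate from the change itself, which is exactly the reasoning in the paragraph above.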


09:53 PM in Homesteading | Permalink | Comments (0)

January 05, 2018

Friday Fine-tune: How to discover the creation date of a STORE tape

Newswire Classic

By John Burke

It is probably more and more likely that, as the years pass by, you will discover a STORE tape and wonder when it was created. Therefore it is a good idea to review how to do this. I started out writing “how to easily do this,” but realized there is nothing easy about it — since it is not well-documented and if you just want the creation date, you have to do a bit of a kludge to get it. Why not something better?

It turns out the ;LISTDIR option of RESTORE is the best you can do. But if you do not want a list of all the files on the tape, you need to feed the command the name of some dummy, non-existent file. ;LISTDIR will also display the command used to create the tape.

By the way, this only works with NMSTORE tapes. For example, when ;LISTDIR is used on a SYSDUMP tape that also stored files, you get something like this (note that even though you are using the RESTORE command, if it contains the ;LISTDIR option, nothing is actually restored):

:restore *t;dummy;listdir


FRI, DEC 31, 2004, 3:22 PM


WED, MAY 7, 2003, 7:06 AM



10:43 PM in Hidden Value, Homesteading | Permalink | Comments (0)

January 03, 2018

How to make a date that lasts on MPE/iX

Now that it's 2018, there are fewer than 10 years remaining before HP's intrinsic for date handling on MPE/iX loses its senses. CALENDAR's upcoming problems have fixes. There's a DIY method that in-house application developers can use to make dates in 2028 read correctly, too.

The key to this DIY repair is to intercept a formatting intrinsic for CALENDAR.

CALENDAR returns two numbers: a "year" from 0 to 127 and a "day of year" from 1 to 366.
FMTCALENDAR takes those two numbers and turns them into a string like Monday, January 1, 1900. It takes the "year," adds it to 1900, and displays that. "In a sense," explains Allegro's Steve Cooper, "that's where things 'go wrong'."
If one intercepts FMTCALENDAR and replaces it with a custom routine, the routine can say: if the "year" is 0 to 50, add 2028 to it; otherwise, add 1900 as it always did. That would push the problem out another 50 years.
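A minimal sketch of that interception logic, in Python for illustration (FMTCALENDAR itself is an MPE intrinsic, and the 0-to-50 pivot is the assumption Cooper describes):

```python
def remapped_year(calendar_year: int) -> int:
    """Map CALENDAR's 7-bit 'year' field (0-127) to a four-digit year.

    After 2027 the field wraps to 0, so an intercepted FMTCALENDAR can
    treat 0-50 as 2028-2078 and leave the rest on the 1900 base."""
    if not 0 <= calendar_year <= 127:
        raise ValueError("CALENDAR year field holds only 0..127")
    return (2028 if calendar_year <= 50 else 1900) + calendar_year

print(remapped_year(118))  # 2018: unchanged, 1900 + 118
print(remapped_year(0))    # 2028: the wrapped year now reads correctly
```

The tradeoff is that dates originally from 1900 through 1950 would now display as 2028 through 2078, which is why the pivot year is a judgment call for each application.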
This interception task might be above your organization's pay grade. If that's true, there are 3000-focused companies that can help with that work. These kinds of repairs to applications are the beginning of life-extension for MPE/iX systems. There might be more to adjust, so it's a good idea to get some help while the community still has options for support.

11:48 AM in Hidden Value, Homesteading | Permalink | Comments (0)