
September 28, 2018

The Migration Dilemma

This article first ran in the opening month of the 3000's migration era. For the companies still working through a migration, most of the issues remain in play. More to the point for the homesteading 3000 site: These are high-level reasons why a migration isn't on the horizon.

NewsWire Classic

By Curtis Stordahl

Well, the other shoe has dropped. Hewlett-Packard has given their HP 3000 customer base just five years to migrate to another platform. This is a daunting task that is full of risk.

The biggest migration risk factor is probably that the complexity of the applications on the HP 3000 may have been severely underestimated. These applications can be over 20 years old, and some have had scores of programmers continuously evolving the original application without any supporting documentation. Consequently, it is possible that nobody knows just how big and complex these applications are. Many migration projects are also led by personnel with no experience on the HP 3000 platform, who have a perception of it being something like an old dBase application on an IBM PC.

Many organizations will be lured by the “flavor of the month” technology and want to completely redevelop their HP 3000 applications accordingly. This is also full of hidden risks.

A major redevelopment is going to essentially have three project teams. The first project team is going to be responsible for the development of the new application. This team faces multiple risks: of underestimating the complexity of the legacy application they are replacing; or of completing the development only to find it does not meet the minimum requirements and cannot be implemented without extensive rework.

At that point, the team could then find it impossible to obtain the resources needed to complete the project. The technology they choose may not meet expectations and so will not satisfy the minimum requirements. If you go outside your organization for new application development, the vendor you contract to do the work could go bust.

A second project team needs to migrate the data to the new platform. A radical change in design could make this difficult if not impossible.

A third project team needs to provide ongoing support to the legacy application. A major redevelopment could be years in the making, and you can’t stop the business from evolving during that time. This introduces additional risk into both the development and migration project teams because they must aim at a moving target.

There is an overall risk that a migration project could fail, leaving you with no additional funding or time to recover from the failure.

Packaged app migration

Many organizations will be lured by the promises of packaged software vendors. A package reduces migration risk. But this isn’t like just plugging in a new e-mail package.

The biggest risk factor is going to be implementation. Instead of building an application to conform to your business, you must now make your business conform to the application. Managers outside the IT organization must buy into this and revise policy and procedures accordingly. So much is needed outside the IT organization to make it work that it must be managed from the executive level. If there is insufficient commitment at the executive level, then the project is destined to fail.

Training users is another risk factor. If you train too early, you risk losing them through attrition before you implement. If you train too late, they may not retain what they learned or understand what is expected of them.

 

09:20 PM in Migration | Permalink | Comments (0)

September 26, 2018

Why and When to Leave Platform-Land

Life in the 3000 community revolved around platforms. We used to think about these as operating systems. Long ago it was time to change that thinking and call the combo of servers and the OS a platform. You could think of that era as the Land of Oz, instead of OS. It might be time for 3000 owners to change their thinking about computers as platforms. It depends on what else is doing service in your datacenter.

For a small percentage of 3000 owners, the servers built by HP are all that runs in what we once called the computer room. They live in what one storage vendor, one who knows the 3000 well, calls platform-land. Everything in platform-land is connected to a 3000, so the cross-platform benefits of multi-server storage just aren't needed.

Companies that live fully in platform-land are using HP-branded devices built exclusively for the 3000. That's the way HP used to qualify its peripherals: tested for MPE/iX. For a while during the years after HP's "we're outta here" announcement, the vendor asserted that any other storage device was risky business. We covered those debates. The results showed the risks were not substantial.

HP's outta-here movement caused movement in the 3000 community, of course. Some of the movement was inbound instead of an exodus. Companies have turned to using Linux servers, more Windows Servers (2008 and later), and even some Unix boxes from HP, Sun, or IBM. That's the moment when a company starts to leave platform-land. You should leave it once you've got servers running multiple operating systems and need to leverage networks and peripherals across all of them. That's the When.

The Why is a little more complicated. 3000s and the Stromasys servers that have replaced the MPE/iX hosts are cradles for the applications that companies don't want to drop. The companies shouldn't drop an application just because it's on the wrong platform. Applications need to exit when they don't serve the business logic anymore. Leaving platform-land supports the continued service from MPE/iX apps. Like I said, it's complicated.

What we know about managing IT in the year 2018 is that it's complex in a way we couldn't imagine in the 1980s and 1990s. The diversity of devices is what's changed the community for good. Even HP began to understand that a multiple-platform storage device was a better value to anyone who didn't live in platform-land.

For almost all of the HP 3000 customers left today, their server isn't the only box in the shop. They have added Windows, Linux and Unix, yes. More often, those operating systems aren't part of the decision loop. We always said that it was the applications, stupid, when deciding what computer was going to get the assignment. The 3000 survives because of its apps, but its survival is beset by more than the calendar pages slipping away. The differences in connected devices chip away at the 3000's rightful place. 

Oh, and not only do we have to try to retain MPE/iX expertise, we also have to keep that Jamaica disk set running: a set that does nothing for anything else in the datacenter. When it fails, we have to find a replacement for that device with moving components. An old replacement, at that. If we'd made the 3000 ready for a platform-agnostic environment, so it could use modern storage like the later-generation XP arrays, we could justify a replacement and get something newer.

Now the 3000 goes onto the bubble again. The most-quoted reason for companies to scrap their MPE environments, their platform, is aging hardware and the worry over replacing it. Peripherals like storage and tape now have their own worry points, according to support companies serving 3000s. Tape is a crapshoot. People have been warning about old disks for a long time.

The Why to leave platform-land is all about the future. When a datacenter is no longer thinking about 3000-only hardware -- when that hardware can be commodity high-class Intel servers or an array that's beyond the reach of platform-land -- the apps can stay off the bubble. The need to migrate recedes, so long as there's expertise about MPE/iX somewhere.

MPE know-how isn't automatic anymore, but there are support companies that specialize in it. The future of self-maintaining that know-how isn't bright. Indie companies can help, and at the least, the 3000's devices need to be useful to the rest of the hardware in the shop. Until that's done, being stuck in platform-land is just one more way to hurry to an exit from everything still working in the 3000 world, built and configured in the era when platforms were like the heroes from Oz. We'll miss you most, Tin Man.

08:06 PM in Homesteading, Migration | Permalink | Comments (0)

September 24, 2018

Data migration integrity can be in the garage

In the world of book publishing, a customer with a legacy of using HP 3000s is pushing users through a migration. The Nielsen name has long been associated with TV ratings, but that former HP 3000 customer tracks much more. This month the book sales service BookScan is getting a migration to a new system. The old system was called Nielsen and the new one is NPD's Decision Key. The transfer is expected to have its bumps. Some might affect who lands on this year's bestseller lists.

Publisher's Marketplace reports, "BookScan will complete its transition from the old Nielsen platform to the NPD system and at outset, the biggest adjustment for users may be getting accustomed to system updates on a different day of the week. "

As the story unfolds, more changes are expected. NPD Books president Jonathan Stolper is predicting high integrity. "We're going to get it right," he said in a Publisher's Marketplace article. The Marketplace resells Nielsen data to authors, publishers and booksellers, so the forecast would of course be bright. Many data migrations have had this forecast.

But the data on this Nielsen system, some of which goes back to the HP 3000 era there, has deep roots. From the Marketplace report:

"We're talking about millions of titles, a system that goes back to 2004 in detail. There is a ton of data within this system. So it's only natural that there's probably going to be some – I don't want to call them hiccups, but some variances. Whenever you switch systems, there's some slight variance. People are going to have to realize that it's not an absolute match."

Then comes the upshot of the migration. Some books are going to "sell" better than others, depending on the data integrity for this year's sales.

The switchover will increase sales because the newer data collection system reaches into more points of sale. Gift shops and museum stores, for example, are in the new mix.

Those variances mean that some titles will see increased sales for 2018 in the new platform versus legacy BookScan, due to the broader information enabled by NPD, which collects retail point-of-sale data in broad fashion across many industries.

Data migration has been the heartland at MB Foster for the last three decades. Last week the company, which made the ODBC/SE database interfaces for HP 3000s, hosted a webinar on a concept called the Migration Garage. "A garage can be set up to deal with the end-to-end requirement, the staffing required to make the garage a success, and where the savings come from when a garage process is defined."

It took two years for the BookScan tracking system to migrate to the NPD platform. The NPD system, called Decision Key, will become the data of record for BookScan. The number of titles that have been printed with the word "Bestseller" on the cover probably hasn't been tracked. It's a powerful marketing term, though.

MB Foster's concept of a Migration Garage "is useful when there are a number of applications with the same requirements or that have a common set of data that is changing and similar changes have to be made to each application as it is transformed to its new environment."

Publishing is an industry that often clings to legacy practices, like paying published authors their royalties just twice a year. (Something about needing to count the returned books before the checks for the writers go to the agents, who send the money to the authors.) Nielsen is moving its data to a new vehicle this fall. Just as in every migration, the numbers will speak for themselves.

08:36 PM in Migration | Permalink | Comments (0)

September 21, 2018

Fine Tune: Storing in Parallel and to Tapes

Does the MPE/iX Store-to-Disc option allow for a ‘parallel store,’ analogous to a parallel store to tape? For example, when a parallel store to tape is performed, the store writes to two or more tape drives at the same time. Is there a parallel store-to-disc option that allows for the store to write to two or more disc files at the same time (as opposed to running multiple store-to-disc jobs)?

Gavin Scott and Joe Taylor reply

Yes, the same syntax for parallel stores works for disk files as well as tape files. I really don’t know if you would get any benefit from this, but if you went to the trouble of building your STD files on specific disks, then it might be worthwhile.
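
As a minimal sketch, assuming TurboStore's comma-separated parallel device list, with hypothetical file and account names:

FILE STD1=BACKUP1;DEV=DISC
FILE STD2=BACKUP2;DEV=DISC
STORE @.@.MYACCT;*STD1,*STD2;SHOW

Any benefit, as the reply suggests, likely depends on BACKUP1 and BACKUP2 having been built on separate disks.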

What is the recommended life or max usage of DLT tapes?

Half a million passes is the commonly used number for DLT III. One thing to remember is that when they talk about the number of passes (500,000 passes), it does not mean the number of tape mounts.

For SuperDLT tapes, the tape is divided into 448 physical tracks of 8 channels each, giving 56 logical tracks. This means that when you write a SuperDLT tape completely, you will have just completed 56 passes. If you read the tape completely, you will have done another 56 passes.

The DLT IV tapes (DLT 7000/8000) have a smaller number of physical and logical tracks, but the principle is the same. The number of passes for DLT IIIxt and DLT IV tapes is 1,000,000. The shelf life is 30 years for the DLT IIIxt and DLT IV tapes, and 20 for the DLT III.
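
To put those ratings in rough perspective, here is a back-of-the-envelope sketch that applies the SuperDLT geometry above to the DLT III rating (the media types differ, so treat the result as an order-of-magnitude figure):

448 physical tracks / 8 channels = 56 logical tracks, so one full write = 56 passes
one full write + one full read = 112 passes
500,000 rated passes / 112 passes per cycle = roughly 4,400 backup-and-verify cycles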

Our DDS drive gets cleaned regularly. Our tapes in rotation are fairly old, too. However, we are receiving this error even when we use brand new tapes. 

STORE ENCOUNTERED MEDIA WRITE ERROR ON LDEV 7 (S/R 1454)

The new tapes are Fuji media, not HP like our old ones.

John Burke replies:

Replace that drive. DDS drives are notorious for failing. Also, the drive cannot tell whether or not you are using branded tapes. I’ve used Fuji DDS tapes and have found them to be just as good as HP-branded tapes (note that HP did not actually manufacture the tapes). I have also gotten into the habit of replacing DDS tapes after about 25 uses. When compared to the value of a backup, this is a small expense to pay.

07:52 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 19, 2018

Wayback: HP's prop-up in a meltdown week

Ten years ago this week the HP shareholder community got a slender boost amid a storm of financial crisis around the world. While the US economy was in a meltdown, Hewlett-Packard -- still a single company -- made a fresh promise to buy back its stock for $8 billion. Companies of HP's size were being labelled Too Big to Fail. The snarl of the banking collapse would be a turning point for a Presidential election. A Wall Street Journal article on the buybacks called HP's move a display of strength. HP wanted to ensure its market capitalization wouldn't take a pounding.

HP was electing to pump a smaller buyback into its shares compared to a competitor's effort. Microsoft was announcing a $40 billion buyback in the same week. At the time, the two companies were trading at about the same share price. Hewlett-Packard was working through its final season with a 3000 lab, tying a bow on the final PowerPatch of the MPE era. One customer recently called that last 2008 release "MPE/iX 7.5.5."

The company was looking to get into a new operating system business in September of 2008, though. HP would be developing a server of its own built upon a core OS of Linux. HP closed down its Nashua, New Hampshire facility just a few months earlier. The offices where VMS was being revived were going dark. At least HP was still selling hardware and growing. We took note of the contrast between selling goods and shuffling financial paper.

Not all of the US economy is in tatters, despite what trouble is being trumpeted today. HP and Microsoft and Nike still run operations which supply product that the world still demands, product which can't be easily swapped in some shadowy back-door schemes like debt paper or mortgage hedges.

A decade later, much has changed, and yet not enough to help HP's enterprise OS customers. VMS development has been sold off to a third-party firm, VMS Software Inc. That move into Linux has created a low-cost business server line for HP which doesn't even mention an OS. Meanwhile, Microsoft's stock is trading above $120 a share and HP's split-up parts sell for between $15 and $27 a share, covering the HP Enterprise and PC siblings.

Last week Microsoft announced an impressive AI acquisition, Lobe. For its part, HP Enterprise announced it was refinancing its debt "to fund the repayment of the $1.05 billion outstanding principal amount of its 2.85% notes due 2018, the repayment of the $250 million outstanding principal amount of its floating rate notes due 2018, and for general corporate purposes." A decade ago financial headwinds were in every corporate face. By this year the markets have sorted out the followers from the leaders. HP stepped away from OS software and has created a firm where sales of its Enterprise unit have gone flat.

Stock buybacks offer a mixed bag of results. Sometimes the company doing the buyback simply doesn't have the strength, or a bright enough future, to reap much benefit for itself. Shareholders love them, though. The customers are a secondary concern at times.

The $8 billion probably seemed like a good idea at the time, considering the misguided acquisitions of the Leo Apotheker era that followed. (A deal for 3Par comes to mind, where a storage service vendor recently noted that it was Dell that drove up the 3Par acquisition price by pretending to bid for it.) The trouble with stock buybacks is that just about nobody can stop them. Shareholders are always happy to have shares rise, either on the news of the buyback or the upswing over the next quarter.

10:39 PM in History, News Outta HP | Permalink | Comments (0)

September 17, 2018

Planning to migrate has been the easy mile

3000 owners have made plans for many years to leave the platform. The strategies do take a considerable while to evolve into tactics, though. The planning stage is an easy place to get stopped, like an elevator jammed at a floor.

For example, take a company like the one in the deep South, using HP 3000s and manufacturing copper wire and cable. The manager would rather not name his employer and so we won't, but we can say the 3000 is dug in and has been difficult to mothball.

In fact, the only immediate replacement at this corporation might be its storage devices. The datacenter employs a VA7410 array.

We do have to replace a drive now and then, but there hasn't been any problem getting used replacements, and we haven't suffered any data loss. I think if we were planning to stay with MPE for the long term, we might look for something newer, but we are planning to migrate. In fact we planned to be on a new platform by now, but you know how that goes.

More companies than you'd imagine know how that goes in 2018. We're nearing the end of the second decade of what we once called the Transition Era. The final mile of that journey can be the slowest, like the path of the postman who must carry the mail on foot through urban neighborhoods.

01:25 PM in Homesteading, Migration, User Reports | Permalink | Comments (0)

September 14, 2018

Use Command Interpreter to program fast

NewsWire Classic

By Ken Robertson

An overworked, understaffed data processing department is all too common in today’s ever belt-tightening, down-sizing and de-staffing companies.

An ad-hoc request may come to the harried data processing manager. She may throw her hands up in despair and say, “It can’t be done. Not within the time frame that you need it in.” Of course, every computer-literate person knows deep down in his heart that every programming request can be fulfilled, if the programmer has enough hours to code, debug, test, document and implement the new program. The informed DP manager knows that programming the Command Interpreter (CI) can sometimes reduce that time, changing the “impossible deadline” into something more achievable.

Getting Data Into and Out of Files

So you want to keep some data around for a while? Use a file! Well, you knew that already, I’ll bet. What you probably didn’t know is that you can get data into and out of files fairly easily, using IO re-direction and the print command. IO re-direction allows input or output to be directed to a file instead of to your terminal. IO re-direction uses the symbols ">", ">>" and "<". Use ">" to re-direct output to a temporary file. (You can make the file permanent if you use a file command.) Use ">>" to append output to the file. Finally, use "<" to re-direct input from a file:

echo Value 96 > myfile
echo This is the second line >> myfile
input my_var < myfile
setvar mynum_var str("!my_var",7,2)
setvar mynum_var_2 !mynum_var - (6 * 9 )
echo The answer to the meaning of life, the universe
echo and everything is !mynum_var_2.

After executing the above command file, the file Myfile will contain two lines, “Value 96” and “This is the second line.” (Without quotes, of course.) The Input command uses IO re-direction to read the first record of the file, and assigns the value to the variable my_var. The first Setvar extracts the number from the middle of the string, and the next line proceeds to use that value in an important calculation.

How can you assign the data in the second and consequent lines of a file to variables? You use the Print command to select the record that you want from the file, sending the output to a new file:

print myfile;start=2;end=2 > myfile2

You can then use the Input command to extract the string from the second file.
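
Continuing the sketch, with myfile2 created by the Print command above (my_var2 is an illustrative variable name):

input my_var2 < myfile2
echo The second record is !my_var2

The same pattern reaches any record in the file; adjust the ;start= and ;end= values to pick the line you need.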

Rolling Your Own System Variables

It’s easy enough to create a static file of Setvar commands that gets invoked at logon time, and it’s not difficult to modify the file programmatically. For example, let’s say that you would like to remember a particular variable from session to session, such as the name of your favorite printer. You can name the file that contains the Setvars, Mygvars. It will contain the line:

setvar my_printer “biglaser”
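
One hypothetical way to invoke it is a logon UDC, enabled with SETCATALOG, that executes Mygvars at sign-on if the file exists. This is just a sketch; the UDC name here is illustrative:

mystartup
option logon
if finfo("MYGVARS","EXISTS") then
   xeq mygvars
endif
*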

The value of this variable may change during your session, but you may want to keep it for the next time that you log on. To do this, you must replace your normal logoff procedure (the Bye or Exit command) with a command file that saves the variable in a file, and then logs you off.

byebye
purge mygvars > $null
file mygvars;save
echo setvar my_printer "!my_printer" > *mygvars
bye

Whenever you type byebye, the setvar command is written to Mygvars and you are then logged off. The default close disposition of an IO re-direction file is TEMP, which is why you have to specify a file equation. Because you are never certain that this file exists beforehand, doing a Purge ensures that it does not.

07:14 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 12, 2018

Wayback: DX cuts new 3000 price to $7,077

The Series 918DX was going to deliver the 3000's Field of Dreams

If only the HP 3000 were less costly. The price of the system and software was a sticking point for most of its life in the open systems era, that period when Unix and Windows NT battled MPE/iX. HP's own Unix servers were less costly to buy than the 3000s using the same chipset. Twenty-one years ago this season, the cost of a 3000 became a problem HP wanted to solve.

Cheaper 3000s would be a field of dreams. If a developer could build an app, the customers would come.

Now, Hewlett-Packard was not going to cut the cost of buying every HP 3000 in 1997. When developers of applications and utilities made their case about costs, the HP 3000 division at last created a program where creators would get a hardware break. The Series 918DX was going to help sell more 3000s. It would be the only model of 3000 HP ever sold new for under $10,000. A less costly workbench would attract more application vendors.

The list price of the DX was $7,077. Still more than a Unix workstation or a Windows PC of 1997. The thinking of the time came from a new team at the 3000 division, where marketing manager Roy Breslawski worked for new GM Harry Sterling. Removing a cost barrier for small, startup developers was going to open the doors for new applications.

HP simply adjusted its pricing for hardware and software on a current 3000 model to create the DX. The product was a Series 918/LX with 64 MB of memory, a 4GB disk, a DDS tape drive, a UPS, and a system console.

HP included all of its software in the bundle, such as compilers for C, COBOL, FORTRAN, BASIC, Pascal and even RPG. It was all pre-loaded on that 4GB drive: a Posix Developers Kit, ARPA Services, Workload Manager, Glance Plus, TurboStore, Allbase/SQL. No 3000 would be complete without IMAGE/SQL. The harvest was rich for the small development ventures.

The size of the bundled HP software created one of the drags on the DX. HP automatically billed for the support on every program. When developers started to evaluate the offer, the $7,000 hardware came with $14,000 worth of support commitments.

Leasing wasn't an HP option for such an inexpensive server, however. Rental costs would amount to buying it more than once. The vendors who were sensitive to hardware pricing didn't have strong sales and marketing resources. They could build it, but who would come?

So the DX reduced the cost to purchase a 3000, and buying support was always optional for any acquisition of a 3000. But self-maintainers were not as common 21 years ago. Developers learned the way to get the low-cost development tool was to order it stripped of support and then add HP support only for the software they needed.

The 3000 vendor community of 1997 was excited about the prospects, both for new customers who might buy a 3000 as a result of a new app, as well as for their own products. Within a few months, 14 third-party companies offered 30 products either free or at rock-bottom discounts to developers buying the 918DX. The idea was noble and needed, because the 3000 had fallen far behind in the contest to offer apps to companies. One developer quipped that HP would need to equip the system with a bigger disk drive to handle all the available software.

The number of available third-party programs might well have been greater than the number of DX systems ever sold. All HP required of a company was to sign up for the SPP developer program, a free membership. The roadblock, it turned out, wasn't the price of a 3000 for smaller vendors. Although the DX was thin on storage, RAM, and horsepower, the one thing that would've moved the computer to the front of development plans was customers. The 918DX was not going to make orders appear, just the software for them.

In 1997 the ideal of a startup was still new, surrounded by some mystery. The Internet still had a capital letter on the front of the word and keeping costs low was important to savvy developers. Low hardware costs were a benefit. Sterling said the package was designed to draw out development efforts from sources with high interest in the HP 3000 market. "You guys have been telling us this for two years," he said. "I say it's time we try it, and see what happens."

"It was like Christmas in August," said Frank Kelly, co-chair of the COBOL Special Interest Group in the earliest days of the DX offer. Birket Foster, SIGSOFTVEND chairman, said "HP has come forward and done the right thing. We expect this will be a very good developer platform and the right tools will show up for it."

Developers delivered the tools, and some showed up to purchase the DX. The dead weight of not being Unix or Windows, with their vast customer pools, is what pulled down 3000 app availability. By today, those 21 years have delivered a market for a 3000 where an N-Class server, 60 times more powerful than the Series 918, sells for about as much as the DX.

The Field of Dreams doesn't get its future told in the classic 1989 film of the same name. The field does exist, however, a stretch of Iowa farmland near Dyersville where the baseball faithful pay $10 to relive that movie magic. It's a beautiful field, and people do come.

07:17 PM in History, News Outta HP | Permalink | Comments (0)

September 10, 2018

Durable 3000s seek, sometimes find, homes

Earlier this month a notice on the 3000-L mailing list tried to match an old HP 3000 with a new home. Joshua Johnson said he's got a Series 918 LX (the absolute bottom of the 9x8 lineup) that's got to go. It's a good bet this server hasn't been running any part of a business since HP left the support arena.

I have a 918LX that's been sitting around for a while that I'd like to get rid of. It worked when it was last shutdown. I think I still have a bunch of ram for it in a box somewhere. Anyone interested?

Then there was a question about where his HP hardware was sitting. "I’m in Providence RI. It sat in a shed for 10 years. When it was shut down it worked fine. I think I have several memory sticks for it as well."

This was a give-away 3000, the kind that goes for sale on the used market at about $700 in the best case. The Series 918 LX weighs enough that the shipping is going to be the biggest part of that free transaction. The 918 was at the bottom of HP's relative performance ratings, 10.0 on a scale where a Series 37 was a 1.0.

Last week we talked with a 3000 developer who witnessed the shutdown of seven N-Class systems. "They were going to throw them away," he said, because the health care provider had followed its app and moved to Unix. He got the rights to an N-Class and talked the broker who took the rest of the orphaned N-Class systems into trading one for an A-Class server. "The power situation was just too great for me to use the N-Class," he said, referring to the hardware's electrical needs, not the horsepower.

Old 3000s seeking new homes is still news in your community. Sometimes the adoptions feel like they're foster homes, though.

HP's 3000 iron was built to extraordinary standards, or there wouldn't be a Series 918 available to give away in Rhode Island. That's a server built while Clinton was President. In an odd piece of comparison, the N-Class system is 60 times more powerful than the Series 918, but at the end of the line, it had just as much value.

The N-Class and A-Class boxes are newer, of course, and that decision to send them to the scrap-heap might have been wasteful. The durable value of these computers isn't in the hardware whose components age every day. It's in MPE and the applications. 

Holding on to old hardware could be one way to prove that MPE/iX has an evaporating value. Being able to move the apps and the OS onto a newer box puts the brakes on that decline. To be fair, lots of elderly 3000s are able to reboot after a long winter's nap. Our developer who got that A-Class also has a Series 967 in his garage. It had been powered down for more than two years, and it still switched on.

 

08:04 PM in Homesteading, User Reports | Permalink | Comments (0)

September 07, 2018

Queue up those 3000 jobs with MPE tools

NewsWire Classic

By Shawn Gordon

A powerful feature of MPE is the concept of user-defined job queues. You can use these JOBQ commands to exert granular job control that is tightly coupled with MPE/iX. HP first introduced the commands in the 6.0 release.

For example, you only want one datacomm job to log on at a time, but there are 100 that need to run. At the same time you need to let users run their reports, and you want to allow only two compile jobs to run at a time. Normally you would set your job limit down to 1, then manually shuffle job priorities around and let jobs go. In a multiple-job-queue environment, you can define a DATACOMM job queue whose limit is 1, an ENDUSER job queue whose limit is 6 (for example), and a COMPILE job queue whose limit is 2. You could also set a total job limit of 20 to accommodate your other jobs that may need to run.
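
Setting up that scenario takes one NEWJOBQ per queue, plus the overall limit. As a sketch using the queue names from the example (DCJOB1 stands in for one of the hypothetical datacomm jobs):

NEWJOBQ DATACOMM;LIMIT=1
NEWJOBQ ENDUSER;LIMIT=6
NEWJOBQ COMPILE;LIMIT=2
LIMIT 20
STREAM DCJOB1;JOBQ=DATACOMM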

Three commands accommodate the job queue feature:

NEWJOBQ qname [;limit=n]
PURGEJOBQ qname
LISTJOBQ

The commands LIMIT, ALTJOB, JOB and STREAM all include the parameter ;JOBQ=.

As an example, I am going to create a new job queue called SHOWTIME that has a job limit of 1. You will notice the job card of the sample job has a JOBQ parameter at the end to specify what queue it is to execute in.

Alternatively I could have said STREAM SHOWTIME.JCL;JOBQ=SHOWTIME to put it into my job queue. Here’s the coding to do this:

NEWJOBQ SHOWTIME;LIMIT=1

!JOB SHOWTIME,MANAGER.SYS,PUB;JOBQ=SHOWTIME
!
!SETVAR HPAUTOCONT TRUE
!
!SHOWTIME
!
!SHOWCLOCK
!
!SHOWME
!
!SHOWVAR HP@
!
!ECHO !HPDATEF
!ECHO !HPTIMEF
!
!PAUSE 300
!
!EOJ

I just streamed five copies of the job, and using the LISTJOBQ command I am able to see the default system defined job queue HPSYSJQ. I haven’t been able to find out why it indicates a limit of 3500, since my current job limit was 30. [Editor’s Note: Gavin Scott reports that “All job queues have a LIMIT that is separate from the one true system LIMIT. This includes the default HPSYSJQ. The 3500 default is a number large enough that you should never run into the case where the existence of this second, un-obvious, limit on normal jobs affects you.”]

You can see my SHOWTIME job queue with a limit of 1, with one executing and five total jobs, so four are currently in a wait state. This is obvious in the SHOWJOB command below.

listjobq

JOBQ      LIMIT     EXEC  TOTAL

HPSYSJQ   3500      12    12
SHOWTIME  1         1     5

SHOWJOB JOB=@J

JOBNUM  STATE IPRI JIN  JLIST    INTRODUCED  JOB NAME

#J2     EXEC        10S LP       TUE  7:09A  NP92JOB,MGR.MINISOFT
#J3     EXEC        10R LP       TUE  7:09A  BACKG,MANAGER.VESOFT
#J4     EXEC        10S LP       TUE  7:09A  WTRSH,MGR.WTRSH
#J5     EXEC        10S LP       TUE  7:09A  MSJOB,MGR.MINISOFT
#J6     EXEC        10S LP       TUE  7:09A  MASTEROP,MANAGER.SYS
#J7     EXEC        10S LP       TUE  7:09A  VCSSERV,MGR.DIAMOND
#J8     EXEC        10S LP       TUE  7:09A  VCSCHED,MGR.DIAMOND
#J9     EXEC        10S LP       TUE  7:09A  JINETD,MANAGER.SYS
#J10    EXEC        10S LP       TUE  7:09A  JWHSERVR,MANAGER.SYS
#J12    EXEC        10S LP       TUE  7:25A  GUI3000J,MANAGER.SYS
#J19    EXEC        10S LP       TUE  8:08A  BROLMSGJ,JOBS.REVIEW
#J130   EXEC        10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J131   WAIT:1   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J132   WAIT:2   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J133   WAIT:3   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS
#J134   WAIT:4   8  10S LP       TUE  1:06P  SHOWTIME,MANAGER.SYS

16 JOBS (DISPLAYED):
    0 INTRO
    4 WAIT; INCL 0 DEFERRED
   12 EXEC; INCL 0 SESSIONS
    0 SUSP
JOBFENCE= 6; JLIMIT= 30; SLIMIT= 60

Now if I want to increase the job limit for my SHOWTIME job queue, I can use the first command below. The second shows how ALTJOB can move a waiting job into a different queue:

limit +1;jobq=showtime
altjob #j131;jobq=hpsysjq

You will probably notice that there are a number of nice enhancements to ALTJOB and LIMIT in support of the job queues, with uses outside the queues as well. For example, LIMIT now allows you to use a plus or minus value to increase or decrease the number, so you don’t have to use an absolute value. It is common to up the limit by one to allow another job to execute, but previously you had to check the current job limit, change it, then change it back. Now you can just do +1 to let the job launch, as the sketch below shows.
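
For instance, letting one extra job through and then restoring the limit, without ever looking up the absolute value (URGENT.JCL is a hypothetical job):

LIMIT +1
STREAM URGENT.JCL
LIMIT -1

Lowering the limit afterward doesn't disturb jobs that are already executing.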

On the ALTJOB command, you can now specify HIPRI to cause a job to start up immediately and not have to play with limits to let it go. You can also alter the output device of the job. I did find during my tests that altering a job to a queue that had open slots didn’t seem to allow the job to release if you sent it to the system default HPSYSJQ. However, if you sent it to a user-defined job queue that had room left in it for another job to execute, then it would launch immediately.

There is another side benefit of job queues: ensuring that no more than one copy of a job ever logs on. For example, if you have some background job running and you cannot have a second copy running, but there is nothing that prevents it, you could create a job queue for it with a limit of 1 that would keep any extra copies from launching, as sketched below.
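
A minimal sketch of that guard, assuming a hypothetical background job BACKG.JCL:

NEWJOBQ BACKGQ;LIMIT=1
STREAM BACKG.JCL;JOBQ=BACKGQ

A second STREAM into BACKGQ waits in the queue rather than logging on a duplicate copy.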

This is just one example of an extended use of the feature. If you try to purge a job queue that is currently in use, you will receive this message:

Cannot purge job queue as there are jobs
running/waiting in that queue. (CIERR 12251)

If you try to stream a job into a queue that does not exist you will receive the message

JOBQ parameter expected. (CIERR 12255)
Spooler internal error occurred. (CIERR 4522)

The job will be streamed regardless, but it lands in a WAIT state and never starts executing, because there is no queue for it to execute in. At this point you can't abort it, you can't create the queue it was intended for and have it work, and you can't alter it into the system job queue, because of the problem described earlier. You can try creating a new queue and altering the job into it; LISTJOBQ will show it as a job for that queue, but it will never start executing. The only way to get rid of the job is to shut down the system and do a START NORECOVERY.

06:49 PM in Hidden Value, Homesteading | Permalink | Comments (0)

September 05, 2018

Where's the lure to launch into the cloud?

We’ve talked about it here before. Is there any genuine interest from 3000 owners and managers in getting their servers migrated into the cloud? In the most common scenario today, an adequately powered Amazon or Rackspace server, or even something like a Google host, or something from Oracle, becomes the IT datacenter floor. Amazon will even sell a cloud server that only spins up when accessed. It's all billed by the hour, the day, or the amount of time connected.

For MPE/iX systems, this is only possible using a Charon install for MPE. Stromasys, which sells Charon, has mentioned the possibilities for using the cloud. A notice this week announced the company is exhibiting Charon at the Gulf Information Technology Exhibition next month in Dubai. The GITEX news noted that Charon has a cloud option, saying the software is available in the cloud or on premise.

Most important for these virtual 3000s is the servers' horsepower. Doug Smith of Stromasys checked in with some upcoming Charon 3000 news and noted that 4 GHz is the CPU low bar for running Charon as fast as HP's native PA-RISC hardware.

By 2018 there's now very little hardware tuning that cannot be done if the host is up in the cloud. 3000 expertise of today works from a laptop far removed from the manufacturing or distribution floor. So what's the lure to launch an MPE server into the cloud? I think cloud’s big edge has got to be low cap-ex and assured hardware evolution.

For example, if you buy a $19,000 Intel server in a rack, attach it to fast storage, and it runs Charon, well, you’re set. Somewhere in the future, of course, you might need more throughput and CPU. That $19K server has to be farmed out to another task if you can't upgrade it. If the host itself were cloud-based, more horsepower is one reconfiguration order away.

It's in this scenario that a company which uses a virtual partition for a Charon Linux host might have a chance at containing long term hardware costs. Virtualized Linux could induce some drag on performance. That's why Stromasys only sells servers that are configured by Smith. Many 3000 software vendors have customers using the emulator.

So far, nobody's raised their hand to say they're putting a 3000 into the cloud like this.

When you think about it, “Cloud-based 3000” sounds a lot like the timesharing of the 1980s, doesn’t it? The uptime service guarantee is “It’s somebody else’s concern to keep my MPE hardware backed up and running without MPE errors.” 

The first place I ever worked while reporting on HP 3000s was Wilson Publications in Austin. We used a subscriber database hosted on a Series 42 down at a printing company. We dialed up using PC2622 software from Walker Richer & Quinn. I guess we were working on 3000s in the cloud in 1984. That might be one lure to launching into the cloud for MPE: it's been done before.

05:04 PM in Homesteading | Permalink | Comments (0)

September 03, 2018

The Labors of 3000 Love

Here in the US we celebrate Labor Day today, a tribute to the wages and benefits that workers first guaranteed during the labor movement of the 20th Century. It's a holiday with most offices closed, but much labor goes on in the shops and boutiques across towns like our Austin and elsewhere.

Homesteading 3000 customers face labors of their own, and they often seem to struggle for respect from the departed members of the 3000 computer community. Homesteading work is no less crucial than the heavy lifting of migration, although there's far less of that latter movement going on by now.

If you were lucky enough to have a holiday today, thank your precursors in the labor unions. Those organizations are becoming as derided now as 3000 customers who stick with the platform and polish MPE skills. Unions protected the middle class, though. A lot like a 3000 protected a company from the expensive server churn of cheap Windows PCs, or the steep outlay for mainframes. For a good look at what labors a homesteader should work on, see Paul Edwards' homesteading primer.

Homesteading tasks are little changed by now, although the hardware from HP and the media need a closer watch. That's a DIY task a homesteader might not prepare for. Many customers have moved the labor of their 3000 support to third parties.

05:08 PM in Homesteading | Permalink | Comments (0)