
March 31, 2014

On the Inclusive Nature of Modern Paper

By Ron Seybold

A friend of mine recently took up a temporary job. It’s that kind of economy in places, even in Texas of 2014, and this job is the third that he’s working at once. But it’s his only full-time job, and this one gets him into a commute before daybreak. For years, we met early Wednesday mornings for breakfast tacos, but now it’s coffee in the evenings for us. My friend is working for a certain government agency that every one of us Americans has a relationship with, one that he doesn’t want me to mention by name. All he wants me to say here, with a wink, is “This is their busiest season.” And here in Austin, the agency has temp jobs available for a few months.

Every minute of those temp jobs is based on the durable value of paper.

Of all the documents that wage earners, retirees and businesses must transmit here in the American springtime, one out of every four is sent via paper. The government would rather not see this continue to be true. But some people are still true to their paper. We’ve been true to paper here at the Newswire, too — now for 150 printed issues.

There are other ways to communicate and learn about what we all do, what we spend and how we budget, the stories that we tell in reports to an agency or to each other about our earnings, our revenues, and our estimates for the future. But in the end, the most fundamental trust — as well as the means to include everybody in telling these stories — relies on paper.

And oh my, even well into the 21st Century, does my friend ever have stories about his early mornings with paper.

He talks of cardboard letter trays and plastic mail tubs, bins and crates and metal racks, all jammed with envelopes. A workday, he says, lived among the sizes that I’ve come to know in my own career of paper: the No. 10 envelope, the 9x12, by now the Tyvek, and even greeting card envelopes. All opened while using talc-lined latex gloves. He talks of staplers rated to punch together 80 pages at a time, the motorized and manual letter openers, plus hand-wielded staple pullers to rearrange the forms just so. My pal rattles off their four-digit numbers like they were MPE commands, instructions he has memorized like some newbie 3000 operator — back in the days when there were such things as operators.

He says the forms stream in around the clock, sounding to me like so many HP 3000 jobs, scanned with the oldest of old-school skills: eyeballed by dozens of temps in a room that sounds bigger than any datacenter a 3000 ever occupied. He tells me that when he walks into that room with its hung ceiling tiles, the floors thump while he crosses them. Sounds like classic datacenter design to me, with raised flooring and ceilings for cables.

But then he says this governmental room has fewer than a dozen keyboards across more than 150 desks. It's manual work, and he adds with a grin, "The greatest part of it is that anything that people did in the 1970s with records is still being done today, by us."

That must sound familiar to a 3000 developer, veteran, vendor or manager reading this. See, they all know that paper’s got backward compatibility as well as security that no nouveau computer system can ever match. Where do all those forms go? Well, keyed into acres and acres of hard drives, their data tapped in once they’re extracted from envelopes. But the forms themselves live in warehouses. “For heaven knows how long,” says my breakfast pal.

“Heaven knows how long” could be words to swear by in our 3000 world. People attempt to estimate when they can migrate, and then begin. The process can take a matter of months or the better part of a decade. But the equivalent of those paper documents, the redoubtable 3000, churns onward just like those 1-in-every-4 government forms that fill my pal’s paper mornings.

10:34 AM in Newsmakers | Permalink | Comments (0)

March 28, 2014

MPE's dates stay at home on their range

2028 is considered the afterlife for MPE/iX, and MPE in general, based on a misunderstanding of the CALENDAR intrinsic. The operating system was created in 1971, and its builders represented dates in 16 bits, a state-of-the-art design for its day. Vladimir Volokh of VESOFT called to remind us that the choice of the number of bits for date representation probably seemed more than generous to a '71 programmer.

"What could anyone want with a computer from today, more than 50 years from now?" he imagined the designers saying in a meeting. "Everything will only last five years anyway." The same kind of choices led everybody in the computer industry to represent the year in applications with only two digits. And so the entire industry worked to overcome that limitation before Y2K appeared on calendars.

This is the same kind of thinking that added eight games to the Major League Baseball schedule more than 50 years ago. Now these games can be played on snowy baseball fields, because March 29th weather can be nothing like the weather of, say, April 8 in northern ballparks.

Testing the MPE/iX system (whether on HP's iron, or an emulator like CHARON) will be a quick failure if you simply SETCLOCK to January 1, 2028. MPE replies, "OUT OF RANGE" and won't set your 3000 into that afterlife. However, you can still experience and experiment with the afterlife by coming just to the brink of 2028. Vladimir says you can SETCLOCK to 11:59 PM on December 31, 2027, then just watch the system roll into that afterlife.

It goes on living, and MPE doesn't say that it's out of range, out of dates, or anything else. It rolls itself back to 1900, the base-year those '71 designers chose for the system's calendar. And while 1900 isn't an accurate date to use in 2028, 1900 has something in common with Y2K -- the last year that computers and their users pushed through a date barrier.
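The arithmetic behind that rollback is easy to sketch. CALENDAR packs its 16 bits as a 7-bit year counted from 1900 plus a 9-bit day of the year, so 1900 + 127 = 2027 is as far as the year field reaches. A few lines of shell (just an illustration of the math, not anything that runs on MPE) show the wraparound:

# The CALENDAR year is a 7-bit offset from 1900; mask to 7 bits to see
# what happens when 2028 arrives.
for year in 2026 2027 2028; do
  stored=$(( (year - 1900) & 127 ))
  echo "$year is stored as $stored and reads back as $(( 1900 + stored ))"
done
# 2026 -> 2026, 2027 -> 2027, 2028 -> 1900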

The days of the week are exactly the same for dates in 1900 as for the year 2000, Vladimir says. "It's ironic that we'll be back to Y2K, no?" he asked. VESOFT's MPEX has a calendar function to check such similarities, he added.

The MPE/iX system will continue to run in 2028, but reports which rely on dates will print incorrectly. That's probably a euphemism, that printing, 14 years from now. But it's hard to say what will survive, and for how long. Or as Vladimir reminded us, using a quote from Yankee baseball great Yogi Berra, "It's tough to make predictions, especially about the future."

The year 2028 was 57 years into the future when the initial MPE designers chose the number of bits to represent CALENDAR dates. Who'd believe it might matter to anyone? "Will Stromasys continue to run after 2028?" asked one ERP expert a few years back during a demo. "Just as well as MPE will run," came the reply, because CHARON is just a hardware virtualization. The operating system remains the same, regardless of its hosting.

And as we pointed out yesterday, one key element of futuristic computing will be having its own date crisis 10 years after MPE's. Linux has a 2038 deadline (about mid-January) for its dates to stop being accurate. Linux-based systems, such as the Intel servers that cradle CHARON, will continue to run past that afterlife deadline. And like the Y2K days of the week that'll seem familiar in MPE's 2028, an extension for Linux date-handling is likely to appear in time to push the afterlife forward.
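That Linux deadline comes from the signed 32-bit time_t counter, which runs out of room 2,147,483,647 seconds after the 1970 epoch. Any box with GNU date can show the exact moment (on BSD-flavored systems the flag is -r rather than -d):

# The last second a signed 32-bit time_t can hold:
date -u -d @2147483647
# Tue Jan 19 03:14:07 UTC 2038

Widening time_t before that second arrives is the push-forward the Linux community expects.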

Perhaps in time we can say about that push-it-forward moment, "You could look it up." Another quote often misunderstood, like the 2028 MPE date, because people think Berra said that one, too. It's not him, or the other famous king of malapropisms Casey Stengel. You Could Look It Up was a James Thurber short story, about a midget who batted in a major league game. Fiction that became fact years later, when a team owner used the stunt in a St. Louis Browns ballgame by batting Eddie Gaedel. You never know what part of a fantasy could come true, given enough time. Thurber's story only preceded the real major-league stunt by 10 years. We've still got more than 13 years left before MPE's CALENDAR tries to go Out of Range.

02:47 PM in Hidden Value, Homesteading, Newsmakers | Permalink | Comments (0)

March 27, 2014

Beyond 3000's summit, will it keep running?

Guy Paul (left) and Craig Lalley atop Mt. Adams, with their next peak to ascend (Mt. Hood) on the horizon.

If you consider the last 40 years and counting to be a steady rise in reputation elevation for the HP 3000 and MPE -- what computer's been serving business longer, after all? -- then 2027 might be the 3000's summit. A couple of 3000 experts have climbed a summit together, as the photo of Guy Paul and Craig Lalley above proves. What a 3000 might do up there in 20 years prompted some talk about 2027 and what it means.

The two 3000 veterans were climbing Washington state's second-highest mountain, Mt. Adams, whose summit is at 12,280 feet. Paul and his 14-year-old grandson had just made the summit when they ran into Lalley and his 14-year-old son on their way to the top.

The trek was announced on the 3000 newsgroup last year. At the time, some of the group's members joked that a 3000 could climb to that elevation if somebody could haul one up there. "Guy is a hiking stud," said his fellow hiker Lalley. "Rumor has it that Guy had a small Series 989 in his back pack. I wasn't impressed until I heard about the UPS."

After some discussion about solar-powered computing, someone else said that if it was started up there on Mt. Adams with solar power, the 3000 would still be running 20 years later.

Then a 3000 veteran asked, "But won't it stop running in 2027?" That's an important year for the MPE/iX operating system, but not really a date of demise. Such a 3000 -- any MPE/iX system -- can be running in 20 years, but it will use the wrong dates. Unless someone rethinks date handling before then.

Jeff Kell, whose HP 3000s at the University of Tennessee at Chattanooga stopped running in December after a post-migration shutdown, added some wisdom to this future of date-handling.

"Well, by 2027, we may be used to employing mm/dd/yy with a 27 on the end, and you could always go back to 1927. And the programs that only did "two-digit" years would be all set. Did you convert all of 'em for Y2K? Did you keep the old source?"

Kell added that "Our major Y2K issue was dealing with a "semester" which was YY01 for fall, YY02 for spring, and so forth. We converted that over to go from 9901 (Fall 1999) to A001 (Fall 2000), so we were good for another 259 years on that part. Real calendar dates used 4-digit years (32-bit integers, yyyymmdd)."
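Kell's scheme is simple to sketch, under the assumption that the leading letter advances one step per decade (A for the 2000s, B for the 2010s, and so on), which is what stretches the encoding out to 2259:

# Encode a term the way Kell describes: two-digit years through 1999,
# then a letter-plus-digit prefix starting at A001 for Fall 2000.
year=2014; term=01                        # 01 = fall, 02 = spring
if [ "$year" -lt 2000 ]; then
  prefix=$(printf '%02d' $(( year % 100 )))
else
  letters=ABCDEFGHIJKLMNOPQRSTUVWXYZ
  letter=$(echo "$letters" | cut -c $(( (year - 2000) / 10 + 1 )))
  prefix="$letter$(( year % 10 ))"
fi
echo "$prefix$term"                       # prints B401; Z901 (Fall 2259) is the last slot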

At that summit, Paul said, the two climbers "talked for a few minutes and we made tentative plans to climb Oregon's tallest mountain, Mt. Hood, pictured in the background. We have since set a date of May 16th."

We've written before on the effects of 2027's final month on the suitability of the 3000 for business practice. Kell's ideas have merit. I believe there's still enough wizardry in the community to take the operating system even further upward. The HP iron, perhaps not so much. By the year 2028, even the newest servers will be at least 25 years old. Try to imagine a 3000 that was built in 1989, running today.

Better yet, please report to us if you have such a machine, hooked up in your shop.

Why do people climb mountains? The legend is that the climber George Mallory replied, "because it is there." 2028 is still there, waiting for MPE to arrive. Probably on the back of some Intel-based server, bearing Linux -- unless neither of those survives another 14 years. For Intel, this year marks 15 years of service for the Xeon processor, currently on the Haswell generation. Another 25 years, and Xeon will have done as much service as MPE has today.

There is no betting line on the odds of survival for Xeon into the year 2039. By that date, even Unix will have had its own date-handling issue. The feeling in the Linux community is that a date solution will arrive in time.

04:44 PM in Hidden Value, Homesteading, User Reports | Permalink | Comments (0)

March 26, 2014

Twice as many anti-virals: not double safety

Editor's note: While 3000 managers look over the need to update XP Windows systems in their company, anti-virus protection is a part of the cost to consider. In fact, extra anti-virus help might offer a stop-gap solution to the end of Microsoft's XP support in less than two weeks. A lack of new security patches is part of the new XP experience. Migrating away from MPE-based hosting involves a lot more reliance on Windows, after all. Here's our security expert Steve Hardwick's lesson on why more than one A/V utility at a time can be twice as bad as a single good one.

By Steve Hardwick, CISSP
Oxygen Finance

If one is good, then two is better. Except with anti-virus software.

When it comes to A/V software there are some common misconceptions about capabilities. Recently some vendors, such as Adobe, have started bundling anti-virus components as free downloads with their updates. Some managers believe if you have one anti-virus utility, a second can only make things safer. Once we look at how anti-virus software operates, you'll see why this is not the case. In fact, loading a second A/V tool can actually do more damage than good.

The function of an anti-virus utility is to detect and isolate files or programs that contain viruses. There are two fundamental ways in which the A/V utility does this. The anti-virus program will have a data file that contains signatures for known viruses. First, any files that are saved on the hard drive are scanned for signatures to see if they contain malicious code. This is very similar to programs that search for fingerprints. Once the A/V utility finds a match, the file is identified as potentially dangerous and quarantined to prevent any infection. Second, the anti-virus utility will intercept requests to access a file and scan it before it is run. This requires that the anti-virus program can inspect the utility prior to it being launched.
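To make that fingerprint idea concrete, here is a toy sketch of signature scanning in shell. Real engines match byte patterns inside files rather than whole-file hashes, and the paths and the bad_hashes.txt list here are invented for the example; it assumes GNU coreutils' sha256sum.

# Hash each file and quarantine anything whose digest appears in a
# list of known-bad digests, one digest per line in bad_hashes.txt.
for f in /srv/uploads/*; do
  digest=$(sha256sum "$f" | awk '{print $1}')
  if grep -q "$digest" bad_hashes.txt; then
    echo "quarantining $f"
    mv "$f" /srv/quarantine/
  fi
done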

Anti-virus designers are aware that their utility is one of the primary targets of a hacker. After all, if the hacker can bypass the A/V system, the machine is open to attack, a state commonly referred to as being owned or pwned. So a core component of the A/V system is to constantly monitor its own performance to make sure it has not been compromised. If the A/V system detects that it is not functioning correctly, it will react as if there is a hacking attack and try to combat it.

So here's what happens if two anti-virus programs are loaded on the same machine. Initially, there are issues as the second system is installed. When the second utility is loaded it contains its own database of known virus signatures. The first anti-virus will see that signature file as something highly dangerous. After all, it will look like it contains a whole mass of virus files. It will immediately stop it from being used and quarantine it. Now the fun starts -- fun that can drive a system into a ditch.

The second anti-virus program will react to the quarantine of its signature file. The second A/V does not know if the issue is another A/V, or a hacker trying to thwart the operation of the system. So it will try to stop the quarantine action of the first A/V. The two systems will battle until one of them gives up and the other wins, or the operating system steps in and stops both programs. Neither outcome is what you're after.

If the two systems do manage to load successfully -- in many cases anti-virus programs are now built to recognize other A/V systems -- then a second battle occurs. When a file is opened, both A/V systems will try to inspect it before it is passed to the operating system for processing. As one A/V tries to inspect the file, the second one will try to stop the action. The two A/V systems will battle it out to take control and inspect the file ahead of each other.

Even if multiple systems do acknowledge each other and decide to work together, there are still some issues left. When a file is accessed, both systems will perform an inspection, and this increases the amount of time the virus scan will take. What's more, the anti-virus programs continually update their signature files. Once a new signature file is loaded, the A/V program will kick off a scan to see if the new file can detect any threats the old one did not catch. In most cases, new signature files arrive daily to the A/V system. That means both systems will perform file scans, sometimes simultaneously. This can bring a system to its knees -- because file scanning can be CPU intensive.

So two is worse than one, and you want to remove one of them. Removing A/V programs can be very tricky. This is because one goal of the hacker is to disable or circumvent the anti-virus system. So the A/V system is designed to prevent these attempts. If A/V programs were easy to uninstall, all the hacker would have to do is launch the uninstall program -- and in many cases, the A/V manufacturer does provide an uninstall program. Unfortunately, in many cases that uninstall may not get rid of all of the elements of the A/V. Several of the A/V manufacturers provide a utility that will clean out any remnants, after the A/V system has been initially uninstalled.

So are there any advantages to having a second A/V system running? There is always a race between A/V companies to get out the latest signatures. Adding more A/V providers may increase your chances of getting wider coverage, but only very marginally. The cost of the decreased performance versus this marginal increase in detection is typically not worth it. Over time, A/V vendors tend to even out in their ability to provide up-to-date signature files.

In summary, the following practices make up a good approach to dealing with the prospect of multiple A/V systems.

1) Read installation screens before adding a new application or upgrade to your system. Think carefully before adding an A/V feature that your current solution already provides. Even if a new feature is being offered, it may be worth checking with your current provider to see if they have that function, and adding it from them instead.

2) If you do get a second A/V system in there and you want to remove it, consult the vendor's technical web site regarding removal steps. Most A/V vendors have a step-by-step removal process. Sometimes they will recommend a clean-up tool after the initial uninstall.

3) If you do want to check your A/V system, choose an on-line version that will provide a separate scan without loading a utility. There are many to choose from -- search on “on-line antivirus check” in your favorite engine and pick one that is not your primary A/V vendor. Be careful -- something online may try to quarantine your current A/V system. But this will give you a safe way to check if your current A/V is catching everything.

4) Don't rely on A/V alone. Viruses now come in myriad forms. No longer are they simple attacks on operating system weaknesses. Newer ones exploit the fallibility of browser code and are not dependent on the operating system at all. A good place to start looking at how you can improve your security is the CERT tips page at https://www.us-cert.gov/ncas/tips. By following safe computing practices, one A/V should be sufficient.

5) Beware of impostors. There are several viruses out there that mimic an A/V system. You may get a warning saying that your system is not working and to click on a link to download a better A/V system. Before clicking on the link, check the source of the utility. If you don't know how to do that, don't click on the link. You can always go back to Step 3 and check your A/V yourself.

01:38 PM in Migration, Newsmakers | Permalink | Comments (0)

March 25, 2014

How to Delete All But the Last 5 Files

On our Series 937 I need a routine that will delete all but the last five files in a group; the files begin with certain values and have a certain pattern to their names.

Example: We keep old copies of our PowerHouse dictionaries, but only need the last five. I cannot do it by date like other groups of files, since the dictionaries do not get changed every day. Sometimes we'll go weeks, even months before we make a change.

I have a routine for other groups of files (interface files) that get created every day and keep only the last 31 days. This is done very easily with VESOFT’s MPEX by simply checking the create date. I was wondering if anyone has a routine either in JCL or MPEX that will keep the last 5 instances of these files. The two file-naming conventions are PT###### and PL######. The ###### represent MMDDHH (month, day, hour).

A wide range of solutions emerged from HP 3000 experts, veterans and consultants.

Francois Desrochers replies

How about doing a LISTF and using PRINT to select all but the last 5 into another file (PTPURGE):

LISTF PT######,6;PTFILES
PRINT PTFILES;END=-6;OUT=PTPURGE

You could massage PTPURGE and turn each line into a PURGE. It has been a while since I used MPEX, but maybe it has an indirect file function e.g. %PURGE ^PTPURGE.

Of course MPEX has such a function. Vladimir Volokh of VESOFT supplied an elegant solution involving a circular file, a feature added to MPE/iX more than 15 years back. 

First, build an MPE circular file (to do this, look at HELP BUILD ALL). The nearly 1,000 lines that will follow include an explanation of the CIR parameter. We use logfiles below in our example.

PURGE V
BUILD V; CIR; DISC=5
FILE V, OLD
LISTF LOG####, 6;*V

(MPE's asterisk, by the way, can be used in about 19 different ways, Vladimir adds.)

The result in V will be your last five file names. Now you purge, using MPEX -- because purging something minus something is an MPEX-only function. (The caret sign is a way to reference all the files named in the file V.)

%PURGE LOG####-^V

There are other solutions available that don't require a third-party gem like MPEX. 

Olav Kappert replied

This is easy enough to do. Here are the steps:

Do a listf into a file 'foo'

Set 'count' = end-of-file count of 'foo'
Set 'maxindex' to 'count' - 5
Set 'index' to 0
Read 'foo'
Increment 'index' by 1
If 'index' > 'maxindex' then stop
Purge the file named in that record
Loop to read 'foo'

The exact syntax is up to you and MPE.

Barry Lake adds

Very simple if you're willing to use the Posix shell. If this needs to be done with CI scripting, it's certainly possible, but way more complicated. Someone else may chime in with an "entry point" command file to do this in "pure" MPE. But here's the shell method:

[Screenshot: Posix Shell Delete Last 5]
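Lake's screenshot doesn't reproduce here, but a minimal sketch of the approach he describes might look like this (the keep5 directory and the PT[0-9]* pattern are illustrative; run it from the directory that holds the files):

# Stash the five newest files, purge the rest, then restore the five.
mkdir keep5
for f in $(ls -t PT[0-9]* | head -n 5); do
  mv "$f" keep5/
done
rm -f PT[0-9]*
mv keep5/* .
rmdir keep5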

So... move the last 5 out of the way, delete whatever's left, then move the 5 back into place.

05:39 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 24, 2014

40 years from a kitchen-size 3000 to 3.4GHz

Forty years ago this spring, the HP 3000 was just gaining some traction among one of its core markets: manufacturing. This was a period when the computer was big enough to take over kitchen space in a software founder's home, according to an HP software VP of the time. That server didn't run reliably, and so got plenty of attention from the software labs of that day's Hewlett-Packard. And if you were fortunate, a system the size of two tall-boy file cabinets could be yours for $99,500 in a starter configuration, with 96KB of core memory.

MPE was so new that Hewlett-Packard would sell the software unbundled for $10,000. The whole collection of server and software would burn off 12,000 BTU per hour. HP included "cooling dissipation" specs for the CX models -- they topped off at a $250,000 unit -- so you could ramp up your air conditioning as needed in your datacenter. (Thanks to the HP Computer Museum for the details).

Those specs and that system surfaced while I wrote the Manufacturing ERP Options from Windows article last week. Just this week I rolled the clock forward to find the smallest HP 3000 while checking on specifications. This 2014-era 3000 system runs off an HP DL380 server fired by a 3.44 GHz chip. It's plenty fast enough to handle the combo of Linux, VMWare and the Stromasys CHARON 3000 emulator. And it's 19 inches wide by 24 deep by 3.5 high.

We've heard, over the past year from Stromasys tech experts, that CPUs of more than 3 GHz are the best fit for VMWare and CHARON. It's difficult to imagine the same operating system that would only fit on a 12,000 BTU server surviving to run on that 2U-sized DL380. The newest Generation 8 box retails for about one-tenth of the cost of that '74 HP3000 System CX server unit. But the CX was all that ASK Computer Systems had to work with, 40 years ago. And HP needed to work with ASK just to bring MPE into reliable service. "It didn’t work worth shit, it’s true," said Marty Browne of ASK. "But we got free HP computer time."

The leap in technology evokes the distinction between a Windows ERP that will replace ASK's MANMAN, and other choices that will postpone migration. Especially if a company has a small server budget, enough time to transfer data via FTP or tape drive -- and no desire to revise their manufacturing system. What started in a kitchen has made its transition to something small enough to look like a large briefcase, a thousand times more powerful. Users made that happen, according to Browne and retired HP Executive VP Chuck House.

The last time I saw these two in a room together, the No. 2 employee at ASK and HP's chief of MPE software management had a touching exchange over the roots of MANMAN -- an application that's survived over four decades. (No. 1 at ASK would be the Kurtzigs, Arie and Sandy. It's always been a family affair; their son Andy leads Pearl.com, a for-pay Q&A expert site.)

At the HP3000 Software Symposium at the Computer History Museum, Browne said that if the 3000 had failed to take root, ASK would have been hung out to dry.

Marty Browne: It used to be so expensive to buy computer time to do development work. And it was so much better a deal for me to do this 3000 development. I was able to put several years of engineering work into my product before I ever sold it. I could not have afforded that since I was bootstrapping my business.

Chuck House: Let me add that was true for Sandy too. She got a free HP 3000 for her kitchen. 

Browne: It was not in the kitchen. We had the first HP 3000 on the computer floor at HP. Did you say kitchen?

House: Correct.

Browne: Yes, we got an HP 3000. We had to work at night, by the way.

House: But it was free time.

Browne: It was free time. It didn’t work worth shit. It’s true. But we got free HP time.

House: No, we used you to debug.

Browne: Pardon me?

House: You were our debuggers.

Browne: Yes, right. HP provided an open house in a lot of ways, I mean that’s part of the HP culture. They were good partners. HP is an excellent partner.

Moderator Burt Grad: So if the 3000s had not been able to sell, you would have been hung out? 

Browne: Yes.

Why is this history lesson important today? You might say that whatever MANMAN's bones were built from is sturdy stuff. Customization, as we noted in that ERP article, makes MANMAN sticky. Robert Mills commented to clarify that after I posted the article.

MANMAN could be customized and added to by the customer because they were given full documentation on the system. ASK would, for a reasonable cost, make modifications to standard programs and supply you with the source code of the modified programs. Even MM/3000 had a Customizer that allowed you to make database and screen changes. Can you do this with MS Dynamics and IFS? Will Microsoft and IFS allow this, and give you the information required?

The answer to the question might be just a flat-out no, of course not. Just as HP stopped selling MPE unbundled, Microsoft and IFS don't customize their applications. But partners -- some perhaps the equivalent of Marty Browne, albeit of different skill -- would like to do that customization. It's just that this customization in the modern era, which would run on the same DL380, would come after host environment transfer, plus work configuring and testing the apps and installation of a new OS. Then there's the same transfer of data, no small task, which is about the only task these options have in common.

If a migration away from the HP 3000 for ERP is essential, that change could cost as much as that 1974 CX server did. This is one reason why still-homesteading companies will work hard to prove they need that budget. A $2,000 DL380 and disks plus CHARON might be more cost-effective and less disruptive. How much future that provides is something your community is still evaluating. 

03:56 PM in History, Homesteading, Migration | Permalink | Comments (0)

March 21, 2014

Shadows of IT Leaders, at HP and Apple

Earlier this week, the Reverend Jesse Jackson made an appearance at Hewlett-Packard's annual shareholder meeting. He used the occasion of a $128 billion company's face-up to stockholders to complain about racial bias. Specifically, Jackson complained that the HP board, by now, should have at least one African American serving on it.

HP's CEO Meg Whitman took respectful note of Jackson's observation, which is true. After 75 years of corporate history that have seen the US eliminate Jim Crow, and the world shun apartheid, HP's board is still a collection of white faces (10 of 12). Hewlett-Packard always had a board of directors, but it didn't become a company with a board answerable to public shareholders until it first offered shares in 1958. We might give the company a pass on its first 20 years, striving to become stable and powerful. But from the '60s onward, the chances and the good people might have been out there. Just not on HP's board, as Jackson pointed out.

But that story about the vendor who created your HP 3000s, MPE, IMAGE and then the systems to replace them all, is incomplete. It's just one view of what Hewlett-Packard has become. In spite of Jackson's accurate census, it overlooks another reality about the company's leadership. HP has become woman-led, in some of its most powerful positions. Whitman had the restraint not to point to that. But she's the second woman over those 75 years to be HP CEO.

Companies with potent histories like HP will always be in the line of fire of misunderstanding. The same sort of thing happened to Apple this week. This rival to HP's laptop and desktop and mobile space was inked over as a company still run by the ghost of its founder Steve Jobs. Like the Jackson measurement of HP's racial diversity at the top, the Ghostly Jobs Apple story needs some revisions. HP's got diversity through all of its ranks right up until you get to the director level. Given what a miserable job the board's done during the last 10 years, it might be a good resume item to say "Not a Member of Hewlett-Packard's Board."

Regarding Apple, the misunderstanding is being promoted in the book Haunted Empire. The book that's been roundly panned in reviews might sell as well as the Steve Jobs biography by Walter Isaacson, but for the opposite reasons. Jobs' biography was considered a hagiography by anybody who disliked the ideal of Apple and "Computing for the Rest of Us." He indeed acted like a saint in the eyes of many of his customers, and now that very sainthood is being devolved into a boat anchor by the writer of Haunted Empire. It doesn't turn out to be true, if you measure anything except whether there's been a game-changer like a tablet in the past four years.

Similar things happened to Bill Hewlett and Dave Packard after Hewlett died in 2001, leaving HP with just the "HP Way" and no living founders. Whatever didn't happen, or did, was something the founders would've fought for, or wouldn't have tolerated. The Way was so ingrained into the timber of HP that the CEO who preceded Whitman, Leo Apotheker, imagined there might be a Way 2.0. Tying today to yesterday can be a complicated story. There was an HP Way 0.5, according to Michael Malone's history of HP, Bill and Dave.

And the term "Bill and Dave" is invoked to this day by people as disappointed in 2014's Hewlett-Packard as Jackson is impatient with its diversity. Like the Ghostly Apple story, WWBDD -- What Would Bill and Dave Do -- can be told with missing information. Filling in such missing data is a good way to show you know the genuine HP Way. Or, if you care, the correct state of the Apple Empire.

Since my reader will care far more about WWBDD, let's just move to Malone's book, well-reviewed and available for the cost of shipping alone. Under the section An Army of Owners, he outlines the discounted HP employee stock ownership, one product of the Way.

One of the least-noticed aspects of Hewlett's and Packard's managerial genius was their ability to hide shrewd business strategy inside of benevolent employee programs, and enlightened employee benefits within smart business programs -- often at the same time.

Having so much company stock in the hands of HP employees ultimately meant that Bill and Dave could resist any pressure from Wall Street to substitute short-term gains for long-term success.

Malone goes on to note that employee stock purchasing gave Bill and Dave a great engine to make cash, as well as keep lots of stock out of the hands of institutional investors. That HP Way 0.5 came out of the period when Hewlett and Packard established their business ideals -- then crafted the story about it in a way that was true, but missing some of its most potent context.

When HP employees' stock descended into the nether regions of popularity -- share price plummeting into the $20s and qualification for the program becoming tougher -- Mr. Market of Wall Street started to take over. The board let the vendor chase big markets like PCs, as well as cut down small product lines to make way for a new way of doing business at HP. Bigger was sure to be better, even if it sparked lawsuits and besmirched the HP patina built over all those decades.

But at the same time, the company was getting on the right track with diversity. One of the last general managers the 3000 group had was Harry Sterling. He came out as a gay man before he left his job, and protection for sexual orientation was written into HP's diversity codes.

The New York Times story about Jackson's visit to the meeting emphasized that representation of gender was not Jackson's chosen subject.

HP has a female chief executive, a female head of human resources, and a female chief financial officer, perhaps the largest representation of women in power of any major Silicon Valley company.

Meg Whitman, HP’s chief executive, cited what she said was a long record of civil rights activism on HP’s part. Mr. Jackson noted that HP did not currently have a single African-American on its board. “This board, respectfully, does not look like America.”

Ms. Whitman later said she would meet with Mr. Jackson on the subject.

HP's diversity, like the success of the post-Jobs Apple, is a subject ripe for misunderstanding. Past injuries -- from dropped products, or envy of a supplier that made its fortune on mobile while others did not -- tend to shape beliefs about all that follows. Some 3000 owners will never forgive this vendor for losing its belief in unique platform environments, starting with MPE. Other pragmatists still have an all-HP shop, including decade-old 3000 iron. Leadership changes, sometimes more swiftly than products are eliminated. If Whitman and her board figure that naming an African-American director is part of a new HP Way, they're likely to do so. Directors Shumeet Banerji and Rajiv Gupta would remind Jackson that HP's board already has some diversity. HP isn't supposed to look like America, but to present a world view.

07:43 PM in History, News Outta HP | Permalink | Comments (1)

March 20, 2014

Manufacturing ERP Options from Windows

Even among the companies that host homesteader solutions for manufacturers, there's a sense that the long-term plan will involve Windows rather than MPE. The length of that term varies, of course, depending on the outlook for the current software in place. Customization keeps MPE systems in charge at some very small companies and some large ones (albeit in small spots at those giants, like Boeing).

Moving a 3000 installation away from MANMAN -- first created in the 1970s and, after five ownership changes, still serving manufacturers -- is a skill at The Support Group's Entsgo group. In that TSG practice, the IFS suite is available and can be installed to replace MANMAN, software which began development at a kitchen table in the mid-1970s. (That's if HP's former executive VP of Software Chuck House is to be believed. He said that HP sent a 3000 to ASK CEO Sandy Kurtzig's kitchen when MANMAN, as well as MPE itself, was still being debugged.)

IFS -- which you can read up on at the Entsgo webpages of the Support Group site -- is just one of several replacement applications for manufacturers. Like IFS, Microsoft Dynamics GP has a wide range of modules to cover all the needs of a company using a 3000. Like any replace-to-migrate strategy, there's a lot of customized business logic to carry forward. But that's what a service company like TSG does, in part to keep down the costs of migrating.

TSG's CEO Terry Floyd said the Microsoft solution is battle-tested. Dynamics also happens to be a solution that our homesteading service hoster of a couple of days ago offers as a Windows migration target. Floyd says:

Several companies have converted from MANMAN to MS Dynamics, including one company in SoCal; that was 10 years ago. It's a fairly mature product by now, and had some great features when I checked it out way back when.

Windows used to be anathema to the 3000 IT director, at least when it was considered as an enterprise-grade solution. Those days are long gone -- just as vanished as the sketchy beginnings of MPE itself, from its earliest days, and then again when it became a 32-bit OS in the late 1980s.

So it makes sense that someone who knows the genuine article in ERP, MANMAN, could have a positive review of a Windows replacement -- whether it's Microsoft's Dynamics, or IFS. Floyd said:

There are dozens of viable ERP alternatives now (some industry specific, but many general purpose for all types of manufacturers.) There used to be hundreds. MS Dynamics is not as good as IFS, but choosing Microsoft now is considered as safe as choosing IBM was in the early 1980's. And at least you know they won't get bought by [former MANMAN owner] Computer Associates :)   

Microsoft bought several ERP packages from Europe (one big one from Denmark, as I recall) and merged them together about 2002. They didn't write [that app suite] but they certainly have a viable product and a sizable user base, after this many years into it.

12:23 PM in Migration | Permalink | Comments (1)

March 19, 2014

A year-plus later, Ecometry awaits a map

About one year ago, users of the Ecometry multi-channel software were wondering what the future might be for their software. According to analysts, the software (now joined up with the Escalate Retail system, to add extra Point of Sale power) is being used by 60 percent of the $200 million and under retailers. That's a lot of companies, and some will sound very familiar to the 3000 community. Catalog and website providers like M&M Mars, or retailers with strong online store presences like Hot Topic. These were all part of the Ecometry community that's been folded into a much larger entity.

That would be JDA, a company large enough to join forces with Red Prairie in early 2013. But not large enough to deliver a futures map for the Ecometry customer. These customers have been loath to extend their Ecometry/Escalate installations until they get a read on the tomorrow they can expect from JDA.

The JDA Focus conference comes up in about a month, and right now there's only one certain piece of news. JDA will meet for the last time, via conference call, with the Ecometry/Escalate user group before Focus opens up. That's not extending the contact with customers. There's a total of six meetings, including one meet-and-greet and two updates on enhancement requests.

MB Foster's been tracking the Ecometry situation for years by now. Because it was a Red Prairie partner, it has been a JDA partner since the merger 15 months ago. In all that time, says CEO Birket Foster, no evidence of product planning has emerged from the larger company. JDA is very big, he notes, more than 130 software suites big. Ecometry is just one. JDA is so large they now offer a JDA Eight, which is 30 applications in one bundle.

Foster's view, backed up by work over the years in migrating Ecometry's MPE sites to Ecometry Open, is that anyone who's making a migration to the Open version is in a better place to react to whatever plan JDA might introduce. "I think it's possible there's nobody left in JDA who can even spell MPE, let alone know what it means to Ecometry sites," he said.

Foster would like to assemble a consortium of companies, partnering together, to assist in these migrations to Ecometry Open. In part, that's because the estimate for a migration which the customers get from JDA today runs 6-12 weeks. This is accurate only to the point that it describes installing the Open software on Windows or Unix servers (for SQL Server or Oracle databases) and Windows servers (for the app.) That's the part of the project where JDA (nee Red Prairie, nee Escalate) helps with the software transfer. But it's only a part of the 3000 customer's migration.

"In a lot of people's minds, from the day they decide to go to Ecometry Open, it's just a month or three," Foster said. "That [JDA] estimate is right. They come in, start it up, make sure the menu screens work right, and they're done. We do the integration work for everything else outside of the main Ecometry stuff."

In the JDA practice, the customer is responsible for loading its own data. "If you have any issues with your data, you might have to get more services from [JDA]," Foster said. "Or in this case, from MB Foster" perhaps along with its partners.

But without a clear roadmap from JDA, these customers cannot make plans to migrate with confidence. About a dozen of them are looking into this prospect, but have to work with only those half-dozen Ecometry sessions scheduled on the Focus agenda for the April 28 meeting. "They're scrambling for content, again," Foster said. "We've been addressing Ecometry customers since 2002, standing with [former CEO] John Marrah [of Ecometry] to talk about migration."

Foster's organization practices assessment as an essential prelude to change. Once an assessment has been completed -- in an overall look -- the cost of the migration can be estimated for the Ecometry site within a 30 percent plus-or-minus range. "After a detailed assessment, you should be down to plus or minus 10 percent," he said. Other practices include routing the parts of a migration that can be postponed with no ill effects to a "parking lot," plus good project management and change control.

A migration away from an essential 3000 application like Ecometry should include a solid estimate of the costs. "You can get though a project like this on time and on budget," Foster said. "We don't believe in the philosophy that you should bid low and change-control people to death. We like to be up front and say 'this is what it's going to take.' Some people may not like the number they see, but that's actually what it's going to take."

Pragmatism about processes is really a significant part of the way an Ecometry customer operates, anyway. Foster shared a story about retailers who still ship out catalogs. Really, here well into the 21st Century? "Of course," Foster said the retailer told him. "With a catalog I've got a half-hour of your time on the couch on a Saturday, without the distraction of the Web. Focused on my products for you." One catalog retailer, a needlepoint supply resource, says the majority of its customers are beyond age 65. "That company actually gets a lot of orders from the paper order form in the catalog," Foster said.  

06:22 PM in Migration | Permalink | Comments (0)

March 18, 2014

Customizing apps keeps A500 serving sites

HP's A-Class 3000s aren't that powerful, and they're not as readily linked to extra storage. That's what the N-Class systems are designed to do. But at one service provider's shop, the A500 is plenty powerful enough to keep a client's company running on schedule, and within budget. The staying power comes from customization, that sticky factor which is helping some 3000s remain in service.

The A500 replaced a Series 987 about a year ago. That report is one point of proof that 9x7 systems are still being replaced. It's been almost two decades since the 9x7s were first sold, and more than 15 years since the last one was built. The service company, which wants to remain unnamed, had good experience with system durability from the 3000 line.

We host a group of companies that have been using our system for over 20 years. So, we’re planning on being around for a while. One of these customers may migrate to a Windows-based system over the next few years, but I anticipate that this will be a slow process, since we have customized their system for them over the years.

The client company's top brass wants to migrate, in order to get all of its IT onto a single computing environment. That'd be Windows. But without that corporate mandate to make the IT identical in every datacenter, the company would be happy staying with the 3000, rather than looking at eventual migration "in several years' time." It will not be the speed of the server that shuts down that company's use of an A500. It will be the distinction that MPE/iX represents.

There are many servers at a similar price tag, or even cheaper, which can outperform an A500. HP never compared the A-Class or N-Class systems to anything but other HP 3000s. By the numbers, HP's data sheet on the A-Class lineup lists the top-end of the A500s -- a two-CPU model with 200 MHz chips -- at five times the performance of those entry-level $2,000 A400s being offered on eBay (with no takers, yet). The A500-200-200 tops out at 8GB of memory. But the chip inside that server is just a PA-8700, a version of PA-RISC that's two generations older than the ultimate PA chipset. HP stopped making PA-RISC chips altogether in 2009.

HP sold that 2-way A500 at a list price of just under $42,000 at the server's 2002 rollout. In contrast, those bottom-end A400s had a list price of about $16,000 each. Neither price point included drives or tape devices. Our columnist at the time, John Burke, reported on performance upgrades in the newer A-Class systems by saying:

There is considerable controversy in the field about the A-Class servers in particular, with many people claiming these low-end boxes have been so severely crippled (when compared to their non-crippled HP-UX brothers) as to make them useless for any but the smallest shops. Even if you accept HP's performance rating (and many people question its accuracy), the A400-100-110 is barely faster than the 10-year-old 928 that had become the de facto low-end system.

I see these new A-Class systems as a tacit agreement by HP that it goofed with the initial systems.

The power of the iron is just a portion of the performance calculation, of course. The software's integration with the application, and access to the database and movement of files into and out of memory -- that's all been contributing to the 3000's reputation. "I’ve been working on the HP since 1984 and it’s such a workhorse!" said the service provider's senior analyst. "I've seen other companies that have gone from the 3000 to Windows-based systems, and I hear about performance issues."

Not all migrations to Windows-based ERP, for example, give up performance ground when leaving the 3000 field. We've heard good reports on Microsoft Dynamics GP, a mature set of applications that's been in the market for more than a decade. Another is IFS, which pioneered component-based ERP software with IFS Applications, now in its seventh generation.

One area where the newer products -- which are still making advances in capability, with new releases -- have to give ground to 3000 ERP is in customization. Whatever the ERP foundation might be at that service provider's client, the applications have grown to become a better fit to the business practices at that client company. ERP is a set of computing that thrives on customization. This might be the sector of the economy which will be among the last to turn away from the 3000 and MPE.

10:52 AM in Homesteading, Migration, User Reports | Permalink | Comments (0)

March 17, 2014

Breaching the Future by Rolling Back

Corporate IT has some choices to make, and very soon. A piece of software that's essential to the world's business is heading for drastic changes, the kind that alter information resource values everywhere. Anyone with a computer older than three years has a good chance of being affected. What's about to happen will echo in the 3000 owner's memories.

Windows XP is about to ease out of Microsoft's support strategy. You can hardly visit a business that doesn't rely on this software -- about 30 percent of the world's Windows is still XP -- but no amount of warning from its vendor seems to be prying it off of tens of millions of desktops. On this score, it seems that the XP-using companies are as dug-in as many of the 3000 customers were in 2002. Or even 2004.

A friend of mine, long-steeped in IT, said he was advising somebody in his company about the state of these changes. "I'm getting a new PC," he told my pal. "But it's got Windows 8 on it. What should I do?" Of course, the fellow is asking this because Windows 8 behaves so differently from XP that it might as well be a foreign environment. Programs will run, many of them, but finding and starting them will be a snipe hunt for some users forced into Windows 8. The XP installations are so ubiquitous that IT managers are still trying to hunt them down.

The market sees this, knows all, and has found a solution. It won't keep Windows 8 from being shipped on new PCs. But the solutions will return the look and feel of the old software to the new Microsoft operating environment. One free solution is Classic Shell, which will take users right back to the XP interface. Another simply returns the hijacked Start Button to a rightful place on new Windows 8 screens.

You can't make these kinds of changes in a vacuum, or even overnight. Microsoft has been warning and advising and rolling its deadlines backwards for several years now, but April 8 seems to be the real turning point. Except that it isn't, not completely. Like the 2006-2010 years for MPE and the 3000, the vendor is just changing the value of installed IT assets. It will be making them more expensive, and as time rolls on, less easy to maintain.

The expectation is that the security patches that Microsoft has been giving away for XP will no longer be free. There's no announcement, officially, about the "now you will pay for the patches" policy. Not like the one notice that HP delivered, rather quietly, back in 2012 for its enterprise servers. Security used to be an included value for HP's servers, but today any patch requires a support contract. 

Windows XP won't be any different by the time the summer arrives, but its security processes will have changed. Microsoft is figuring out how to be in two places at once: leading the parade away from XP and keeping customers from going rogue because XP is going to become less secure. The message is mixed, at the moment. A new deadline of 2015 has been announced for changes to the Microsoft Security Engine, MSE.

Cue the echoes of 2005, when HP decided that its five-year walk of the plank for MPE needed another two years' worth of plank. Here's Microsoft saying:

Microsoft will continue to provide updates to their antimalware signatures and Microsoft Security Engine for Windows XP users through July 14, 2015. 

The extension, for enterprise users, applies to System Center Endpoint Protection, Forefront Client Security, Forefront Endpoint Protection and Windows Intune running on Windows XP. For consumers, this applies to Microsoft Security Essentials.

Security is essential, indeed. But the virus that you might get exposed to in the summer of next year can be avoided with a migration. Perhaps over the next 16 months, many of those percentage points of XP's user base will have moved off the OS. If so, they'll still be hoping they don't have to retrain their workforce. That's been a cost of migration difficult to measure, but very real for HP 3000 owners.

Classic Shell, or the $5 per copy Start 8, work to restore the interface to a familiar look and feel. One reviewer on ZDNet said the Classic Shell restores "the interface patterns that worked and that Microsoft took away for reasons unknown. In other words, Classic Start Menu is just like the Start Menu you know and love, only more customizable."

The last major migration the HP 3000 went through was from MPE V to MPE/XL, when the hardware took a leap into PA-RISC chipsets and 32-bit computing. Around that time, Taurus Software's Dave Elward created Chameleon, aimed at letting managers employ both the Classic and MPE/XL command interfaces. Because HP had done the heavy lifting of creating a Classic Mode for older software to run inside of MPE/XL, the interface became the subject of great interest.

But Chameleon had a very different mission from software like Classic Shell. The MPE software was a means to let customers emulate the then-new PA-RISC HP 3000 operating system MPE/XL on Classic MPE V systems. It was a way to move ahead into the future with a gentle, cautious step. Small steps like the ones which Microsoft is resorting to -- a string of extensions -- introduce some caution with a different style.

Like HP and the 3000, Microsoft keeps talking about what the end of XP will look like to a customer. There's one similarity. Microsoft, like HP, wants to continue to control the ownership and activation of XP even after the support period ends. 

"Windows XP can still be installed and activated after end of support on April 8," according to a story on the ZDNet website. The article quotes a Microsoft spokesperson as explaining, "Computers running Windows XP will still work, they just won’t receive any new security updates. Support of Windows XP ends on April 8, 2014, regardless of when you install the OS." And the popular XP Mode will still allow users with old XP apps to run them on Windows 7 Professional, Enterprise and Ultimate.

And just like people started to squirrel away the documentation and patches for the 3000 -- the latter software resulting in a cease-and-desist agreement last year -- XP users are tucking away the perfectly legal "professionals and developers" installer for XP's Service Pack 3, which is a self-contained downloadable executable.

"I've backed that up in the same place I've backed up all my other patch files and installers," said David Gerwitz of CBS Interactive, "and now, if I someday need it, I have it." These kinds of things start to go missing, or just nearly impossible to find, once a vendor decides its users need to move on.

08:03 PM in History, Migration | Permalink | Comments (0)

March 14, 2014

Listen, COBOL is not dead yet, or even Latin

It's been a good long while since we did a podcast, but I heard one from an economy reporting team that inspired today's return of our Newswire Podcasts. The often-excellent NPR Planet Money looked into why it takes so long to get money transferred from one bank to another. It's on the order of 3 days or more, which makes little sense in a world where you can get diapers overnighted to your doorstep by Amazon.

Some investigation from Planet Money's reporters yielded a bottleneck in transactions like these transfers through the Automated Clearinghouse systems in the US. And nearly all automated payments. As you might guess, the Clearinghouse is made of secret servers whose systems were first developed in the 1970s. Yeah, the 3000's birth era, and the reporting devolved into typical, mistaken simplification of the facts of tech. Once COBOL got compared to a language nobody speaks anymore, and then called one that nobody knows, I knew I was on to a teachable moment. Kind of like keeping the discussion about finance and computing on course, really. Then there's a podcast comment from a vendor familiar to the credit union computer owner, a market where the 3000 once held sway.

Micro Focus is the company raising the "still alive" flag highest for COBOL. 

But while every business has its language preferences, there is no denying that COBOL continues to play a vital role for enterprise business applications. COBOL still runs over 70 percent of the world’s business -- and more transactions are still processed daily by COBOL than there are Google searches made.

You might be surprised to hear how essential COBOL is to a vast swath of the US economy -- and just as surprised by the broad-brush summary you'll hear from Planet Money about whether this language is suitable for such work. To be sure, Planet Money does a great job nearly every time out, explaining how economics affects our lives with a lively and entertaining style. They just don't know IT, and didn't dig deep enough this time.

Have a listen to our eight-minute podcast. You can even dial up the original Planet Money show for complete context -- there are some other great ones on their site, like their "We created a t-shirt" series. Then let me know what your COBOL experience seems to be worth, whether you'd like an assignment to improve a crucial part of the US economy, and the last time you had a talk with anybody about COBOL in a mission-critical service.

03:41 PM in Homesteading, Newsmakers, Podcasts | Permalink | Comments (0)

March 13, 2014

Can COBOL flexibility, durability migrate?

In our report yesterday on the readiness for CHARON emulation at Cerro Wire, we learned that the keystone application at that 3000 shop began as the DeCarlo, Paternite, & Associates IBS/3000 suite. That software is built upon COBOL. But at Cerro Wire, the app's had lots of updating and customization and expansion. It's one example of how the 3000 COBOL environment keeps on branching out, in the hands of a veteran developer.

That advantage, as any migrating shop has learned, is offset by the difficulty of finding COBOL expertise ready to work on new platforms -- or of finding a COBOL that does as many things as the 3000's did, and does them in the same way.

OpenCOBOL and Micro Focus remain two of the favorite targets for 3000 COBOL migrations. The more robust a developer got with COBOL II on MPE, however, the more challenge they'll find in replicating all of that customization.

As an example, consider the use of COBOL II macros, or the advantage of COBOL preprocessors. The IBS software "used so many macros and copylibs that the standard COBOL compiler couldn't handle them," Terry Simpkins of Measurement Specialties reported awhile back. So the IBS creators wrote a preprocessor for the COBOL compiler on the 3000. Migrating a solution like that one requires careful steps by IT managers. It helps that there are some advocates for migrating COBOL, and at least one good crossover compiler that understands the 3000's COBOL nuances.

Alan Yeo reminds us that one solution to the need for a macro preprocessor is AcuCOBOL. "It has it built in," he says. "Just set the HP COBOL II compatibility switch, and hey presto, it will handle the macros."

Yeo goes on to add that "Most of the people with good COBOL migration toolsets have created COBOL preprocessors to do just this when migrating HP COBOL to a variety of different COBOL compilers. You might just have to cross their palms with some silver, but you'll save yourself a fortune in effort." Transformix is among those vendors. AMXW supports the conversion of HP COBOL to Micro Focus as well as AcuCOBOL.

Those macros were a staple for 3000 applications built in the 1980s and 90s, and then maintained into the current century, some to this very day. One of the top minds in HP's language labs where COBOL II was born sees macros as a challenge to migrations, though. Walter Murray has said:

I tried to discourage the use of macros in HP COBOL II. They are not standard COBOL, and do not work the same, or don't work at all, on other compilers.  But nobody ever expects that they will be porting their COBOL. One can do some very powerful things with macros. I have no argument there.

COBOL II/iX processes macros in a separate pass using what was called MLPP, the Multi-Language PreProcessor.  As the name implies, it was envisioned as a preprocessor that could be used with any number of HP language products.  But I don't think it was used anywhere except COBOL II, and maybe the Assembler for the PA-RISC platform.

Jeff Kell, whose 3000 shutdown at the University of Tennessee at Chattanooga we've chronicled recently, said macros were a staple for his shop.

In moving to COBOL II we lived on macros. Using predictable data elements for IMAGE items, sets, keys, status words and so forth, we reduced IMAGE calls to simple macros with a minimum of parameters.

We also had a custom preprocessor. We had several large, modular programs with sequential source files containing various subprograms.  The preprocessor would track the filename, section, paragraph, last image call, and generate standard error handling that would output a nice "tombstone" identifying the point in the program where the error occurred.  It also handled terminal messages, warnings, and errors (you could put the text you wanted displayed into COBOL comments below the "macro" and it filled in code to generate a message catalog and calls to display the text).

It's accepted as common wisdom that COBOL is yesterday's technology, even while it still runs today's mission critical business. How essential is that level of business? The US clearinghouse for all automated transfers between banks is built upon COBOL. But if your outlook for the future is, as one 3000 vet said, a staff pool of "no new blood for COBOL, IMAGE, or VPlus," then moving COBOL becomes a solid first step in a migration. Just ensure there's enough capability on the target COBOL to embrace what that 3000 application -- like the one at Cerro Wire -- has been doing for years.

07:38 PM in Migration, User Reports | Permalink | Comments (0)

March 12, 2014

Wiring Up the Details for Emulation

For two-plus years, Herb Statham has been inquiring about the Stromasys CHARON HP 3000 emulator. He first stuck his hand up with curiosity before the software was even released. He's in an IT career stop as Project Manager for Cerro Wire LLC, a building wire industry supplier whose roots go back to 1920. Manufacturing is headquartered in Hartselle, Alabama, with facilities in Utah, Indiana and Georgia.

Statham is checking out the licensing clearances he'll need to move the company's applications across to this Intel-powered solution. The privatization of Dell turns out to be a factor in his timetable. Dell purchased Quest Software before Dell took itself private. By the start of 2014, Dell was still reorganizing its operations, including license permissions needed for its Bridgeware and Netbase software. Cerro Wire uses both.

“I’m after some answers about moving over to a virtual box,” Statham says. "I know CHARON's emulating an A500, but that [Intel] box [that would host it] has four processors on it. I’ve heard what I’m going to have to pay, instead of hearing, 'Okay, you’re emulating an A500, with two processors.' They’re looking more at the physical side.”

This spring is a time of change and new growth for legacy software like Netbase, or widespread solutions such as PowerHouse. While the former's got some room to embrace license changes, the latter's got new ownership. PowerHouse's owner, Unicom Systems, has been in touch with customers over the last few months, and the end of March will mark the projected wrap-up of Unicom's field research. At Cerro, the Quest software is really the only license that needs to be managed onto CHARON, according to Statham.

Cerro Wire's got an A500 now as a result of several decades of 3000 ownership. The company is fortunate enough to have control over its main applications, software based on the DeCarlo, Paternite, & Associates IBS/3000 suite. At the company HQ in Hartselle, Alabama, Statham said turning to the 3000 early meant source code was Cerro's to revamp and extend.

"We have highly customized it, and we’ve written applications around it," Statham says. "When we bought it, source code was part of it. Some of the programs that were written for it now do a lot more than they used to do. Some have been replaced altogether."

The company replicates its data from the Hartselle center to an identical A Series server, including a dedicated VA 7410 RAID array, in Indiana. Netbase was a replication groundbreaker for the 3000 from the late 1980s onward, so it's essential to keeping the MPE/iX applications serving Cerro.

Statham has no pressure from Cerro management to replace the applications that are successful at running the company. With ample spare parts, independent support and storage consulting, and his own source in hand, he needs only the green light from Dell to move forward. Specifics on pricing and performance are still in play from Stromasys, at least from his vantage point. A 1.5 version of CHARON HPA/3000 was announced late last year, promising increased performance. But meeting the speed needs of an A-Class would be no challenge for the CHARON lineup.

This veteran of 3000 deployment and management has little desire to send his company toward an application replacement that might end up with Cerro "spending millions of dollars." There are many years left for MPE/iX, and his company is an all-HP shop, with the exception of a couple of Dell monitors on Statham's desk. He can see a long future for the app the company has fine-tuned to its business.

The CALENDAR intrinsic roadblock is the only one he can forecast for now. He's not sure how HP might react to an independent fix for that issue, a date challenge that's still 13 years away.

"If we could ever get this 2027 thing out of the way, you could run your applications indefinitely, so long as you’ve got someone to support them," he says. "My only concern is HP themselves, in the event that someone said they had a patch to the operating system — and so you didn’t have to worry about the year, because there was some type of workaround."

But Stromasys became an HP Worldwide Reseller Partner last year, so perhaps even that question could be resolved. What nobody can be sure of, at the moment, is whether Dell might want CHARON to be hosted on its server hardware, now that it owns Netbase.

 

06:24 PM in Homesteading, User Reports | Permalink | Comments (0)

March 11, 2014

FIXCLOCK alters software, hardware clocks

My HP 3000 system was still on EST, so I wanted to change it during startup. I answered "N" to the date/time setting at the end of startup, and it refused my entry of 03/09/14; it returned a question mark. After several quick carriage returns, it set the clock back to 1 Jan 85, which is where it is now waiting.

Gilles Schipper of GSA responds:

While the system is up and running, you could try:

:setclock ;date=mm/dd/yyyy;time=hh:mm
:setclock  ;cancel 
:setclock ;timezone=w5:00 (for example)
:setclock ;cancel (again)

Brian Edminster of Applied Technologies notes:

I'd been quite surprised by how many small 'single machine' shops don't properly set the hardware clock to GMT with the software clock offset by 'timezone'. Instead, they have their hardware and software clocks set to the same time, then use 'setclock correction=' with either a +3600 or -3600 for the spring or fall time changes.

Allegro's got a simple command file called FIXCLOCK, on their Free Allegro Software page, that allows fixing the hardware clock AND properly setting the time-offset for the software clock -- all without having to take the system down.

Here's the jobstream code for both the spring and fall time changes. You can use this and modify it for your specific needs. Note that it's set up for the Eastern US time zone. (That's the TIMEZONE = W5:00 -- meaning the number of hours west of GMT -- and TIMEZONE = W4:00 lines.) Modify these lines as necessary for your timezone.

!JOB TIMECHG,MANAGER/user-passwd.SYS/acct-passwd;hipri;PRI=CS;OUTCLASS=,1
!
!setvar Sunday,    1
!
!setvar March,     3
!setvar November, 11
!
!showclock
!if hpday = Sunday and &
!   hpmonth = November and &
!   hpdate < 8 then
!   comment (first Sunday of November)
!  SETCLOCK TIMEZONE = W5:00
!   TELLOP ********************************************
!   TELLOP Changing the system clock to STANDARD TIME.
!   TELLOP The clock will S L O W   D O W N  until
!   TELLOP we have fallen back one hour.
!   TELLOP ********************************************
!elseif hpday = Sunday and &
!       hpmonth = March and &
!       hpdate > 7 and hpdate < 15 then
!   comment (second Sunday of March)
!  SETCLOCK TIMEZONE = W4:00
!   TELLOP *********************************************
!   TELLOP Changing the system clock to DAYLIGHT SAVINGS
!   TELLOP TIME.  The clock jumped ahead one hour.
!   TELLOP *********************************************
!else
!   comment (no changes today!)
!   TELLOP *********************************************
!   TELLOP No Standard/Daylight Savings Time Chgs Req'd
!   TELLOP *********************************************
!endif
!
!comment - to avoid 'looping' on fast CPU's pause long enough for
!comment - local clock time to be > 2:00a, even in fall...
!while hphour = 2 and hpminute = 0
!   TELLOP Pausing 1 minute... waiting to pass 2am
!   TELLOP Current Date/Time: !HPDATEF - !HPTIMEF
!   showtime
!   pause 60
!endwhile
!
!stream timechg.jcl.sys;day=sunday;at=02:00
!showclock
!EOJ

Do a SHOWCLOCK to confirm the results. Be careful, though, of any running jobs or sessions that may be clock-dependent.
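For contrast, the shortcut Edminster describes -- nudging the clocks twice a year with a raw correction instead of changing the timezone offset -- amounts to something like this (a minimal sketch in plain SETCLOCK syntax, with the correction given in seconds; it works, but it leaves the hardware clock off GMT, which is why he points to FIXCLOCK and the timezone-based job above instead):

:SETCLOCK CORRECTION=+3600   (spring: gain an hour)
:SETCLOCK CORRECTION=-3600   (fall: give it back)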

10:00 PM in Hidden Value | Permalink | Comments (0)

March 10, 2014

Getting 3000 clocks up to speed, always

The US rolled its clocks forward by one hour this past weekend. There are usually questions in this season about keeping 3000 clocks in sync, for anyone who hasn't figured this out over the last several years. US law has altered our clock-changing weekends during that time, but the process to do so is proven.

Donna Hofmeister, whose firm Allegro Consultants hosts the free nettime utility, explains how time checks on a regular basis keep your clocks, well, regular.

This past Sunday, when using SETCLOCK to set the time ahead one hour, should the timezone be advanced one hour as well?

The cure is to run a clock setting job every Sunday and not go running about twice a year. You'll gain the benefit of regular scheduling and a mostly time-sync'd system.

In step a-1 of the job supplied below you'll find the following line:

    !/NTP/CURRENT/bin/ntpdate "-B timesrv.someplace.com"

Clearly, this needs to be changed.

If for some dreadful reason you're not running NTP, you might want to check out 'nettime'. And while you're there, pick up a copy of 'bigdirs' and run it -- please!

Also, this job depends on the variable TZ being set -- which is easily done in your system logon udc:

    SETVAR TZ "PST8PDT"

Adapt as needed. And don't forget -- if your tztab file is out of date, just grab a copy from another system. It's just a file.
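If you've never cataloged the logon UDC Donna mentions, a minimal version might look like the sketch below -- the UDC name (SETTZ), the file it lives in, and the PST8PDT value are only examples to adapt:

SETTZ
OPTION LOGON, NOBREAK
SETVAR TZ "PST8PDT"
*****

Catalog it system-wide with something like :SETCATALOG UDCS.PUB.SYS;SYSTEM;APPEND, and TZ will be set at every logon before the job below needs it.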

This job below was adapted from logic developed by Paul Christidis:

!JOB SETTIME,MANAGER.SYS;OUTCLASS=,5
!TELLOP       SETTIME  
!TELLOP       ALL MPE SYSTEMS
!TELLOP ==SETTIME -- SYNCs SYSTEM CLOCK W/ TIME SERVER !
!# from the help text for setclock....
!# Results of the Time Zone Form
!#
!#   If the change in time zone is to a later time (a change to Daylight
!#   Savings Time or an "Eastern" geographic movement), both local time
!#   and the time zone offset are changed immediately.
!#
!#   The effect is that users of local system time will see an immediate
!#   jump forward to the new time zone, while users of Universal Time
!#   will see no change.
!#
!#   If the change in time zone is to an earlier time (a change from
!#   Daylight Savings to Standard Time or a "Western" geographic
!#   movement), the time zone offset is changed immediately.  Then the
!#   local time slows down until the system time corresponds to the
!#   time in the new time zone.
!#
!#   The effect is that users of local system time will see a gradual
!#   slowdown to match the new time zone, while users of Universal Time
!#   will see an immediate forward jump, then a slowdown until the
!#   system time again matches "real" Universal Time.
!#
!#   This method of changing time zones ensures that no out-of-sequence
!#   time stamps will occur either in local time or in Universal Time.
!#
!showclock
!showjob job=@j
!TELLOP =====================================  SETTIME   A-1
!
!errclear
!continue
!/NTP/CURRENT/bin/ntpdate "-B timesrv.someplace.com"
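!# (the -B option above tells ntpdate to slew the clock gradually rather than step it)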
!if hpcierr <> 0
!  echo hpcierr !hpcierr (!hpcierrmsg)
!  showvar
!  tellop NTPDATE problem
!endif
!
!tellop SETTIME -- Pausing for time adjustment to complete....
!pause 60
!
!TELLOP =====================================  SETTIME   B-1
!showclock
!
!setvar FallPoint &
!   (hpyyyy<=2006 AND (hpmonth = 10 AND hpdate > 24)) OR &
!   (hpyyyy>=2007 AND (hpmonth = 11 AND hpdate < 8))
!
!setvar SpringPoint &
!   (hpyyyy<=2006 AND (hpmonth =  4 AND hpdate< 8)) OR &
!   (hpyyyy>=2007 AND (hpmonth =  3 AND (hpdate > 7 AND hpdate < 15)))
!
!# TZ should always be found
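!# (with TZ set to "PST8PDT", rht(lft(TZ,4),1) below yields the standard-time offset "8")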
! if hpday = 1
!    if SpringPoint
!# switch to daylight savings time
!      setvar _tz_offset   ![rht(lft(TZ,4),1)]-1
!      setclock timezone=w![_tz_offset]:00
!    elseif FallPoint
!# switch to standard time
!      setvar _tz_offset   ![rht(lft(TZ,4),1)]
!      setclock timezone=w![_tz_offset]:00
!    endif
!  endif
!
!TELLOP =====================================  SETTIME   C-1
!
!showclock
!EOJ

Mark Ranft of 3k Pro added some experience with international clocks on the 3000.

If international time conversion is important to you, there are two additional things to do.

1) Set a system-wide UDC to set the TZ variable. (And perhaps account UDCs if accounts are for different locations)

:showvar tz
TZ = CST6CDT

2) There is also a tztab.lib.sys that needs to be updated when countries change when or if they do DST.

:l tztab.lib.sys
ACCOUNT=  SYS         GROUP=  LIB     

FILENAME  CODE  ------------LOGICAL RECORD-----------  ----SPACE----
                 SIZE  TYP        EOF      LIMIT R/B  SECTORS #X MX

TZTAB            1276B  VA         681        681   1       96  1  8


:print tztab.lib
# @(#) HP C/iX Library A.75.03  2008-02-26

# Mitteleuropaeische Zeit, Mitteleuropaeische Sommerzeit
MEZ-1MESZ
0 3 25-31 3  1983-2038 0   MESZ-2
0 2 24-30 9  1983-1995 0   MEZ-1
0 2 25-31 10 1996-2038 0   MEZ-1

# Middle European Time, Middle European Time Daylight Savings Time 
<< snipped >>

10:17 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 07, 2014

Clouds, Google and the HP 3000 in print

The above items really shouldn't go together, if you follow conventional wisdom. Yes, the HP 3000 has been in the clouds, so long as you consider timesharing as the cloud. I first began to cover the 3000 at a publishing company that was using cloud computing. Once a month in 1984, we logged on for timesharing at Futura Press, where an HP 3000 connected to a PC with a 3000 terminal emulator. Our operator Janine set type using an MPE program on that 3000.

But today the cloud usually means something like a server farm from Google, Apple, HP or elsewhere. These vendors make it attractive to give up hardware and let an outside provider supply what's needed. However, we all still seem to like working with paper in our offices. Cue the 3000 manager who wants cloud printing from his MPE application.

Has anyone ever tried to use Google Cloud Printing using HP 3000, via NPConfig?

Even though the answer might be no as of today, making Google's Cloud Print serve a 3000 isn't entirely out of the question -- but the magic must leap over lots of 3000 traditional wisdom. Allegro's Stan Sieler explained to the manager that "depending on what you mean, I don't think it's even remotely possible."

If you meant, "print from HP 3000 to a printer via Google Cloud," then no.  

Oh, it's possible someone could investigate the Google Cloud Print API and write some software for the HP 3000 that would intercept output sent to a "local" printer and redirect it to the Google Cloud API. But, it's not likely to happen.

Sieler outlined some other possibilities, always examining the full range of prospects.

If you meant, "print from an iOS or Android app (or a Safari/Chrome web browser) to a network printer a 3000 also prints on," then yes -- but the 3000 will be unaware it's sharing the printer with Google Cloud (or with other users, for that matter).

If you meant, “print from an iOS or Android app (or a Safari/Chrome web browser) to a printer locally attached (e.g., via HP-IB) to a 3000," then no. This would be hard, and (most likely) undesirable.

There's a pair of software solutions from RAC Consulting, Rich Corn's company that has connected the 3000 and other servers to the widest possible range of printers. Corn wasn't certain he could provide a connection as described, but he noted that the basic tools are on hand to try to create such a Google Cloud Printing solution.

I have quite a bit of experience with the Google Cloud Print API and 3000-based printing. In general, Stan is correct  -- that the effort to connect GCP and the 3000 is too large to make sense to undertake. But there might be some scenarios that would work if you used our two products: ESPUL and Cloud Print for Windows together -- and your content is suitable. If you'd care to be more specific on what you want to do, then there might be a solution.

To create Cloud Print for Windows, Corn drew on decades of expertise in attaching print devices to HP business servers, building software that lets Windows systems employ the Google Cloud Print virtual printer service. So long as your printer's host can connect to the Web, Cloud Printing can be accessed from other desktops online.

Cloud Print for Windows then monitors these virtual printers and prints jobs submitted to a virtual printer on the corresponding local PC printer. In addition, Cloud Print for Windows supports printing from your PC to Google Cloud Print virtual printers. All without any need for the Chrome browser.

People expect Windows to be a more affordable platform per desktop, but the costs can add up. Employing cloud services can keep things more manageable in a budget. Cloud Print for Windows costs just $19 a seat. 

06:13 PM in Homesteading | Permalink | Comments (0)

March 06, 2014

Reducing the Costs in a Major MS Migration

Look all around your world, anywhere, and you'll see XP. Windows XP, of course, an operating system that Microsoft is serious about obsoleting in a month. That doesn't seem to deter the world from continuing to use it, though. XP is like MPE. Where it's installed, it's working. And getting it out of service, replacing it with the next generation, has serious costs. It will remind a system manager of replacing a 3000, in the aggregate. Not as much per PC. But together, a significant migration cost.

The real challenge lies in needed upgrades to all the other software installed on the Windows PCs.

There's a way to keep down the costs related to this switch. MB Foster reminded us that they've got a means to keep the PC-to-3000 connection current as Windows gets updated.

Microsoft will end support for Windows XP on April 8, 2014. MB Foster has noticed companies moving to Windows 7/8 with an eye toward leveraging 64-bit architectures, reducing risks and standardizing on a currently supported operating system.

As an authorized reseller of Attachmate's Reflection terminal emulation software, we advise you that now is the time to seize the opportunity and minimize risks -- and get the most out of your IT investments.

The key to keeping down these costs is something called a Volume Purchase Agreement. It's an ownership license that HP 3000 shops may not have employed up to now, but its terms have improved. MB Foster's been selling and supporting Reflection ever since the product was called PC2622, and ran from the DOS prompt. Over those three decades, the company estimates it's been responsible for a million or more desktops during the PC boom, when 3000 owners were heavy into another kind of migration: replacement of HP2392 hardwired terminals. "Today, we are responsible for the management and maintenance of approximately 50,000 desktops," Foster's Accounts Manager Chris Whitehead said.

Upgrading Reflection is a natural step in the migration away from Windows XP. "We recommend upgrading terminal emulation software to Windows 7/8 compatible versions," said Whitehead. "As your partner we can make it easy and convenient to administrate licenses, reduce year over year costs, secure a lower price per unit and for your company to gain some amazing efficiencies."

For a site that has individual licenses of Reflection or a competing product, there's an opportunity to move into a Volume Purchase Agreement (VPA). The minimum entry is now only 10 units, Whitehead explains.

Years ago, when the product was sold by WRQ, the minimum for a Reflection limited site license was 25. Then it went to a point system. Now a minimum of just 10 units is required for a Volume Purchase Agreement. The VPA provides a mechanism for maintaining licenses on an annual basis -- meaning free upgrades and support. It also provides price protection, typically giving a client a lower price per unit when compared to a single-unit purchase. The VPA also allows you to transition from one flavor of Reflection to another, such as going from Reflection for HP to Reflection for Unix, and at a lower cost.

If a site is already a Volume Purchase Agreement (VPA) customer but hasn't maintained the agreement, Whitehead suggests considering a reactivation. During the reactivation process you can:

• Consolidate and upgrade licenses into one or more standardized solutions

• Surrender / retire licenses no longer needed or required

• Trade in competing products

• Only maintain the licenses needed

Details are available from Whitehead at cwhitehead@mbfoster.com.

08:35 PM in Homesteading, Migration | Permalink | Comments (0)

March 05, 2014

What does a performance index represent?

I know this may be a tough question to answer, but thought I'd at least give it a try.

I'm doing an analysis to possibly upgrade our production 959KS/100 system to a 979KS/200, and I see the Hewlett-Packard performance metric chart that tells me we go from a 4.6 to 14.6. What does that increase represent? For instance, does each whole number (like 4.0 to 5.0) represent a general percentage increase in performance? I know it varies from one shop to another, so I'm just looking for a general guideline or personal experience -- like a job that used to take 10 hours to run now only takes 7 hours. The "personal experience" part of this may not even be appropriate, in that the upgrades may not be close to the metrics I am looking at.

Peter Eggers offers this reply, still worthy after several years:

Those performance numbers are multiples of a popular system way back when, based on an average application mix as determined by HP after monitoring some systems and probably some system logs of loads on customer systems. No information here as to where you are on the many performance bell curves. The idea is to balance your system resources to match your application load, with enough of a margin to get you through to the next hardware upgrade.

People mention system and application tuning. You have to weigh time spent tuning and expected resource savings against the cost of an upgrade with the system and applications as is.  Sometimes you can gain amazing savings with minor changes and little time spent.  Don't forget to add in time to test, QA, and admin time for change management.

There are a many things to consider: CPU speed and any on chip caching; memory cache(s) size and speed; main memory size and speed; number of I/O channels and bandwidth; online communication topography, bandwidth, and strategy; online vs. batch priorities, and respective time slices; database and file design, access, locking, and cache hit strategies; application efficiency, tightening loops to fit memory caches, and compiler optimizations; and system load leveling.

Since you didn't understand the performance numbers, you might hire a good performance consultant who knows the HP 3000. Of course, look for the "low hanging fruit" first for the biggest bang for the buck, and continue "up the tree" until you lose a net positive return on time invested.

You'll also hear it mentioned that adding memory won't help if the system is IO-bound. That is typically not the case, as more memory means more caching which can help eliminate IOs by retrieving data from cache, sometimes with dramatic improvements. This highlights the need for a good performance guru -- as it is easy to get lost in the details, or not be able to see "the big picture" and how it all fits together.
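To put the original question in concrete terms: the ratings work as multipliers, not as a percentage per point. Dividing the two figures from the question with the shell's bc calculator (a quick sketch; actual mileage varies with workload, as Eggers notes) suggests a bit more than three times the relative horsepower:

/SYS/PUB $ echo "scale=2; 14.6 / 4.6" | bc
3.17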

Aside from Eggers' advice, we take note of the last time HP rated its 3000 line.

At HP World in 2002, HP announced the final new 3000 systems, all based upon the PA-8700 processor. At the high end was a new N-Class system built on the 750 MHz PA-8700. The new N4000-400-750 was the first HP e3000 to achieve an MPE/iX Relative Performance Units (MRPU) rating of 100; the Series 918 has an MRPU of 1.

HP contends that the MRPU is the only valid way to measure the relative performance of MPE systems. In particular, they maintain that the MHz rating is not a valid measure of relative performance, though they continue to use virtual MHz numbers for systems with software-crippled processors. For example, there are no 380 MHz or 500 MHz PA-RISC processors. Unfortunately, the MRPU does not allow for the comparison of the HP e3000 with other systems, even the HP 9000.

HP has changed the way it rates systems three times over the life of the HP 3000. During the middle years, the Series 918 was the standard with a rating of 1. In 1998, HP devised a new measurement standard for the systems it was introducing that no longer had the Series 918 at 1. It is under this new system that the N4000-400-750 is rated at 100. Applying a correction factor, AICS Research has rated the N4000-400-750 at 76.8 relative to the Series 918’s rating of 1.

08:36 PM in Hidden Value, Homesteading, News Outta HP | Permalink | Comments (0)

March 04, 2014

Experts show how to use shell from MPE

I am attempting to convert a string into a number for use in timing computations inside an MPE/iX job stream. In the POSIX shell I can do this:

/SYS/PUB $ echo "21 + 21" | bc
42
/SYS/PUB $

But from the MPE command line this returns blank:

run sh.hpbin.sys;info='-c echo "21 + 21" | bc'

But why? I would like to calculate a formula containing factors of arbitrary decimal precision and assign the integer result to a variable.  Inside the shell I can do this:

shell/iX> x=$(echo "31.1 * 4.7" | bc)
shell/iX> echo $x
146.1
shell/iX> x=$(echo "31.1 * 4.7 + 2" | bc)
shell/iX> echo $x
148.1
shell/iX> x=$(echo "31.1 * 4.70 + 2" | bc)
shell/iX> echo $x
148.17

What I would like to do is the same thing, albeit at the MPE : prompt instead, and assign the result to an MPE variable.

Donna Hofmeister of Allegro replies

CI numeric variables only handle integers (whole numbers).  If your answer needs to be expressed with a decimal value (like 148.17 as shown above) you might be able to do something to express it as a string to the CI (setvar string_x "!x").

This is really sounding like something that's best handled by another solution -- like a compiled program or maybe a perl script.

For what it’s worth, the perl bundle that's available from Allegro has the MPE extensions included.  This means you could take advantage of perl's 'getoptions' as well as 'hpcicmds' (if you really need to get your result available at the CI level).

Barry Lake of Allegro adds

The answer to your question of why, for the record, is that the first token is what's passed to the shell as the command to execute. In this case, the first token is simply "echo", and the rest of the command is either eaten or ignored.

To fix it, the entire command needs to be a single string passed to the shell, as in:

:run sh.hpbin.sys; info='-c "echo 21 + 21 | bc"'
 42
 END OF PROGRAM
 :

And if you want to clean that up a bit you can use XEQ instead of RUN:

 :xeq sh.hpbin.sys '-c "echo 21 + 21 | bc"'
 42

Or, you can do it with, for example, Vesoft's MPEX:

 : mpex

 MPEX/3000  34N60120  (c) VESOFT Inc, 1980  7.5  04:07407  For help type 'HELP'

 % setvar pi 3.14159
 % setvar r  4.5
 % calc !pi * !r * !r
            63.617191
 % setvar Area !pi * !r * !r
 % showvar Area
 AREA =            63.617191
 % exit

 END OF PROGRAM
 :

But the only thing I want is to be able to use a compiled program which handles arbitrary-precision variables from inside a job stream -- such that I can return the integer part of the result to an MPE/iX variable.

Barry Lake replies

If you're happy with truncating your arithmetic result — that is, lopping off everything to the right of the decimal point, including the decimal point — then here's one way to do it:

/SYS/PUB $ echo "31.1 * 4.7" | bc
 146.1
 /SYS/PUB $ echo "31.1 * 4.7" | bc | cut -f1 -d.
 146
 /SYS/PUB $ callci setvar result $(echo "31.1 * 4.7" | bc | cut -f1 -d.)
 /SYS/PUB $ callci showvar result
 RESULT = 146
 /SYS/PUB $ exit

 END OF PROGRAM
 : showvar result
 RESULT = 146
 :

Perfect! Thank you. And this construct accepts CI VAR values as I require.

:SETVAR V1 "31.1"
:SETVAR v2 "4.7"
:XEQ SH.HPBIN.SYS;INFO='-c "callci setvar result $(echo ""!V1 * !V2"" | bc |
cut -f1 -d.)"'
:SHOWVAR RESULT
RESULT = 146
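One footnote to Barry's truncation approach: if rounding to the nearest integer matters, bc can handle that too, by adding 0.5 before an integer divide (a sketch with made-up values, since 146.17 happens to truncate and round to the same number):

/SYS/PUB $ echo "(31.1 * 4.75 + 0.5) / 1" | bc
148

With bc's default scale of zero, the divide by 1 drops the fraction, so 147.72 rounds up to 148 instead of truncating to 147.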

05:55 PM in Hidden Value, Homesteading | Permalink | Comments (0)

March 03, 2014

Cloudy night shows that it's Magic Time

Server drives churn, routers flash, and time machines transport us through the power of stories. In our own community we are connected by wires and circuits and pulses of power. We always were, from days of black arts datacomm pushing data on cards of punched paper. We’ve lived through a glorious explosion of ideas and inspiration and instruction. It’s the movie that always has another story in waiting, this Internet. So ubiquitous we’ve stopped calling it by that name. In 2014, 40 years after MPE became viable and alive, the World Wide Web is named after an element common throughout the physical world: The Cloud.

And through the magic of these clouds come stories that lead us forward and allow us to look back at solved challenges. My partner Abby and I sit on the sofa these days and play with paper together, crossword puzzles, especially on weekends with the New York Times and LA Times puzzles. We look up answers from that cloud, and it delivers us stories. The Kingston Trio’s hit BMT leads us to The Smothers Brothers, starting out as a comic folksinger act. After video came alive for the HP 3000 in HP strategy TV broadcasts via satellite, there were webinars. Today, YouTube holds stories of the 3000’s shiniest moment, the debut of the ultimate model of that server.

Last night we sat on another couch in the house and watched the splashiest celebration of stories in our connected world, the Academy Awards. Despite racking up a fistful and more of them, Gravity didn’t take the Best Picture prize. You can have many elements of success, parts of being the best, and not end up named the winner of the final balloting. The 3000 saw a similar tally, a raft of successes, but the light began to fade. In the movies they call the last light of the day magic time, because it casts the sweetest shades on the players and settings.

It’s magic time for many of the 3000’s stalwart members of its special academy. The 3000 remains a time machine in your reaches of space. Data is like gravity, a force to unify and propel. MPE systems contain ample gravity: importance to users, plus the grounding of data. It becomes information, then stories, and finally wisdom.

And in our magic time, we are blessed with the time machine of the Web, the cloud. You can look up earlier wisdom of this community online, written in stories, illustrated in video, told via audio. Find it in the cloud at the following resources:

The HP Computer Museum

3K Associates

The hosts of the HP Jazz papers, Client Systems and Fresche Legacy

The MM II Support Group

MPE Open Source.org

Plus, the companies that have kept websites stocked with stories about how to keep the magic lantern light of your system flashing onto screens. I’m grateful to have been part of that set of producers, directors and writers for the screen. It’s an exciting time to be able to move paper, as well as move beyond it with the speed of electrons. We’ve all grasped the tool of the Web with our whole hearts — even while we remember how to gather in a room like all those moviemakers did, to remember. There are many ways to honor the art of our story. 

07:41 PM in Homesteading, Web Resources | Permalink | Comments (0)