July 31, 2015
Zero day attacks: reports are dangerous, too
News has started to roil through the Android community about a fresh MMS attack vector for those devices, and last month reports rolled out about a similarly dangerous zero-day malware attack for Apple iOS. But what is zero day, and how can the news of these exploits be as damaging as the malware itself? Our security expert Steve Hardwick explains in this edition of Essential Skills, covering the non-3000 skillset for multi-talented MPE pros.
By Steve Hardwick, CISSP
Many computer users do not understand the term Zero Day or why it is so serious. To understand the term, it is first necessary to understand how an exploit works. In general, there are four types of exploits used against computers:
1. Social attacks, phishing for example, which cause a user to unintentionally disclose information to a hacker.
2. Trojan horses, viruses that hide in otherwise legitimate applications. Once the legitimate application is launched, the Trojan horse releases the virus it contains.
3. Web attacks that trick users into divulging personal information by exploiting weaknesses in browsers and web server software.
4. Application and OS attacks that use errors in the code to exploit the computer's programming.
With the exception of the first category, these attacks rely on exploiting weaknesses in the underlying operating system and application code that runs on the computer. To prevent this type of illicit access, the mechanism by which the malware operates must first be understood. Many researchers therefore examine operating code and look for these types of flaws. So do thousands of hackers. The challenge becomes how to mitigate such a vulnerability before it becomes a virus in the wild. That's where the Zero Day marker comes into play.
The first, obvious response would be to fix the broken code. Although it sounds simple enough, it is not as straightforward as it seems. To prevent this type of condition from occurring in the first place, software vendors have development and test cycles that may take days or even weeks to complete. After all, it would not be good to develop a patch for one hole in the code only to create more. So it takes a finite period of time to detect the exploitation method the malware is using and then produce a patch that will fix the hole.
In many cases the research is done behind the scenes, and the security hole is fixed before it ever is exploited by hackers. In other cases a virus is spotted and the failure mechanism is already understood and a patch is in the works. For example, an application is compromised and the developer notices similar conditions can occur in other programs the software vendor produces.
Another response is to use anti-malware to protect against the threat. One of the main ways anti-malware works is to look for signature patterns in downloaded or executing code. These patterns are stored in a virus definition database. The supplier of the anti-malware solution will develop a profile of the malware and then supply a new definition to the database. As with the distribution of software patches, it takes time to define the profile, produce the signature definition, then test and distribute it. Only when the signature profile has been distributed is the computer system protected again.
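The definition-database approach described above can be sketched in a few lines. This is a minimal illustration of the principle, not how any commercial scanner is built; the signature names and byte patterns here are invented for the example.

```python
# Minimal sketch of signature-based scanning, as anti-malware
# definition databases work in principle. The byte patterns here
# are invented for illustration, not real virus signatures.

# "Definition database": maps a signature name to a byte pattern.
DEFINITIONS = {
    "demo-trojan-a": bytes.fromhex("deadbeef"),
    "demo-worm-b": b"\x4d\x5a\x90\x00\x03",
}

def scan(payload: bytes) -> list[str]:
    """Return the names of any known signatures found in the payload."""
    return [name for name, pattern in DEFINITIONS.items()
            if pattern in payload]

clean = b"just an ordinary document"
infected = b"header" + bytes.fromhex("deadbeef") + b"trailer"

print(scan(clean))     # []
print(scan(infected))  # ['demo-trojan-a']
```

The sketch also shows why the zero-day window exists: a payload whose pattern is not yet in DEFINITIONS scans as clean, no matter how malicious it is, until the vendor ships an updated database.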
The time at which the malware is detected is called the zero day — as this starts the clock on the time between the detection and the distribution of the remedy. In the case of the software vendor, this would mean a patch for the broken code. In the case of the anti-malware vendor it is the time to provide the signature and deploy it.
The anti-malware vendor has the advantage that they are not supplying software to the machine. In many respects it is quicker to generate the signature and distribute it. For the software vendor there is the task of verifying that any new code does not affect the operation of the product, nor create any new vulnerabilities.
In either case, it is a race against time between the hackers on one side and the anti-malware or software vendor on the other. Furthermore, the end user is also in the fray. Whether it is a signature definition or a patch, the end user must download and install it. In many cases this can be automated; however, end users must have selected that option in the first place.
So when a zero day virus is announced, it means the vulnerability has been made public and the software community needs to start responding. There is a lot of debate about the merits of announcing zero day exploits. There is concern that lower-skilled hackers will take advantage of the free research and start to deploy viruses that exploit the disclosed vulnerability. The counter concern, as portrayed in the article about iOS cited at the beginning, is that the software vendor will not act on the research. No matter which side your opinion falls on, it does not change the fact that a virus without a known cure is a very dangerous beast.
July 30, 2015
TBT: HP Image goes dead. Long live IMAGE
It was 1988, and Adager co-creator Alfredo Rego had already skied for Guatemala in the Winter Olympics. Months later, with the Summer Olympics at hand, Hewlett-Packard killed off development of a new database for the HP 3000. The project was supposed to give the server a spot on industry-wide benchmark charts, HP believed. But HP Image was only 98 percent compatible with TurboIMAGE, and that's 2 percent short of being usable. HP Image abdicated the throne HP intended for it to a TurboIMAGE rewritten for the brand-new Spectrum-class 3000s.
The move matters today because it marks a turning point in the march toward industry standards for the 3000. The server has been legendary for preserving its customers' investments, like app development. A from-the-ground-up SQL database might have helped put the 3000 into a more homogeneous tier during the Open Systems era. Of course, HP would've had to create a database that worked for existing customer apps. HP Image was not that database.
HP's step back from HP Image in the summer of 1988 came after more than two years of development, lab work that hit the wall after test users tried to make their applications and data fit the product. After dropping that baton, HP raced to put the HP SQL of Allbase/SQL to work making 3000 and 9000 apps compatible.
In an HP Chronicle article I wrote back then, I quoted developer Gavin Scott while he was at American Data Industries. By that summer, HP had managed to move TurboIMAGE onto MPE XL 1.1. "Pulling the Turbo database into the Allbase concept appears to have reaped some benefit for users," I wrote. "In Scott's view, it's faster and still compatible, a rare combination."
It works flawlessly, and it is quite fast. Native Mode TurboIMAGE works exactly the way old TurboIMAGE did, even to the extent that it still aligns all of the data on half-word boundaries. You have to take that into account when you're writing Native Mode programs to access Native Mode TurboIMAGE; it will be slightly less efficient, because you have to tell your program to use the Classic 3000 packing method when you go to access the database.
That summer marked the point when HP had to give up on creating an IMAGE replacement for the brand-new MPE XL. HP eventually supplied a native SQL interface for IMAGE, thereby taking that product into its IMAGE/SQL days. But HP Image never would have been proposed if the vendor wasn't thinking about attracting SQL-hungry customers from other platforms with a new database scheme.
That era's TPC benchmarks, built around tests using Oracle and other SQL databases, were being used by IBM and DEC to win new enterprise customers. HP could only counter with the HP 9000 and HP-UX, and it needed another entry in that benchmark derby. TurboIMAGE was too boutique to qualify for a TPC test, the suites that were created to pit hardware vendors against each other. What would be the point of making a TPC test that required a non-SQL IMAGE? Only HP's IMAGE-ready systems could be compared there.
Instead, HP eventually had to pay close attention to retaining IMAGE ISVs and users. Scott commented this week on how that turning point came to pass in the late '80s.
Just as MPE suffered because management (really mostly the technical influencers and decision makers, not upper management) decided that Open Systems (which meant Unix) were the way of the future, I think the HP database lab had some PhD types who were convinced that SQL and relational was the answer to everything, without understanding the issues MPE faced with compatibility.
They tried to build one relational core engine that had both an SQL and an Image API, but for a long list of reasons this could not be made 100 percent compatible with TurboIMAGE, so you just could not run an existing 3000 application on top of it without major changes -- which was of course a non-starter for customers wanting to move from MPE/V to MPE/XL.
HP had already received a better strategy from independent vendors, advice HP chose to ignore. Deep in the heart of IMAGE lie routines and modules written in SPL, the foundational language for MPE and the 3000. SPL was going to need a Native Mode version to move these bedrock elements like IMAGE to the new generation of 3000s powered by RISC chips. But HP's language labs said an SPL II was impossible, because SPL wasn't defined well enough. So trying to leverage the Allbase transaction processor, HP galloped into building HP Image, using Modcal, its modified version of Pascal that already drove many MPE XL routines and subsystems.
As it turned out, it was easier to create a Native Mode SPL than to make a new SQL database that was 100 percent compatible with TurboIMAGE. Steve Cooper of Allegro, the company that partnered with Denkart and SRN to create the second generation of SPL with SPLash!, said 98 percent compatibility never succeeds.
"Just like something can never be very unique -- it's just unique -- software can't ever be very compatible. It's compatible, or it isn't." DBGET calls in TurboIMAGE worked faster than DBGET ever would in HP Image. The number of items is reported in TurboIMAGE's DBGET automatically. HP Image had to run through a DBGET chainhead from stem to stern once again to get that number, "and that's a lot more IOs," Cooper said. Scott noted that native TurboIMAGE was a direct result of that independent language work on SPL.
The ultimate solution was to basically give up on HP Image completely and simply port TurboIMAGE from MPE/V to MPE/XL, which actually turned out to be relatively easy (after they stole the ideas surrounding the architecture of the SPLash! compiler to make their own Native Mode SPL II compiler, which is what TurboImage was written in). HP's language guys spent several years saying a Native Mode SPL compiler was not practical -- but of course SRN, Denkart and Allegro succeeded with SPLash!, thus making them look stupid.
Scott said TurboIMAGE was too simple to need SQL's prospective advantages. It was just a fast network-model database with a common API that thousands of apps were using.
HP Image and Allbase/SQL were big and bulky and complex, and thus a lot slower than TurboImage once it got to Native Mode. Today the world runs primarily on SQL/relational databases, up until you get to Big Data distributed no-SQL databases used in huge clusters. But in those days TurboIMAGE had the big advantage of simplicity, and the biggest advantage of having an API that all existing HP 3000 applications were already written to.
I'm not sure about "turning point" for the database labs. I think they just continued on doing their Allbase stuff; they just didn't have to think about Image anymore. It was intrepid programmers at CSY that got TurboImage working (with help from the compiler guys), and TurboImage remained simply one other MPE subsystem, not really part of any "database lab" which wouldn't care about a crusty old proprietary non-relational database.
July 29, 2015
Carrying ODBC Links Into Windows Use
Software that helps HP 3000s remain relevant is still being sold and still working. MB Foster sent an email this morning that reminded the 3000 community they've got a leg up on important connectivity. It's called ODBCLink/SE, installed on every HP 3000 that has the 5.5 release of MPE/iX running. It could also use some updating.
MB Foster's Chris Whitehead annotated the distinctions between ODBCLink/SE and its fully-grown sibling, UDALink. "Numerous organizations continue to utilize ODBCLink/SE (a Special Edition of MB Foster technology, developed and distributed by HP on HP 3000s and HP-UX servers)," he wrote. "ODBCLink/SE's ability to adapt to new technologies such as JDBC or Windows 7/8, or 64-bit architectures, is severely limited." UDALink is the means to bridge those limits.
We've been tracking the ODBC functionality of MPE/iX and IMAGE since the beginning — ours, as well as the customer demand. In 1994 MB Foster started selling ODBCLink for connecting to desktops. The start of widespread demand for better SQL access was in the fall of 1995, at the same time the NewsWire launched. HP labored to build access, and that labor progressed slowly. By December 1996 we pointed out in an editorial that deliberate work from MB Foster's engineers was going to bridge the HP gap.
The 32-bit world that Win95 created didn't have an HP-supplied path between HP 3000 databases and those slick, graphical interfaces on PC desktops. Third parties have stepped in to sell what HP is still working to bundle. Companies using ODBCLink praise the product and the connectivity it brings. So much praise has rained down that HP decided it should buy what it has been too slow to build. A deal was signed between HP and M.B. Foster. ODBCLink gets a trimmed-down cousin, ODBCLink/SE.
HP got out of the PC-based software business by turning to ODBCLink/SE. There's an extensive table in today's MB Foster email that shows why this free software in HP's MPE FOS has significant shortcomings. Updating this kind of essential tool can be a big step in keeping a homestead 3000 in the loop for corporate data. It's a story as true today as it was 20 years ago.
By 1996, Birket Foster said if customers could consider ODBCLink/SE as entry-level software, "we feel there is room in the market for both an entry level and a full-featured commercial solution. I think the issue is 'free' software, not the ability to solve the ODBC problem. That solution has been here for awhile."
In 1997 HP first shipped ODBCLink/SE from Foster's labs, wired into the MPE/iX 5.5 Express 3 release. We called it "the long-awaited ODBC driver to support 32-bit clients."
While customers must still navigate the complexity of Allbase/SQL and its attach/detach challenges, the software will finally give IMAGE/SQL customers a way to use things like Crystal Reports and Access 97 to tap their databases with both read and write capability. That capability is becoming more commonplace -- and much simpler and faster, by some reports -- if you're willing to buy a third-party tool to do the job.
As soon as SE was out there, people began to decide if bundled software was valuable enough to succeed. In a 1998 review, Joe Geiser said of that software, "it requires the use of IMAGE/SQL and the obligatory attachment of databases. Larger projects, however, should be using a commercial driver. The reasons are simple — ODBCLink/SE is the slowest of all of the available drivers, and consumes more connections to the HP 3000 than its commercial counterparts, issues which contribute to problems with response time and performance."
Geiser didn't even get to consider that 32-bit computing was going to fall a full generation behind the coming standard. 64-bit Windows didn't become the mainstream standard until Windows 7.
In a 1998 review, another of our writers, John Burke, asked, "Are you going to use ODBC for anything more than casual, occasional access to a handful of IMAGE databases? If the answer is “yes,” then you need to consider the hidden costs of the free ODBCLink/SE."
In 1999, retiring GM Harry Sterling chronicled HP's reliance on the SQL lab at MB Foster.
Customers buy the hardware from us, but there’s no value in our software to them. For any added value in software above the base, we prefer our partners do that. It frees us up to do the operating system stuff, and it gives the partners a revenue stream. I could see we were going to have another year of investment, and no revenue for that. I re-channeled those engineers back into networking, which was more critical for us at the time.
I canceled our whole ODBC driver project. It was halfway implemented, and I said “We’re not going to do this. It’s clear the customers are not going to pay for it. They want it free.” We started negotiating with Birket for ODBCLink/SE. We said hey, if you sell this and build stuff on top of it, and are willing to contribute the base driver to FOS, we’ll put out that base driver for nothing, and you can sell products on top of it.
Two decades later, Foster's still willing to sell products on top of that base driver. Homesteaders can stick to solid tools, so long as they're being advanced and updated.
July 28, 2015
Winds of change blow through HP's closets
It's time to check back in with Hewlett-Packard, the vendor providing enterprise servers and solutions for a meaningful section of the 3000 migrators. Our latest news update involves poaching employees and a nouveau dress code, a subset of the things that any splitting-up corporation might be handling.
Details of the HP split into HP Enterprise and HP Inc were rolled out earlier this month, and there's explicit language on how the workforce will be handled once it is halved. Each of the new entities has a one-year embargo on even approaching the other's employees for hiring. For the six months beginning in November of 2015 -- a period when a lot of serious hiring gets delayed -- the two companies cannot hire from the other's ranks. If an employee is fired by Enterprise or Inc, then it's open season.
To sum up, if a talented HP staffer wants to work at the other HP before next June, getting fired is the fastest way to get permission. That might turn out to matter more than it appears, since the company just floated a memo in the Enterprise Services group, including HP-UX and ProLiant operations, about professional dress, according to a report from the website The Register.
Men should avoid turning up to the office in T-shirts with no collars, faded or torn jeans, shorts, baseball caps and other headwear, sportswear, and sandals and other open shoes. Women are advised not to wear short skirts, faded or torn jeans, low-cut dresses, sandals, crazy high heels, and too much jewelry.
It wouldn't be unprecedented. When former CEO Carly Fiorina took her first tour of former Compaq facilities, post-merger, employees there were told to don "western wear" as a welcoming gesture.
That was at least a merger. Nothing the size of Hewlett-Packard has ever tried to cleave itself into two complementary pieces remaining in the same business sector. This is uncharted territory, but a dress code memo and limited job transfer options might deliver some new talent into the non-HP workforce. Meanwhile, the current CEO says that turning around the company has been relatively easy.
That memo isn't public, but the workforce hiring practices are part of an SEC Form 10 filing that runs more than 300 pages. HP is issuing new email addresses for the employees leaving the full company to work for Enterprise. The domain will now be hpe.com, although the old addresses will work for a year from the Oct. 31 split-up date.
CEO Meg Whitman, who'll lead Enterprise, said the latest chapter of her turnaround of the company is easier than running for California governor. On a Bloomberg TV show, Whitman said running her political campaign that lost the election was harder than a turnaround.
You know, you get up every morning, and you fight the good fight, and you win hearts and minds of HP people, and you restore the confidence of customers and partners. So it’s been hard but it’s been really gratifying. And I have to say, relative to running for governor, this is easy.
The nuances and operational detail of creating two $60 billion entities — which will partner to buy supplies, to jointly sell products to customers, and to share patents and other intellectual property — might start to tax Whitman's estimation. Intellectual property at HP is held in the HP Development Company. The MPE/iX licensing is probably going to remain with Enterprise. If there were any technical resources in the 3000 community that want to take on that license, HP may be up for cleaning out its Enterprise closets.
Sorry, just kidding. Nobody wants those engineers to stop wearing t-shirts, either, do they? It's an exercise for the reader to decide which proposition seems sillier in this season of change at HP.
July 27, 2015
N-Class 3000s offer subtle bathtub curves
A serious question on today's HP3000 newsgroup emerged about server reliability. The best answer came from an HP engineer whose career features more than 15 years of IO design and maintenance on hardware systems including that ultimate 3000 N-Class system. And along the way, Jim Hawkins introduced many of us to the bathtub-curve charting strategy.
It looks like a bathtub, this chart of how reliable hardware can be. High left-hand side, the part of a product lifecycle called infant mortality. Long-term youth to middle-age to early senior years, the flat, stable part of the bathtub. Finally the end of life, that sharp upswing on the right where moving parts wear out.
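The three phases of that curve can be modeled as a hazard (failure-rate) function: a decaying infant-mortality term, a flat random-failure term, and a rising wear-out term. The coefficients below are made up purely for illustration; they are not measured N-Class data.

```python
# A sketch of the bathtub curve as a hazard (failure-rate) function:
# the sum of a decreasing infant-mortality term, a constant
# random-failure term, and an increasing wear-out term. All
# coefficients are invented for illustration, not measured data.
import math

def hazard(age_years: float) -> float:
    infant = 0.30 * math.exp(-2.0 * age_years)    # early defects fade
    random_rate = 0.02                            # flat useful life
    wearout = 0.0001 * age_years ** 3             # moving parts give out
    return infant + random_rate + wearout

young, middle, old = hazard(0.1), hazard(6.0), hazard(14.0)
# Failure rate is high at first, low in mid-life, high again late.
print(young, middle, old)
```

Plotting hazard() against age traces the tub shape: the left wall is the infant term, the flat bottom is the constant term, and the right wall is the cubic wear-out term, which is where a 12-year-old server sits.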
The question was posed to the newsgroup readers by Steven Ruffalo.
I'm concerned about the reliability, going forward, of our N-Class servers. Are there any studies or metrics that could be used to determine how the failure rates of the parts on/in the N-Class will increase linearly with the age of the equipment? I would imagine this would be true for any system, but we have had an increase in processor failures over the last year. Is this coincidental, or should we start trying to stockpile additional spares?
According to Hawkins, there's been no tracking of N-Class hardware reliability by HP, which introduced the first N-Class models within a year of announcing it would be exiting the 3000 business. But he offered anecdotal, your mileage may vary, caveat emptor advice. He advised the 3000 owner that "You are in uncharted territory. Literally."
Typically reliability folks talk about the "bathtub curve" of failure rates: a high initial failure rate ("infant mortality"), a long, low "stable failure rate," and an accelerating "wear-out" phase. I don't know anywhere where there is enough decent data to track long-term reliability for N-Class populations at a statistical level with reasonable confidence bounds (even inside HP).
I will say anecdotally the N-Class itself was not subject to any large quality issues that I can recall. That is, I have some recollection of issues both in K/T and following rx/rp ZX1 and ZX2 systems but, while my attention may have wandered, things seem to have been pretty solid for N-Class.
(That's a reference to the K-Class and T-Class servers, known as the Series 9x9 and 9xx systems in 3000-speak.)
"I don't have any data to project when or if you'll see a rapid rise in parts replacement needs," Hawkins added, "the far side of the reliability bathtub curve."
Moving parts are the first to wear out in any computing device, but Hawkins noted that "movement includes thermal cycles through on/off switching, or even temperature swings if you don't have well-managed HVAC." There's a reasonable lifespan for everything, and those N-Class systems are at least 12 years old by now. A user might consider how long to trust a 12-year-old disc drive, and give some thought to the reliability goals for solid-state components. Burnouts were pretty rare in the stories we've heard about HP servers which run MPE/iX. For the time being, a lot of N-Class owners are enjoying HP engineering that's had a smooth bottom.
July 24, 2015
3000 world loses a point of technical light
Veteran engineer and developer Jack Connor passed out of worlds including the HP 3000's this month, dying at age 69 after a long career of support, volunteering, and generous aid to MPE users.
In a death notice posted on his local funeral chapel's website, Connor's story included Vietnam era military service, a drag racing record, and playing bass on Yummy, Yummy, Yummy, I Got Love In My Tummy, a single that went to No. 4 on the US charts. He had been the proprietor of a bar in Columbia, Missouri, known as Nasties, and a tea house in Columbus, Ohio, The Venus Fly Trap.
Connor played a role in the volunteer efforts for OpenMPE in the last decade. He was also the worldwide account manager for HP and DuPont in the 1970s and 80s, and the death notice reports he was involved in the first satellite uplink in history for commercial purposes. At the time of his death Connor was working at Abtech Systems and Support from Indiana, and at his own company, InfoWorks, Inc. In the months that followed HP's shutdown of its MPE lab, he created NoWait/iX, software that eliminated the wait for an HP technician to arrive, on a rush-charge time and materials call, to transfer an old HPSUSAN to a new 3000 CPU board.
NoWait/iX was intended for use "until HP can be scheduled on site at both HP and the customer’s convenience -- and not paying the emergency uplift charge," Connor said. "However, if a customer has a third-party tool which is no longer supported, or licensing is no longer available for an upgrade, NoWait/iX can operate indefinitely, returning the old information to that single product."
In the waning months of OpenMPE's activity, he chaired the board of directors and promoted the creation of a new Invent3k shared server. "Making Invent3K a repository for the community is the primary focus," he reported to us in 2011.
Connor was a frequent contributor of free tech savvy to the 3000 community, using the 3000 newsgroup as a favored outlet. Just this spring we relayed his advice about linking a 3000 with existing networks. The question: What do I need to do on our MPE boxes to ensure that they will see new networking hardware? Does MPE cache the MAC address of neighbor gateways anywhere? I was thinking I needed to restart networking services, but I wasn't sure if anything more would be needed.
If you're taking it off the air for the network changes, I'd go ahead and close the network down until the work has completed and then reopen it. MPE will be looking for the IPs as it opens up. I know you can see the MAC addresses in NETTOOL, but I don't think they're of any import other than informational and for DTC traffic.
While serving on the OpenMPE board of directors, he also tracked down a data-at-rest security solution compatible with HP 3000s. 10ZiG's Security Group still sells the Q3 and Q3i appliances, one of which Connor put between a Digital Linear Tape device and a 3000. The results impressed him for a device that costs a few thousand dollars -- and will work with any host.
I tested an encryption box that sits between the DLT and IO card a year or so ago and it worked like a champ. It maintained streaming mode and all. However, it was in the $2,000-$3,000 range — and to be useful for a DR world, it would require two, so I haven't pursued actually recommending it.
He often helped out with IO and storage device questions in the 3000 community. For the Series 927LX, he noted that a DLT tape drive could be installed in the server that was designed in the early 1990s.
"This is not a problem as long as you have a free slot, or an open 28696A fast-wide card," he said. "I believe you need to be on MPE/iX 6.0 or 6.5 to go with a DLT8000. I'm sure a DLT4000 and probably a DLT7000 are okay." (The 28696A is a double-high interface device that permits the 927 to use HVD SCSI DLTs of 4000, 7000 or 8000 models.)
A simple search of the Newswire for "Jack Connor" turns up dozens of tips. Several 3000 veterans offered tributes in the wake of Gary Robillard's news about Connor's passing. "He was a master at his trade," said Tracy Johnson.
"Jack was a great guy who would always help no matter the problem, time or distance," said Bill Long. "As I moved on to different companies Jack was always there to help. He did consulting work for us when I worked for a small semiconductor company in Newark DE. He wrote the exotic interfaces we needed. Just a few years ago he helped me when I was consulting for Dow Chemical and needed help with my in-home HP 3000."
"My dear friend and colleague, a frequent contributor to this list, passed away peacefully in his sleep after a long illness," Robillard wrote. "Words cannot express how greatly he will be missed by all who knew him."
On the tenth anniversary of HP's pullout notice for the 3000, Connor summed up his philosophy about helping in the MPE community. "I'd say we've all been a pretty good human chain holding the 3000 Community together," he said. "There's indeed life after HP, and a pretty full one so far."
He was laid to rest this past Sunday, and the obituary webpage included a link to the Van Morrison song "Into the Mystic," whose lyrics include these lines.
And when that fog horn blows I will be coming home
And when that fog horn blows I want to hear it
I don't have to fear it
I want to rock your gypsy soul
Just like way back in the days of old
Then magnificently we will float into the mystic
July 23, 2015
Throwback: When IA-64's Arrival Got a Pass
During a summer 15 years ago, the reach of HP's final processor foundation became obvious. Rather than take over the computing world, the project that started as Tahoe and eventually became IA-64 was labeled as an incremental improvement. Hewlett-Packard said this was so while it started talking about IA-64's lifespan and impact. It would be a gradual change.
This story is instructive both for today's migration planning and for sustaining the homesteading of the HP 3000. Processor power doesn't matter as much as a vendor claims. The pass that HP gave IA-64 in 2000, labeling the technology as years away from the datacenter, proved that chips wouldn't make much of a difference anymore. When it comes to chip futures, the only ones that make a difference come from the timelines of Intel. HP partnered with the vendor, but it wouldn't get a marketable advantage out of the alliance.
In July of 2000, not a single IA-64 system had shipped, even though Hewlett-Packard anointed IA-64 as the successor to the PA-RISC chips that powered servers like the HP 3000. PA-RISC performance remains the leading edge of Hewlett-Packard's MPE hardware. But 15 years ago, making the leap to IA-64 processing looked essential to keeping MPE/iX competitive.
In 2000, though, the technology based on Explicitly Parallel Instruction Computing was just being dubbed Itanium. HP's Integrity brand of servers hadn't been introduced, and HP was supposed to be farming out Itanium to niche markets. The vendor's Unix servers, being sold by the same resellers who offered 3000s, ran on the same PA-RISC chips. And those chips were in no danger of being lapped by IA-64.
Up at the CNET website, an interview with HP's Duane Zitzner included a comment from HP's marketing for IA-64. In 2000, IA-64 computers were "a development environment," said Dirk Down. "You're not going to put this stuff near your datacenter for several years."
In the Newswire, we did the translation for a customer base that seemed certain that leaving IA-64 off the MPE roadmap was a bad fork in the road. Zitzner said PA-RISC would still outsell IA-64 for another five years.
His comments explain why few people in the HP 3000 division seem to think of IA-64 as anything more than a future. In one interview after another, lab experts and general managers praise the new architecture, but point out that it has little to do with meeting customer demands for performance. Now we seem to know why: the stuff won't be ready for datacenter-level performance for years.
While one analyst thought these delays might be a problem, we think they're a blessing in disguise. There's nothing so broken in today's PA-RISC that it must be replaced. And if PA-RISC's successor is still on the drawing board, that lets the 3000 lab focus. Considering how tough it is to staff development labs, nobody's engineering effort needs the distraction of having to build more than one version of an operating system at a time.
IA-64 looks like it's going to have about a 10-year history of being a future at HP, considering that it was first announced in 1994. (Of course, back then, HP was calling it Tahoe, and then Merced, and so on.) Since HP has four more generations of processors in the wings for the PA-RISC line after the PA-8500 rolls out next spring, it looks like IA-64 might have more impact on PowerPoint slides than in any HP 3000 for the next five years.
Like HP, we were just guessing on when IA-64 computing would be ready to assert itself in the datacenters. We couldn't see a future where HP would lose faith in the 3000 customer and the MPE ecosystem — not any more than HP could see that IA-64 would become more of a boutique for computing instead of the superstore the vendor imagined five years earlier.
Only two generations of PA-RISC were ever produced that pulled ahead of the top 3000 processors. The 8800 and 8900 would both work in what HP was still calling the HP 9000. The 8800 arrived in 2004's 9000s, mostly being the driver for Superdome. The 8900 showed up in 2005's HP servers.
IA-64, when it was called the Merced project, was supposed to arrive by the end of the 20th century and become the replacement for x86 computing. Instead, HP's partner Intel doubled down on the x86 to make Xeon, an extension to IA-32 created when IA-64 took longer than expected. Intel didn't give IA-64 a pass. It passed it by.
July 22, 2015
Replacement rides on software selection
MB Foster's latest webinar on Applications Portfolio Management included an estimate that 80 percent of applications in a migration can be replaced. Migrating app code to a new platform is usually only a solution for 15 percent of the software in a 3000 environment, and an unlucky 5 percent of applications will have to be rebuilt.
So if 4 out of every 5 apps should be replaced, what steps make up the best practices for software selection? The webinar identified nine.
- Add a detailed current workflow to the software assessment.
- Look at three to seven packages for each replacement
- Compare the selections to the app, to make sure you have no orphaned functionality
- Understand the business process re-engineering tasks. CEO Birket Foster said "If you're changing applications, there will be certain rules where there's a different way to do things, and people will have to be retrained."
- Make all data clean and ready, moving department by department
Then there's a step to plan for surround code, which can sometimes be the trickiest to replace. There may be interfaces plugged into your current applications, Foster said, "that feed data into or out of the application. You have to find those and see how they integrate into the big picture — because if you pull the application out of the middle, these are the loose wires. When you put the new application in, you have to re-integrate those wires."
After all of that work, there's a test plan, a go-live plan, and a plan for decommissioning the 3000.
"You have to have a plan if the computer is a system of record," Foster said. "Systems of record tend to require specific care and treatment when you're decommissioning them."
The webinar also listed a set of common mistakes during a replacement project.
- Not gathering enough information from each department
- Going straight to the software demo stage "without actually building a scorecard," Foster said. "It's hard to do a comparison of 3-7 packages without an objective scorecard. People from each different workflow should participate, to help the weighting of different scorecard items."
- Not having a broad enough Request For Information process; a thorough RFI can reduce the need for details in a subsequent Request For Proposal.
- Not appointing a software selection liaison — a person high up enough in the organization to bring all the company's stakeholders together
- Skipping conference room pilots, the sessions in which workflows are demonstrated during a boardroom-level meeting. "If you skip those, nobody has an idea of what they're about to see, and so they don't ask all the questions they should," Foster said. "It's not all about price. A lot of the questions have to do with functionality."
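Foster's scorecard advice lends itself to a short illustration. The sketch below is hypothetical throughout — the criteria, weights, scores, and package names are invented — but it shows how weighted ratings from department stakeholders turn a comparison of several packages into an objective ranking rather than a demo-driven impression.

```python
# Hypothetical weighted scorecard for comparing replacement packages.
# Weights are agreed on by stakeholders before any vendor demos;
# each package is then rated 1-5 on every criterion.

CRITERIA_WEIGHTS = {
    "order_entry_fit": 0.30,
    "reporting": 0.20,
    "integration_hooks": 0.25,
    "vendor_support": 0.25,
}

def score_package(ratings):
    """Weighted total for one package; ratings are 1-5 per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

packages = {
    "Package A": {"order_entry_fit": 4, "reporting": 3,
                  "integration_hooks": 5, "vendor_support": 2},
    "Package B": {"order_entry_fit": 5, "reporting": 4,
                  "integration_hooks": 2, "vendor_support": 4},
}

# Rank the candidates by weighted score, best first
ranked = sorted(packages, key=lambda p: score_package(packages[p]),
                reverse=True)
for name in ranked:
    print(f"{name}: {score_package(packages[name]):.2f}")
```

Changing the weights — say, a manufacturing shop that values integration hooks over order entry — can reorder the ranking, which is exactly why each workflow's people should participate in the weighting.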
July 21, 2015
User group takes virtual tack for conference
A vendor with ties back to the 1980s of the HP 3000 world took several steps today into the new world of virtual user conferences. The education and outreach at the Virtual Conference & Expo came in part from Fresche Legacy, formerly Speedware, but it was aimed at that company's latest prospects: IBM i enterprises. Advances in long-form remote training, with on-demand replays of tech talks, gave the IBM COMMON user group members of today a way to learn about the IBM i without booking time away from workplaces.
The offerings on the day-long agenda included talks about vendors' tools, as well as subjects like "Access your IBM i in the modern world with modern devices." Customer-prepared talks were not a part of today's event; that sort of presentation has become a rare element in the conference experience of 2015. But some of the best HP 3000 talks at the Interex user group meetings came from vendors, lifted up from the ranks of users.
The virtual conference of today won't be mistaken for the full-bore COMMON Fall Conference & Expo of this fall. That's a three-day affair in Fort Lauderdale, complete with opening night reception and conference hotel rates at the Westin. A few days in Florida could be a perk for a hard-working IT manager, even in early October.
But the practices of remotely educating users about enterprise IT have become polished by now. Wednesdays in the 3000 world have often included a webinar from MB Foster, guiding managers in subjects like Application Portfolio Management or data migration. Those are more dynamic opportunities, with individuals on an interactive call using presentation software that includes a Q&A element. They also cover skills that are more essential to the migration-bound customers — although data migration skills promise great potential payback for any IT pro.

But whether it's on-demand talks bolstered by chat requests at the COMMON event, or a phone and demo-slide package at a Wednesday webinar, training doesn't equal travel anymore. A three-day event would've looked small to the HP Interex user group member of the 1990s. Over the final years of that user group's lifespan, though, even a handful of days away to train and network at a conference became an on-the-bubble choice.
Making a migration from a legacy platform like the 3000 opens up the opportunity to increase the level of learning in a career. But even legacy computing like the IBM i can trigger reasons to train and explore fresh features. It's another reminder that what matters to a vendor is not necessarily the strength of a legacy server's ecosystem, but the stickiness and size of the installed base.
IBM's i still counts six figures' worth of installed customers, and many have links to other IBM systems. IBM could afford to take care of an established base of proprietary computer systems. The independent third parties like MB Foster and others that remained after HP exited have been left to care for 3000 users on the move, and otherwise.
July 20, 2015
The Weekend a User Group Went Lights-out
Ten years ago this week, the Interex user group went dark in both a digital and literal way. The organization that was launched 30 years earlier to serve HP 3000 customers took down its website, shuttered its servers, and shut off the lights to lock up its Sunnyvale, Calif. offices. A bankruptcy was in its opening days, one that would take more than two years to wind its way through Federal Court. But the immediate impact was the loss of the tent-pole gathering for the 3000 community, that year's annual HP World conference.
Millions of dollars in hotel guarantees, prepaid advertising, and booth exhibitor rents went unpaid or unreturned. It was more than the loss of an event that had a 28-year history of joining experts with customers. The Interex blackout turned off a notable light that might've led to a brighter future for a 3000 community still looking for answers and contact with vendors and expertise.
Looking back from a decade later, signs were already evident for the sudden demise of a multi-million dollar organization with 100,000 members of some pedigree. Tens of thousands of those members were names in a database and not much more, places where the Interex tabloid HP World could be mailed to generate advertising revenues. A core group of users, devoted to volunteering and rich with tribal, contributed knowledge about HP's servers, was far smaller.
Interex was all-in on support and cooperation with the Hewlett-Packard of 2005, but only up to a point on a crucial user group mission. The group was glad to re-label its annual conference after the vendor, as well as that monthly tabloid. HP held the rights to both of those names once the group made that transition. There was an HP liaison to the group's board for decades. The key managers in the 3000 division made their first-person 2002 articles explaining HP's 3000 exit available to the Interex publications. Winston Prather wrote "it was my decision" on pages published by Interex.
But in 2004, HP sowed the seeds of change that Interex watered with a no-collaboration decision. User groups from the Digital VMS community agreed to cooperation with HP on a new user conference, one to be funded by HP. Interex's directors polled the member base and chose to follow an independent route. The Interex board would stick to its plans to exclusively produce the next HP World. Advocacy was at stake, they said, and Interex's leaders believed the group would need its own annual meeting to keep asking HP to do better.
HP began to sell exhibitor space for an HP Technology Forum against the Interex HP World booths. Just before San Francisco's Moscone Center wanted its final HP World payment — and a couple of weeks after exhibitors' payments were in hand — the tune the 3000 world heard was Boom-boom, out go the lights.

The user group had struggled to maintain a financial balance in the years following the Y2K ramp-up, according to one of its directors, an era when attendance at the group's annual shows fell steadily. Membership figures for the group, inflated to six figures in press releases during 2004, included a very broad definition of members. Hotels were reserved for two years in advance, with payments made by the group and still outstanding for millions of dollars.
One conference sponsor, Acucorp, was told by an Interex ad rep that the staff was led to the door. A user community labored mightily to recover contributed white papers, articles, and software from a company that was selling conference memberships right up to July 17.
Ten years ago on this very date, HP was already at work gathering up the orphaned attendees who held prepaid tickets and registrations as well as exhibitors with no show to attend. HP offered a complimentary, comparable registration to the Technology Forum for paid, registered attendees of HP World 2005. HP also offered discounted exhibition space at its Forum to "non-HP competitors" exhibiting at or sponsoring HP World 2005. If you were IBM, or EMC, and bought a booth at the Interex show, you had no recourse but to write off the loss.
The shutdown was not orchestrated with the cleanest of messages. Interex.org, a website archived hundreds of times by the Internet Wayback Machine since 1996, posted a report that was the equivalent of a busy signal.
It is with great sadness, that after 31 years, we have found it financially necessary to close the doors at Interex. Unfortunately our publications, newsletters, services and conference (HPWorld 2005) will be terminated immediately. We are grateful to the 100,000 members and volunteers of Interex for their contributions, advocacy and support. We dearly wish that we could have continued supporting your needs but it was unavoidable.
Within a week, planning from the 3000 user community was underway to gather together any customers who were going to the HP World venue of San Francisco anyway -- since they were holding those nonrefundable tickets, or had already paid for hotel rooms.
Companies go broke every day, victims of poor management, bad luck, or unavoidable catastrophe. Few organizations can avoid closing, given enough time. But for a founding constituency that based its careers on a server that rarely died, the sudden death of the group that'd been alive as long as the 3000 was striking, sad — and a mark of upcoming struggles for any group built to serve a single vendor's customer base. Even a decade earlier, according to former Interex chair Jane Copeland, a proposal to wrap up the group's mission was offered in an ever-growing heterogeneous computing world.
“When I left, I said they ought to have a dissolution plan,” said Copeland, owner of API International. “The former Executive Director of Interex Chuck Piercey and I tried to get the board to do it — because we didn’t see the purpose of a vendor-specific group in an open systems market.”
A change in HP’s CEO post sealed the user group’s fate, she added. The arrival of Carly Fiorina shifted the vendor’s focus away from midrange computer users such as HP 3000 and HP 9000 customers.
“I think HP is probably the cause of this more than anything,” Copeland said. “As soon as [CEO] Lew Platt left HP, that was the end of Interex. Carly Fiorina wasn’t interested in a user group. She just wasn’t user-oriented. Before Fiorina, HP had one of the most loyal customer bases in the industry. She did more to kill the HP brand than anyone. She killed it in such a way that the user group’s demise was guaranteed as soon as her reorganization was in place. She didn’t want midrange systems. All she was interested in was PCs.”
Another HP 3000 community member saw HP's declining interest in the server as a signal the user group was living on borrowed time. Olav Kappert, whose IOMIT International firm has served 3000 customers for nearly 30 years, said HP looked eager to stop spending on 3000-related user group events.
"HP would rather not spend another dime on something that has no future with them,” he said. “It will first be SIG-IMAGE, then other HP 3000 SIGs will follow. Somewhere in-between, maybe even Interex will disappear."
July 17, 2015
Do Secure File Transfers from the 3000
I'm trying to use ftp.arpa.sys to FTP a file to a SFTP server and it just hangs. Is there a way to do a secure FTP from the HP 3000?
Brian Edminster replies:
The reason that MPE's FTP client (ftp.arpa.sys) fails is that, as similar as they sound, FTP and SFTP are very different animals. Fortunately, there is an SFTP client available for the 3000 -- the byproduct of work by Ken Hirsh and others.
It used to be hosted on Ken's account on Invent3K, but when that server was taken out of service, so was Ken's account. As you've no doubt already noticed, it's available from a number of sources (such as Allegro). I'd like to highlight another source: www.MPE-OpenSource.org
Edminster goes on to explain he administers that site, as well as puts together the 'pre-packaged' install available there. It's in a single store-to-disc file in Reflection 'WRQ' format, making it easy for the majority of sites to retrieve and use.
I have a customer that's been using SFTP daily as part of their PCI compliance solution for several years. They push and pull data hourly from dozens of Point-of-Sale systems all over the country, and have moved lots of data this way.
The biggest caveat from that customer's implementation is that if you're moving data over a WAN, SFTP seems to be more sensitive to jitter and latency issues than conventional FTP. We ended up having to upgrade a couple of their more anemic 'last mile' circuits to accommodate that.
In all other respects, it’s quite a robust solution, and can be tightly integrated with existing legacy apps. I know; I've done it.
If you have any questions about how to use the pre-packaged install -- or how to get around any limitations you might run into -- don't hesitate to contact me. I've used this on dozens of systems over the last decade, and have transferred many, many gigabytes of data with it.
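For sites scripting hourly pushes and pulls like the one Edminster describes, an OpenSSH-derived sftp client can run from a batch command file instead of interactively. The host, user, and file names below are hypothetical, and flag support may vary by build — the MPE port may differ from a stock OpenSSH client, so check your client's help output first.

```
# commands.txt -- hypothetical batch file for a scripted transfer,
# run (on OpenSSH-style clients) with:
#   sftp -b commands.txt mpeuser@sftp.example.com
cd /inbound
put DAILYRPT.txt
ls -l
bye
```

In OpenSSH builds, batch mode aborts on the first failing command and exits nonzero, which makes the transfer straightforward to wrap in a job that checks the result.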
July 16, 2015
Bringing the 3000's Languages Fourth
Documenting the history and roots of IMAGE has squirted out a stream of debate on the 3000 newsgroup. Terry O'Brien's project to make a TurboIMAGE Wikipedia page includes a reference to Fourth Generation Languages. His sentence below that noted 4GLs -- taken as fact by most of the 3000 community -- came in for a lively debate.
Several Fourth Generation Language products (Powerhouse, Transact, Speedware, Protos) became available from third party vendors.
While that seems innocent enough, retired 3000 manager Tom Lang has told the newsgroup there's no such thing as a Fourth Generation of any computer language. "My problem with so-called Fourth Generation Languages is the use of the term 'Language' attached to a commercial product," he wrote. The discussion has become a 59-message thread already, threatening to be the longest discussion on the newsgroup this year.
Although the question doesn't seem to merit debate, it's been like catnip to some very veteran developers who know MPE and the 3000. The 4GL term was probably cooked up by vendors' product managers and marketing experts. But such languages did deliver value beyond third-generation languages like COBOL. The term has everything to do with advancing developer productivity, and the idea of generations was an easy way to explain that benefit.
In fact, Cognos -- the biggest vendor of 4GLs in the 3000 world -- renamed its Powerhouse group the Advanced Development Tools unit, using ADT instead of 4GL. This was largely because of the extra value of a dictionary associated with Powerhouse. The dictionary was offered up as a distinction of a 4GL by Birket Foster. Then Stan Sieler, who's written a few compilers including SPLash!, a refreshed version of the 3000's SPL, weighed in with some essentials.

One way to measure a language is to see if it's got a BNF (Backus Normal Form), one of two main notation techniques for context-free grammars. According to Wikipedia -- that resource again -- a BNF "is often used to describe the syntax of languages used in computing, such as programming languages." Sieler said that the refreshed SPLash! had a BNF for a while. Then it didn't. And really, languages don't need one, he added.
The list of the 3000's 4GLs is not a long one. HP dubbed Allbase as a 4GL at the same time that name signified a 3000 database alternative. It was a tool to develop more rapidly, HP said. Transact appears on some 4GL lists for MPE, but it's more often called a 3.5 GL, as is Protos. Not quite complete in their distinctions, although both have dictionaries. These languages all promised speed of development. They rose up in an era when object-oriented computing, with reusable elements, was mostly experimental.
Foster explained what made a 4GL an advanced tool.
The dictionary made the difference in these languages, allowing default formatting of fields, and enforcing rules on the data entry screens. I am sure that a good Powerhouse or Speedware programmer can out-code a cut and paste COBOL programmer by about 10 to one. It also means that a junior team member is able to code business rules accurately, since the default edits/values come directly from the dictionary, ensuring consistency.
Sieler outlined what he believes makes up a language.
We all know what a 4GL is, to the extent that there’s a ’cloud’ / ’fuzzy shape’ labelled “4GL” in our minds that we can say “yes or no” for a given product, program, language, 4GL, package, or tarball. And we know that Speedware, etc., fit into that cloud.
Does a language have to have a published grammar? (Much less one published by an international standards organization?) Hell no! It’s better if it does, but that’s not only not necessary, but the grammar is missing and/or incomplete and/or inaccurate for many (probably most) computer languages, as well as almost all human languages (possibly excluding some post-priori languages). I speak as a compiler author of many decades (since about 1973).
Our SPLash! language (similar to HP’s SPL/V) had a BNF — at the start. (Indeed, we think we had the only accurate BNF for SPL/V.) But, as we added things to the language, they may or may not have been reflected in the BNF. We tried to update the manual, but may not have always been successful … if we got the change notice updated, I was happy.
Adding the word "product" behind 4GL seems to set things in perspective. O'Brien offered his summary of the 3000's rapid languages.
Speedware, Powerhouse, and Protos all had components (Powerhouse Quick, Speedware Reactor) with a proprietary language syntax that offered Assignment, IO, and Conditional Logic. As such, they meet the minimum requirements to be referenced as a computer language. TurboIMAGE has a syntax for specifying the database schema, but does not have any component that meets the IO, Assignment, and Conditional Logic requirements, so it does not clear that minimum bar.
Speedware and Powerhouse have had similar histories, both offered as ADT products. But the companies that control them have diverged in their missions. PowerHouse is now owned by Unicom Systems. Speedware's focus is now on legacy modernization services and tools, although its own 4GL is still a supported product.
There's an even more audacious tier of languages, one that the HP 3000 never saw. Fifth-generation languages, according to Wikipedia, "make the computer solve a given problem without the programmer. This way, the programmer only needs to worry about what problems need to be solved and what conditions need to be met, without worrying about how to implement a routine or algorithm to solve them." Prolog is one example of this fifth generation. But even Wikipedia's editors are wary of bringing forth a fifth generation.
July 15, 2015
How to Keep Cloud Storage Fast and Secure
Editor's Note: HP 3000 managers do many jobs, work that often extends outside the MPE realm. In our series of Essential Skills, we cover the non-3000 skillset for multi-talented MPE pros.
By Steve Hardwick, CISSP
One of the many cloud-based offerings is storage. It moves data from the end device to a remote server that hosts massive amounts of hard disk space. While this saves local storage, what are some of the challenges and risks associated with this type of account?
Cloud data storage applications have been compromised through different weaknesses. Firstly, there is the straight hack. The hacker gains administrative access into the server containing the data and then can access multiple user accounts. The second one is obtaining a set of usernames and passwords from another location. Many people use the same usernames and passwords for multiple accounts. So a hack into an email server can reveal passwords for a cloud storage service. What are the ways to defend against this level of attack?
Encryption is always a good option to protect data from unauthorized users. Many service providers will argue that they already provide encryption services. However, in a lot of cases this is what is called bulk encryption. The data from various users is bundled together in a single data store. Then the whole data store is encrypted with the same password. This gives a certain level of protection, for example if the disk is stolen. But if administrative access is gained, these systems can be compromised. A better solution is to choose a service that offers encryption at the account level.

Another option is to encrypt the data before it is stored. This is probably the safest method, as the encryption application is not part of the cloud server, and neither is the password. There is a penalty in performance and time when creating and restoring the file, since it has to be encrypted and decrypted. Today's computer systems normally make short work of this task.
Finally, there is a common misconception that an encrypted file is bigger than the original. For good encryption they should be about the same size. The only challenge with any encryption is to make sure the password is safe.
If you use the same username and password everywhere, the best solution is simply to stop. But the difficulty lies in having 20 different usernames and passwords and remembering them all. One option is to let the browser do the remembering. Browsers have the option of remembering passwords for different websites. The browser creates its own local store of the passwords. However, if the computer's hard drive crashes, so does the password store.
The next option is to use an online password account. The bad news is that these have the same weakness as other types of online storage. LastPass was recently hacked, so many users were worried that their password lists were compromised. I use a password vault that encrypts the vault file locally. That file can then be stored in online data storage safely. Plus, if you choose the right password application, the vault is shared across multiple devices. This way, a different username and password can be used for each account and still be kept in a secure yet accessible location.
Online storage, offline access
Most of the time many of us have access to the cloud. But there are times when I would like to have access to my data, but I don't have Internet access. The best example of this is on the plane. Although Internet service is available on many planes, not everyone has access. So it is good to choose a service that has a client application to synchronize the data. This will allow copies of the same file to be kept locally and in the cloud. This can be important when looking at mobile solutions.
In many cases, mobile storage is preserved by moving the data into an online storage location. Storing all the music files in the cloud, and then finding that they are not available offline, can be very infuriating on a plane ride.
Compression to be free
Free online storage services are limited to a set amount of storage. One way to get around this is to use data compression. Most raw data files can be compressed to some extent. But bear in mind that most media formats, such as mpeg, mp4, or jpeg, have already been created using compression. Many other files, though, can be compressed before they are stored. Some applications — for example back-up apps — will give the user a choice to compress the file before it is stored. Not only does this reduce the amount of space the data takes in the online storage, it also makes uploads and downloads faster.
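A quick sketch with Python's standard-library gzip module shows the gain on repetitive text. The log-style sample data here is invented; an already-compressed jpeg or mp4 run through the same call would shrink little, if at all.

```python
import gzip

# Hypothetical log-style data: highly repetitive, so it compresses well.
raw = ("2015-07-15 12:00:01 status=OK path=/orders\n" * 2000).encode()
packed = gzip.compress(raw)

# Repetitive text typically shrinks by an order of magnitude or more
ratio = len(packed) / len(raw)
print(f"{len(raw)} bytes -> {len(packed)} bytes ({ratio:.1%})")
```

Uploading the packed form saves both quota and transfer time; the trade-off is a decompression step whenever the file is pulled back down.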
July 14, 2015
A Fleet of Trucks That Couldn't Fell MPE
Out on the HP 3000 newsgroup, Tracy Johnson inquired about the state of the 3000's and MPE's durability. Johnson, who's worked with OpenMPE in the past while managing 3000s for Measurement Specialties, was addressing the Truck Factor for the 3000 and its OS. "In what year did MPE reach the Truck Factor?" he asked, referring to the number of developers who'd have to get hit by a truck before development would be incapacitated.
MPE long ago stopped counting the names of such authors. Development ended for the OS when HP retired or reassigned its lab staff during 2009. But the tribal operating and administrative knowledge of the OS has a high truck factor, if you account for global connectivity. Dozens of MPE experts who are known to the community would have to fall under the wheels of trucks for MPE's operational knowledge to expire.
"I honestly don't think it applies any longer to MPE," Art Bahrs commented on the list, "as MPE has now stabilized and has a support base in people like Stan Sieler, Birket Foster, Donna Hofmeister, Neil Armstrong, Alfredo Rego and such. I know I'm forgetting lots more."

"Now if there aren't people out there who are willing to learn new "old" things," Bahrs added, "then MPE will fade out as this community fades away."
One advantage to moving out of the active development phase of life is that a technology becomes stable. It won't acquire new capabilities, and newer technologies will struggle to be relevant in an environment like MPE. But newer wonders like netty, a "client server framework which enables quick and easy development of network applications such as protocol servers and clients," have a TF of 1. If just one developer got taken out by a truck, more than half the GitHub code for netty would be orphaned.
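One common way to pin down that number is to count how many of a project's most prolific maintainers must be lost before more than half its files are orphaned. The toy sketch below uses that definition; the file names, authors, and the simplifying assumption that each file has a single primary author are all hypothetical.

```python
from collections import Counter

# Hypothetical mapping of source files to their primary authors.
# One developer owning most of the files yields a truck factor of 1.
primary_author = {
    "channel.c": "dev_a", "buffer.c": "dev_a", "codec.c": "dev_a",
    "transport.c": "dev_a", "handshake.c": "dev_b", "util.c": "dev_c",
}

def truck_factor(ownership):
    """Smallest number of top authors whose loss orphans >50% of files."""
    counts = Counter(ownership.values())
    lost, factor = 0, 0
    for _, n in counts.most_common():
        if lost > len(ownership) / 2:
            break
        lost += n
        factor += 1
    return factor

print(truck_factor(primary_author))
```

With dev_a owning four of six files, losing that one developer orphans more than half the code, so the factor is 1 — the situation the netty example describes.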
Birket Foster has long used the examples of a rogue bus or a lottery victory to illustrate the delicate state of MPE knowledge at many customer sites. Winning a lottery and immediately retiring, or meeting your end under the wheels of a bus (or truck) could start a local demise of MPE practices. Finding seasoned help to take over in such a tough circumstance would not be impossible. But recovering the knowledge of custom apps will be a challenge for any company who doesn't document crucial applications and practices.
The senior status of MPE among technologies evoked another use of the truck metaphor for Scott Gates. Commenting on the newsgroup, he evoked the history of the 3000 at his school district.
For me, MPE and the HP 3000's "Truck Factor" has been that it's like an old pickup truck — you put the key in, turn it over, and it's running. In my four years at Bellefonte, we had one unscheduled downtime when an original system drive failed after almost 10 years of constant use.
July 13, 2015
Celebrating a 3000 Celebrity's (im)migration
Eugene Volokh is among the best examples of HP 3000 celebrity. The co-creator of MPEX (along with his father Vladimir) entered America in the 1970s, a Jewish immigrant who left Russia to arrive with his family as a boy of 7, destined for a notable place on America's teeming shores.
Those teeming shores are associated with another American Jew, Emma Lazarus, whose poem including that phrase adorns a wall of the Statue of Liberty. More than 125 years of immigrants have passed by that monument, people who have created some of the best of the US, a fact celebrated in the announcement of this year's Great Immigrants award from the Carnegie Corporation. Eugene is among the 38 Pride of America honorees appearing in a full-page New York Times ad (below, in the top-right corner) from over the Independence Day weekend.
Those named this year include Saturday Night Live's creator Lorne Michaels, Nobel laureate Thomas Sudhof, and Pulitzer Prize novelist Geraldine Brooks, along with Eugene -- who's listed as a professor, legal scholar, and blogger. All are naturalized citizens.
Eugene's first notable achievement came through his work in the fields of MPE, though, computer science that's escaped the notice of the Carnegie awards board. Given that the success of Vesoft (through MPEX and Security/3000) made all else that followed possible, a 3000 user might say that work in MPE brought the rest of the legal, scholarly, and blogging (The Volokh Conspiracy) achievements within his grasp.

An entry in the Great Immigrants website sums up what's made him an honoree:
A law professor at the UCLA School of Law, Eugene Volokh is cofounder of the blog, The Volokh Conspiracy, which runs on the Washington Post’s website (which is independent of the newspaper). Before joining UCLA, where he teaches a myriad of subjects, including free speech law and religious freedom law, Volokh clerked for Justice Sandra Day O’Connor on the U.S. Supreme Court. Volokh was born in Kiev, Ukraine, when it was still part of the Soviet Union, and immigrated to the United States at age seven.
It's not difficult to find Eugene in the firmament of the American culture, with articles in the Post, the New York Times op-ed page, and interviews on TV networks and National Public Radio. But each time a 3000 user starts up MPEX, they light up the roots of somebody who migrated long ago, in an era when the 3000 itself was a migration destination, a refuge from the wretched existence of mainframes. We pass on our congratulations.
July 10, 2015
User group manufactures new website
CAMUS is the Computer Applications for Manufacturing User Society that now has a fresh website to go with its quaint name. While Computer Aided Manufacturing pretty much describes everything outside of the tiny Chinese enterprises doing piecework for the world, CAMUS is unique. It's devoted to a significant interest of the remaining HP 3000 homesteaders. Manufacturing remains an HP 3000 heartland.
Keeping a website up to date is no small feat. In the face of declining use of HP 3000-related products, some websites have disappeared. The legendary Jazz server from the Hewlett-Packard labs went dark long ago. The full retreat of HP's 3000 knowledge seems more obvious all the time. The old www.hp.com/go/e3000 address, once HP's portal for things MPE-related, now returns the message above.
Which is why the camus.org update is heartening. Terri Glendon Lanza reports that the site serves MANMAN, MK, MAXCIM, and migrated manufacturing companies.
Members will now be able to edit their profiles and search the membership for others with similarities such as geographics, software modules and platforms, or associate supplier services.
Our free membership still includes upcoming webinar meetings, connecting with 'birds of a feather', a listserv for questions to the community, and a photo gallery of former events.
Society members receive access credentials to a members-only section. Just about anybody can become a member. Pivital Solutions and Stromasys are Associate members, which will tell you about the 3000 focus the group can count upon.
July 09, 2015
Throwback: When IMAGE Got Its SQL Skin
During the current Wikipedia project to document IMAGE, Terry O'Brien of DISC asked where he might find resources that point to IMAGE facts. Wikipedia is all about facts that can be documented by outside sources, especially articles. O'Brien was searching for InterACT articles, perhaps thinking of the grand series written by George Stachnik for that Interex user group magazine.
While the user group and its website are gone, many of those articles are available. 3K Associates has an archive of more than a dozen of them, including several on IMAGE. (That website has the most comprehensive collection of MPE and 3000 lore, from tech how-to's to an HP 3000 FAQ.) As part of his introductory article in the database subset of The HP 3000 For Novices, Stachnik notes how IMAGE got its SQL interface, as well as why it was needed.
Most new client-server applications that were developed in the 1980s made extensive use of the SQL language. In order to make it possible for these applications to work with the HP 3000, HP literally taught TurboIMAGE a new language--the ANSI standard SQL.
The resulting DBMS was named IMAGE/SQL -- which is the name that is used today. IMAGE/SQL databases can be accessed in two ways: either using the traditional proprietary interfaces (thus protecting customers' investments in proprietary software) or using the new industry standard SQL interface (thus enabling standard client-server database tools to access the data stored on HP 3000s).
The enhanced IMAGE came to be called TurboIMAGE/SQL, to fully identify its roots as well as its new prowess. Stachnik wrote the article in an era when he could cite "new technologies such as the World Wide Web."
HP removed many of the restrictions that had pushed developers away from the HP 3000, making it possible to access the HP 3000's features (including its database management system) through new industry standard interfaces, while continuing to support the older proprietary interfaces. In the final months of the 20th century, interest in the IMAGE database management system and sales of the HP 3000 platform are both on the rise.
That rise was a result of user campaigning that started in earnest 25 years ago this summer, at an Interex conference. Old hands in this market call that first salvo the Boston Tea Party because it happened in a Boston conference meeting room. More than nine years later, Stachnik wrote that "interest in the IMAGE database management system and sales of the HP 3000 platform are both on the rise."

There are many places to discover the history and deep, elegant engineering of IMAGE. Adager's website contains the greatest concentration of writing about IMAGE. It's possible that references from adager.com articles will make their way into the new Wikipedia entry. They wouldn't be relevant without that rebellion of 25 years ago, because HP wanted to unbundle IMAGE from its pairing with the 3000. The users wouldn't permit it, bad press from the meeting ensued, and an IMAGE-free HP 3000 became much harder to purchase.
SQL arrived about three years later. The story had a happy ending when Stachnik wrote his article.
Any HP 3000 application that used IMAGE/3000 (and virtually all HP 3000 applications did) was locked into the HP 3000 platform. It couldn't be ported to another platform without some fairly major rework. This was almost the kiss of death for the HP 3000 in the open-systems-obsessed 1990s. In fact, many platforms did "go under" in the UNIX shakeout that took place in the early part of the decade.
Many industry observers expected that Hewlett-Packard would choose to jettison its proprietary HP 3000 platform in favor of its faster growing younger brother, the UNIX-based HP 9000. Fortunately, these observers did not understand a very basic fact about the company.
HP was (and is) very focussed on protecting its customers' investments. Instead of jettisoning the HP 3000 platform, the company chose to invest in it.
Whatever HP intended for the fate of the computer, the investment in SQL remains a way to keep the heartbeat of the 3000 pumping data to the world of non-MPE machines.
July 08, 2015
Freeing HP's Diagnostics Inside the 3000
When HP officially ended its HP 3000 support, the vendor left its diagnostics software open for use by anybody who ran a 3000. Throughout the years when HP sold 3000 support, CSTM needed a password that only HP's engineers could supply. But the CSTM diagnostics tools started to run in 2011 without any HP support-supplied password.
However, managers need a binary patch to free up the diagnostics. Support providers who've taken over for HP know how to enable CSTM. The community has a former Hewlett-Packard engineer to thank, Gary Robillard, for keeping the door to the diagnostics open. Robillard says he is the engineer who last worked on CSTM for MPE/iX when he was a contractor at HP.
A 3000 site must request a patch to get these expert tools working. HP arranged for 3000 sites to get such patches for free at the end of 2010. We tracked the procedure in a NewsWire story, since the HP link on how to get these patches, once on the old division's webpages, has gone dead.
Versions of CSTM installed with the ODINX19A patch still need an additional binary fix, which Robillard created. He explains:
Versions of CSTM [patched] with ODINX19A or ODINX25A allow the expert tools with no licensing, but you still have to issue the HLIC command.
If you install ODINX25A/B/C (6.5, 7.0, 7.5) you won't need to do anything except issue the HLIC command with any password. The HLIC command might say it was not accepted, but the license is activated anyway.
When HP halted all its support of the 3000, Robillard said his patch corrects the problem with ODINX19A -- and gives 3000 managers access to these system diagnostics -- for servers running the 6.5, 7.0, or 7.5 versions of CSTM.
If you have installed ODINX19A, you need to do the following:
Logon as MANAGER.SYS. It's safest to create an input file to sompatch.
1. Run editor.pub.sys
2. Add the following three lines EXACTLY. (The sompatch will only work if the instruction at offset 268 matches 86a020c2; otherwise the message "Error: Old value does not match" is displayed and no changes are made.)
Here are the contents of BINPCHIN file (You will want to copy and paste these):
~~~~~The 3 lines are below~~~~~~~
; Fix problem in DIAGMOND after 12/19/2010
modify ms_init_manage_sys + 268,1 86a020c2|08000240
~~~~The 3 lines are above~~~~~~~~
• Make sure DIAGMOND is not running (run STMSHUT.DIAG.SYS)
• copy /usr/sbin/stm/uut/bin/sys/diagmond,DIAGMOND
• run sompatch.pub.sys;stdin=BINPCHIN;INFO='DIAGMOND'
• copy DIAGMOND,/usr/sbin/stm/uut/bin/sys/diagmond;YES
• Restart DIAGMOND (run STMSTART.DIAG.SYS)
After a few minutes, a "SHOWPROC 1;TREE;SYSTEM" should show the DIAGMOND process, and either the mapping processes, or memlogd, diaglogd and maybe cclogd (on A/N Class 3000s only).
July 07, 2015
Migrated systems ready for app portfolios
Once an HP 3000 is migrated, its mission-critical applications are ready to join a wider portfolio of corporate IT assets. Managers who want a place at the boardroom table have learned to place a valuation on these resources. Many of them gained their value while working as MPE-based software.
Studies show that managers spend 80 percent of the IT budget maintaining their current assets. If you're forced to do anything radical, you run into real issues, and then overrun your budget. At most companies, the IT budget is set at the operating level.
Migration can be a radical step. But the duty of an IT manager who oversees a 3000 is to keep track of what is productive. It’s not about the migration, it’s about the whole portfolio. You must assess the 3000’s risk versus the rest of the applications in the portfolio.
MB Foster is covering the high-level issues for APM in a Webinar on July 8 (tomorrow) starting at 2 PM Eastern time. Birket Foster's team says that a successful engagement to implement APM should yield a defined inventory and an action plan specific to your needs, along with the business value, a desired strategic landscape, and technical conditions for each application.

Foster has said that migrating sites should consider the share of budget that a move to Windows requires. "There are lots of people who have never managed where they spend their money," he's said. "So APM offers the chance for some consciousness-raising going on. Sometimes, there's also the possibility that the senior management team doesn’t understand what their investment in IT should be."
Most important in the APM strategy? The concept of getting executive buy-in on IT projects by showing the applications' asset valuation. Just like a portfolio of stocks, or a stringer full of fish, the applications running HP 3000s can be assayed and then assigned the resources to maintain them — or tossed back to start over again.
July 06, 2015
Work launches on TurboIMAGE Wiki page
History is a major element in the HP 3000's everyday life. A computer that received its last vendor-released enhancement in 2009 doesn't demand much tracking of new developments. But a serious chronicle of its features and powers is always welcome for homesteading customers. A new effort on Wikipedia will help one of its longer-standing database vendors, one who's moved onward to Windows.
Terry O'Brien still holds management reins at DISC, makers of the Omnidex indexing tool for TurboIMAGE. He's begun a distinct entry on Wikipedia for the database that's been the heartbeat of MPE almost since the server's beginning. O'Brien is enlisting the memory of the user community to take the page from stub status to full entry. "My original intent was to create an Omnidex page, since DISC is ramping up marketing efforts in the Windows and Linux space for Omnidex 6.0," he said.
During my ramp up within Wikipedia, I noticed the TurboImage article had little information and had no cited references. Although I have been a heavy utilizer of Wikipedia the past several years, I had never looked behind the covers. Wikipedia has a rich culture with a lot of information to digest for new authors. It is a bit daunting for new authors.
I originally was just going to add some general information and mention Fred White. Needing to cite references led me to an article Bob Green wrote on the history of the HP 3000 as well as numerous other articles from Robelle that I am citing. That led me to articles on 3000 NewsWire, so thanks Ron for your prolific prose on all things HP 3000.
Journalism, however, is not the best entry point for a Wikipedia entry. The most dispassionate prose conceivable is best-suited for Wikipedia. Think of software manual language and you're closest to what's accepted. A broad-interest topic like yoga gets a good deal more Wiki Editor scrutiny than a chronicle on a minicomputer's database. That doesn't mean there's not a wealth of accuracy that can be supplied for the current TurboIMAGE stub, however. O'Brien is asking for help.
His posts to the 3000 newsgroup include such a request. "I also need to solicit other unbiased parties to collaborate. And what better place to get feedback on TurboIMAGE than from HP3000L!"
"So if there are any Wikipedia authors interested in added to the article or debating anything I stated, please do so in the TurboIMAGE talk page."
Wikipedia authors will know exactly how Talk works to get a page written and improved. And it's dead-simple to become a Wikipedia author. As O'Brien suggests, creating a page is much more complex than improving an existing one.
July 02, 2015
Throwback: When HP touted Java/iX
Editor's Note: We're taking Friday off this week to make time to celebrate the US Independence Day.
Fifteen years ago this month, the prospects for HP 3000 growth were touted at an all-Java conference. HP engineers took the 3000 and the new version of Java/iX to Java One, which at the time in 2000 was billed as the world's largest show devoted to the "write once, run everywhere" programming tool.
The 3000 division exhibited an entry-level HP 3000 on the show floor at the conference. HP’s Java expert for the e3000, Mike Yawn, was at the show, along with division engineers Eric Vistica and OnOn Hong. Marketing representative Peggy Ruse was also in attendance from the division.
“In previous years, we’ve had literature available and 3000 ISVs in attendance at other booths,” Yawn said at the time. “This year you could actually go to an HP booth and find Java applications running on e3000 servers.”
Yawn reported Java’s Reflection Technology (not related to the WRQ product of the same name) “is a way to discover information about an object at runtime. It’s very analogous to using DBINFO calls to get structural info about a database. Reflection was introduced in JDK 1.1 to support JavaBeans. The APIs were improved in 1.2, with minor refinements coming in the 1.3 release.”

After the conference, Yawn said Java Reflection could be used to dynamically determine everything you might need to know about objects. On an evolving product front, HP gave demo space in its booth to third-party solutions that rely on Java for e3000 users. A precursor to Javelin, Minisoft’s Web Dimension, was also demonstrated.
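Yawn's DBINFO analogy is easy to see in code. Here is a minimal sketch of the Java Reflection API at work, inspecting a class's structure at runtime much as DBINFO reports a database's structure. The `Customer` class and its members are hypothetical examples, not anything from the article:

```java
// Minimal sketch of Java Reflection: discovering an object's
// structure at runtime. The Customer class is an illustrative
// stand-in for any application object.
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ReflectDemo {
    // Hypothetical application object whose shape we inspect.
    static class Customer {
        public String name;
        public int accountNumber;
        public String describe() { return name + ":" + accountNumber; }
    }

    public static void main(String[] args) {
        Class<?> c = Customer.class;
        // Report the class name, then each field with its type,
        // then each declared method -- all determined at runtime.
        System.out.println("Class: " + c.getSimpleName());
        for (Field f : c.getDeclaredFields()) {
            System.out.println("Field: " + f.getName() + " : "
                    + f.getType().getSimpleName());
        }
        for (Method m : c.getDeclaredMethods()) {
            System.out.println("Method: " + m.getName());
        }
    }
}
```

Nothing about `Customer` is hard-coded in the inspection loop; point `c` at any other class and the same code reports its fields and methods, which is what made Reflection useful for JavaBeans tooling.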
HP also showed off LegacyJ software in its booth. The software converted VPlus screens to Java. It automatically generated menus and maps function keys to menu items.
For all of the forward progress on bringing a new development platform inside of MPE/iX, that July of 15 years ago proved to be the heyday of Java/iX. JavaOne is still the central event in the Java calendar. The conference will be held in October.
July 01, 2015
Reflection dives deeper into new brand
Last fall, Micro Focus announced it was acquiring Attachmate and several other companies. The merger of these IT firms marked another step for a popular HP server connection product, Reflection, toward a new life with a new name, even if its functionality remains the same.
The Chief Operating Officer of Micro Focus, Stephen Murdoch, has reported to customers about the strategy to meld the products from Borland, NetIQ, Attachmate, Novell and SUSE. The scope of what these companies have offered is significant. Development, networking, connectivity, and environments make up these acquisitions.
We will be simplifying the branding and packaging of our portfolios. As an example, we will combine our leading host connectivity solutions of Reflection and Rumba into one set of Micro Focus branded solutions offering the best of both technologies. A similar approach of simplification and alignment will be taken systematically, resulting in one company operating two product portfolios, namely Micro Focus and SUSE.
By all reports, Rumba didn't meet HP 3000 manager standards in its versions available before Attachmate acquired Reflection. That was in the days when the blended firm was called AttachmateWRQ. Few HP 3000 sites, if any, have learned to rely on Rumba for their connectivity. Now the tracking will commence on how the feature sets of Reflection and Rumba survive this combination.

The deepest level of 3000 integration in Reflection lies in its scripting language. When the news first broke about the Micro Focus acquisition of Attachmate, we checked in with a long-time Reflection user to see how Rumba might fill in. Reflection's macros have to be converted to a Rumba format called EHLLAPI, the Extended High Level Language Application Program Interface. As with any acronym that has seven letters, it's a design choice that's got quite a, well, legacy air to it. According to Glenn Mitchell of BlueCross BlueShield of South Carolina:
It's an API that goes back to the early PC days, and allowed a program running on the PC to "scrape" data from a terminal emulator session running on the PC. So it represents a big move backwards in technology from Reflection VBA.
Our guys figured out a way to run our VBA scripts in Excel and trap most of the Reflection API calls (e.g. getdisplaytext) and convert them to equivalent EHLLAPI calls for Rumba. The gotcha is that they've only done the most frequently used API functions, and Rumba doesn't support all of the functions Reflection makes available via API.
Scripting inside of a terminal emulator product represents a deep level of technology, just the sort of tool a 3000 shop deploys when it commands petabytes of data and tens of thousands of users. When things change with vendor plans, whether it's a system maker or a provider of software, support staff shifts its efforts to migration tasks.
As an interesting footnote to the changes in the outlook for Reflection -- given that Rumba has been offered as a replacement -- we turn to a recent comment by Doug Greenup of Minisoft. "Minisoft has NS/VT in its HP terminal emulator," he noted when we described the unique 3000 protocol in some versions of Reflection. "And unlike WRQ, we remain independent. We still have HP 3000 knowledgeable developers and support people." The company's terminal emulator for 3000s, Minisoft Secure 92, has a scripting language called TermTalk.