September 15, 2014

WRQ's Reflection goes deeper into coffers

News came to me today about the Sept. 15 deal between Attachmate and Micro Focus. Two of the larger enterprise software makers that matter to 3000 vendors, the connectivity company and the world's biggest COBOL vendor, will be merging. With this, the consolidation of enterprise vendors takes another step into its future, and Reflection goes deeper into another software corporation's coffers.

Below is some of the story as told by Micro Focus, in a message to its clients and customers, about a $1.2 billion all-stock deal that leaves Micro Focus owning 60 percent of Attachmate.

Our intention is to preserve the full portfolios of strong, leading products in both Micro Focus and Attachmate going forward. We will draw on our recent acquisitions’ track record of successfully integrating any overlapping product sets.

Business logic and data that lies at the heart of operational effectiveness is increasingly exposed to very complex IT environments, as well as recent technology developments such as the cloud, mobility and virtualization. The combination of Micro Focus and Attachmate creates a leading technology company that will be well positioned to give organizations the ability to exploit the opportunities these trends produce whilst also leveraging prior investments and established IT assets to effectively bridge the old and the new.

For those who are counting up what kinds of products will be preserved -- in addition to the Reflection line -- the merger also brings Novell, NetIQ, and SUSE Linux under the control of Micro Focus. It would take some detailed calculating to figure the total number of products being preserved. But more than 200 in the portfolio would not be an errant guess.

This sort of reverse takeover is popular for merger deals today. JDA pulled the same sort of strings when it acquired Red Prairie early in 2013. It's billed as a merger, but the terms are not equal ownership once the deal has been approved. Shareholders of both of these companies still must approve this reverse deal, but Micro Focus is already announcing the transaction is expected to close on November 3.

Regulatory approval is required for this merger, but that won't include much regulation from the customers of the Attachmate product set. Micro Focus has absorbed a 3000-related vendor before. The company bought Acucorp, maker of AcuCOBOL, outright in 2007. The compiler, for a short time, had MPE COBOL II awareness and was positioned as an upgrade to the HP-built COBOL II. Then HP announced its 3000 exit strategy, and the ranks of COBOL vendors for MPE got shorter.

It can take years to discover what might become of a product vital to an HP systems manager, software that's been acquired this way. AcuCOBOL, which had a cross-platform prospect before HP's exit activity, is still in the Micro Focus price list. (Well, maybe not as AcuCOBOL anymore. Following Acucorp's lead, it's now called extend. But it's been in, and then out, and then back in favor for COBOL futures. The last in-person report we heard was in 2009, when an AcuCOBOL rep told the HP 3000 faithful at the Meeting by the Bay that his compiler was on the rise again -- or at least no longer falling.)

The definition of coffer includes strongbox, money chest, and casket. In the first, the Reflection products would be tucked away safely and receive whatever development they could earn. In a world where much of IT connection takes place over the Web, a standalone terminal emulator is going to have restricted earning prospects.

The ideal for customers would be Reflection to become a money chest through the increased sales opportunity that Micro Focus might supply to it. We can pause for a moment to consider how often this has happened for acquired products. In the JDA instance, some Red Prairie product managers were no longer on the scene, post-reverse takeover. Rightsizing operations drives the shareholder approval of these things. There's an Attachmate sales force, and there's the products. Only the latter is certain to survive beyond this first year.

The casket is the kind of option where a software vendor's assets -- developers, locations, and cash -- are the prize in the deal. This seems unlikely given the size of Attachmate. This deal was big enough that Micro Focus chose a reverse-takeover approach. But Reflection didn't have the same profile at Attachmate as when WRQ sold itself to Attachmate. Within a couple of years, the company called AttachmateWRQ became simply Attachmate.

Reuters reports that the owners of Attachmate are four asset management firms: Francisco Partners Funds, the Golden Gate Funds, the Thoma Bravo Funds and the Elliott Management Fund. Golden Gate bought up Ecometry long ago. By now, after its addition of the products of Novell et al, Attachmate is owned by a parent corporation called Wizard LLC. 

Terminal emulation to HP servers doesn't require much wizardry these days, but the Reflection product does understand the NS/VT protocol for MPE/iX. There's sure to be someone in Seattle who's in charge of that this week, and probably beyond November 3, too.

Posted by Ron Seybold at 03:44 PM in Homesteading, Migration | Permalink | Comments (2)


September 12, 2014

Can HP's cloud deals ground enterprises?

Editors at The New York Times seem to believe the above is true -- or more to the point, that cloud business will come at the expense of HP's hardware revenues. Nobody knows whether this is the way that HP's clouds will rise. Not yet. But a deal to buy an open source software company caught the notice of a writer at the NYT, and then came a saucy headline.

"HP Is Committed to the Cloud, Even If It Kills." The bulk of the story was about Marten Mickos, who sold his company Eucalyptus to Hewlett-Packard and got himself named General Manager of HP's Cloud Business, or somesuch. Open source followers will know Mickos as the man who sold MySQL to Sun, sparking some fury in a customer base that didn't want any connection to a major vendor. (As it turned out, Sun wasn't really a major vendor at all, just an object for Oracle acquisition.)

This only matters to migrating customers who use HP 3000s, so if you're still reading and you're homesteading -- or migrating away from HP altogether -- what follows is more for sport than strategic planning. But once more, I'll remind readers that HP is looking for anything that can lift its fortunes. Selling enterprise hardware, like the Integrity servers which are the only island where HP-UX can live, has got a dim outlook. Selling cloud services instead of hardware has plenty more promise, even if it's largely unrealized at HP today.

The rain-clouds in HP's skies come from Amazon, mostly, whose Amazon Web Services is the leader in a growing segment. Eucalyptus works with AWS, and that seems to be the major reason that Mickos gets to direct-report to HP's CEO Meg Whitman. Eucalyptus manages cloud computing systems. HP still sells hardware and software to host private clouds, but an AWS arrangement is a public cloud concept. HP wants to be sure an AWS user can still be an HP customer.

Clouds have a penchant for carrying a customer away from a vendor. Or at least a vendor's hardware. In the NYT story, "HP will have to rely less on revenue from selling hardware, and more on software and service contracts. 'Success will be a tight alignment of many parts of the company,' said Mr. Mickos. 'We have to figure out how to work together.' "

If you go back 24 years, you can find some roots of this HP desire on a stranded pleasure boat in the San Francisco harbor. But until the business critical HP iron stopped selling, the company never believed it would have to set a rapid course for services.

In the fall of 1990, HP hosted a CIMinar conference, mostly for the press and some big customers. The letters stood for Computer Integrated Manufacturing. There was a dinner party on a nice cruise boat as part of the event. When the engines died after dinner, the boat sat in the bay for a while. We all went out to the deck to lean on the rail and catch some cool air and wait for the tug. That's when Charlie, an HP media relations guy, explained that hardware would be on its way out.

"We don't want to sell servers in the long run," Charlie said, while we were talking long enough that he got to the soul of what he believed. He was a former trade press reporter and a good media guy, too. "HP wants to be in the services business, and maybe selling some software."

So here we are, close to a quarter-century later, and now HP's finally found a reason to buy open source software and the fellow who guided it into several hundred companies. Then it named him head of HP's cloud business. They hope the whole deal will turn him and his software into a rainmaker for HP enterprise revenues.

While the Times article has got its problems, it got one stretch of the story pretty accurate. HP, as it has said for several decades, is just following its customers. Apparently, away from relying on HP's hardware.

Putting services and hardware together in new ways is part of "the hard hill we are in the process of climbing," said Martin Fink, HP’s chief technology officer and the head of HP Labs, where much of the development is taking place. "Is there uncertainty? There is always uncertainty." He added that Ms. Whitman has determined that this is where customers are going, so HP needs to adjust its business accordingly.

For years, the HP 3000 community wanted Hewlett-Packard to make recommendations to customers about which HP solution to employ. No dice. "We just want to be trusted advisors," HP said over and over. "The customer will tell us what they need, and then we will provide it."

And if the customer needs more legroom to use Integrity and HP-UX? Well, there's always that uncertainty that Mr. Fink mentioned.

Posted by Ron Seybold at 09:48 PM in Migration, News Outta HP | Permalink | Comments (0)

September 08, 2014

Who else is still out there 3000 computing?

Employing an HP 3000 can seem as lonely as being the Maytag Repairman. He's the iconic advertising character who didn't see many customers because a Maytag washing machine was so reliable. HP 3000s have shown that reliability, and many are now in lock-down mode. Nothing will change on them unless absolutely necessary. There is less reason to reach out now and ask somebody a question.

And over the last month and into this one, there's no user conference to bring people together in person. Augusts and Septembers in the decades past always reminded you about the community and its numbers.

Send me a note if you're using a 3000 and would like the world to know about it. If knowing about it would help to generate some sales, then send it all the sooner.

But still today, there have been some check-ins and hand-raising coming from users out there. A few weeks back, Stan Sieler of Allegro invited the readers of the 3000-L newsgroup to make themselves known if they sell gifts for the upcoming shopping season. "As the holiday shopping season approaches," he said, "it occurred to me that it might be nice to have a list of companies that still use the HP 3000... so we could potentially consider doing business with them."

If September 9 seems too early to consider the December holidays, consider this: Any HP 3000 running a retail application, ecommerce or otherwise, has gone into Retail Lockdown by now. Transitions to other servers will have to wait until January for anybody who's not made the move.

Sieler offered up a few companies which he and his firm know about, where 3000s are still running and selling. See's Candies, Houdini Inc, and Wine Country Gift Baskets are doing commerce with gift consumers. We can add that Thompson Cigar out of Tampa is using HP 3000s, and it's got a smoking-hot gift of humidor packs. (Sorry, couldn't resist.) Then there's American Musical Supply, which last year was looking for a COBOL programmer who has Ecometry/Escalate Retail experience.

Another sales location that could provide gifts for the holiday season is in airports. The duty free shops in some major terminals run applications on MPE systems. HMS Host shops, at least four of them, sell gifts using 3000s. Pretty much anything you'd buy in a duty free shop is a gift, for somebody including yourself.

The discussion of who's still using, and feeling a little Maytag solitude, prompted a few other customers to poke up their heads. We heard again from Deane Bell at the University of Washington, where there could be another 10 years of homesteading for the 3000. The first three years are finished. All in archival mode.

Beechglen furnishes an HP 3000 locally hosted system meeting the following minimum specifications:

· Series A500 Server
· 2GB ECC memory
· 365 GB disk space consisting of 73GB operating system and temporary storage for system backups, and 292GB in a software RAID-1 configuration yielding 146GB of usable disk storage
· DDS3 tape drive
· DLT8000 tape drive
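The disk figures in that spec add up once you apply the standard RAID-1 mirroring rule. Here's a quick sketch of the arithmetic; the numbers come straight from the spec, and the halving is how any two-way mirror works, not anything Beechglen-specific:

```python
# Disk math from the Beechglen minimum spec above.
os_and_temp_gb = 73                   # operating system plus temporary backup storage
raid1_raw_gb = 292                    # disks dedicated to the software RAID-1 set
raid1_usable_gb = raid1_raw_gb // 2   # RAID-1 mirrors every block, so half the raw space is usable

total_raw_gb = os_and_temp_gb + raid1_raw_gb
print(total_raw_gb)     # 365, the quoted total disk space
print(raid1_usable_gb)  # 146, the quoted usable RAID-1 storage
```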

There were other check-ins from Cerro Wire, from the California Department of Corrections and Rehabilitation (where one 3000 wag quipped, "the users are not allowed access to files") and one from MacLean Power Systems -- that last, another data point in the migration stats under the column "Can't shut down the HP 3000 as quickly as originally believed." Wesleyan Assurance Society in the UK raised its hand, where Jill Turner reports that "they have been looking to move off for years, but are only now just getting round to looking at this, which will take a while so we will still be using them. Far more reliable than the new kit."

In our very own hometown of Austin, Firstcare is still a user, but nearly all of its medical claims processing has been migrated to a new Linux platform. That's one migration that didn't flow the way HP expected, toward its other enterprise software platforms.

There is Cessna, still flying its maintenance applications under the HP 3000's wingspan. Locating other 3000 customers can be like finding aircraft in your flight pattern. A visual search won't yield much. That's one reason we miss the annual conferences that marked our reunions. This month will be the five-year anniversary of the last "Meeting by the Bay" organized by ScreenJet's Alan Yeo, for example. But the Wide World of the Web brings us all closer.

As a historical Web document that might have some current users on it -- including retail outlets for gift giving -- you can look at the "Companies that Use MPE" page of the OpenMPE website. (That's at openmpe.com these days). That list is more than 10 years old, so it represents the size of the community in the time just after HP's exit announcement. The list is more than 1,200 companies long. And there are plenty of Ecometry sites among the firms listed, including Nine West for shoes and Coldwater Creek for its vast range of clothing. The latter may very well be remaining on a 3000 for now, since retailers' fortunes define the pace of migrations.

And so, in an odd sort of way, patronizing a 3000-based retailer this season might help along a migration -- by increasing revenues that can be applied to an IT budget. It can make for a happier holiday when you can buy what you want, even when that includes a new application and enterprise environment.

Posted by Ron Seybold at 08:18 PM in Homesteading, Migration, User Reports | Permalink | Comments (0)

September 04, 2014

TBT: Practical transition help via HP's files

A 2004 slide of partner logos from an HP presentation.

10 years ago, at the final HP World conference, Hewlett-Packard was working with the Interex user group to educate 3000 users. The lesson in that 2004 conference room carried an HP direction: look away from that MPE/iX system you're managing, the vendor said, and face the transition which is upon you now.

And in that conference room in Atlanta, HP presented a snapshot to prove the customers wouldn't have to face that transition alone.

The meeting was nearly three years after HP laid out its plans for ceasing to build and support the 3000. Some migration was under way at last, but many companies were holding out for a better set of tools and options. HP's 3000 division manager Dave Wilde was glad to share the breadth of the partner community with the conference goers. The slide above is a Throwback, on this Thursday, to an era when MPE and 3000 vendors were considered partners in HP's strategy toward a fresh mission-critical future.

The companies along the top line of this screen of suppliers (click for a larger view) have dwindled to just one by the same name and with the same mission. These were HP's Platinum Migration partners. MB Foster remains on duty -- in the same place, even manning the phones at 800-ANSWERS as it has for decades -- to help transitions succeed, starting with assessment and moving toward implementations. Speedware has become Fresche Legacy, and now focuses on IBM customers and their AS/400 futures. MBS and Lund Performance Solutions are no longer in the transition-migration business.

Many of these companies are still in business, and some are still helping 3000 owners remain in business as well. ScreenJet still sells the tools and supplies the savvy needed to maintain and update legacy interfaces, as well as bring marvels of the past like Transact into the new century. Eloquence sells databases that stand in smoothly for IMAGE/SQL on non-3000 platforms. Robelle continues to sell its Suprtool database manager and its Qedit development tool. Suprtool works on Linux systems by now. Sure, this snapshot is a marketing tool, but it's also a kind of active-duty unit picture of when those who served were standing at attention. It was a lively brigade, your community, even years after HP announced its exit.

There are other partners who've done work on transitions -- either away from HP, or away from the 3000 -- who are not on this slide. Some of them had been in the market for more than a decade at the time, but they didn't fit into HP's picture of the future. You can find some represented on this blog, and in the pages of the Newswire's printed issues. Where is Pivital Solutions on this slide, for example, a company that was authorized to sell new 3000s as recently as just one year earlier?

HP probably needed more than one slide, even in 2004.

From large companies swallowed up by even larger players -- Cognos, WRQ -- to shirt-pocket-protector sized consultancies, there's been a lot of transition away from this market, as evidenced by the players on this slide from a decade ago. Smaller and less engaged, pointed at other enterprise businesses, some even gone dark or into the retirement phase of their existence -- these have been the transitions. This kind of snapshot of partners never would have fit on even two PowerPoint slides in 1994, ten years before that final HP World. Today the busy, significant actors in the 3000's play would not crowd one slide, not even drawing from the companies pictured above.

If you do business with any of these companies above, and that business concerns an HP 3000, consider yourself a fortunate and savvy selector of partners here in 2014. We'd like to hear from you about your vendor's devotion to the MPE Way, whether that's a way to continue to help you away from the server, or a way to keep it vital in your enterprise.

Posted by Ron Seybold at 08:24 PM in History, Homesteading, Migration | Permalink | Comments (0)

September 03, 2014

Moves to Windows open scheduler searches

Some HP 3000 sites are migrating from HP 3000s to Windows .NET systems and architecture. While there's one great advantage in development environment during such a transition -- nothing could be easier than hiring experts in Visual Studio, nee Visual Basic -- companies will have to find a scheduler, one with the job handling powers of MPE/iX. Native Windows won't begin to match the 3000's strengths.

More than three years ago, MB Foster built a scheduler for Windows sites, and customers are sizing up this MBF Scheduler. There's even been interest from IT shops where an HP 3000 has never booted up. They are often users of the JDA Direct Commerce (formerly Ecometry-Escalate Retail) software on Windows servers. These companies have never seen an MPE colon prompt, but some need that level of functionality to manage their jobs.

"If senior management has simply decided that Windows was the place to be," said CEO Birket Foster, "we could help automate the business processes -- by managing batch jobs in the regular day and month-end close, as well as handling Ecometry jobs and SQL Server jobs." Automating jobs makes a Windows IT shop manager more productive, like creating another set of hands to help team members. And for a 3000 shop making a transition, something like an independent job handler means they'll be able to stay on schedule with the expected level of productivity.

Companies that use Windows eventually discover how manual their job scheduling process becomes while hemmed in by the environment's native tools. Credit card batches must be turned in multiple times a day at online retailers, for example. The site that originally sparked the MBF-Scheduler design didn't have a 3000's tools, either. It did have 14,000 jobs a day running, however.

Job listings, also known as standard lists (STDLISTs), are common to both the 3000 and Windows environment, and the software was built to provide the best of both 3000 and Windows worlds, Foster said. The software's got its own STDLIST reviewer, one that's integrated with a scripting language called MBF-UDAX. Ecometry sites working on HP 3000s usually rely on a tool as advanced as Robelle's Suprtool for job scheduling.

Foster's Scheduler includes buttons for filtering job reports by user, by job name, by status and by subqueue. A recent addition to the product introduced a custom category that managers can use to select or sort jobs. While running thousands of batch jobs a day, some fall into distinct categories. Customers like the idea of managing factory floor jobs separately from finance jobs, for example.
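That kind of report filtering can be sketched in a few lines. MBF Scheduler's actual interface is a GUI and its real schema isn't public here, so the field names and sample jobs below are illustrative assumptions, not the product's API:

```python
# A minimal sketch of filtering a job report by user, name, status, subqueue,
# or a custom category. All field names and sample data are hypothetical.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    user: str
    status: str      # e.g. "EXEC", "WAIT", "SUSP"
    subqueue: str    # e.g. "CS", "DS", "ES"
    category: str    # custom grouping, e.g. "finance" or "factory-floor"

def filter_jobs(jobs, **criteria):
    """Return the jobs matching every supplied field=value pair."""
    return [j for j in jobs
            if all(getattr(j, field) == value for field, value in criteria.items())]

jobs = [
    Job("GLCLOSE", "MGR.FIN", "WAIT", "DS", "finance"),
    Job("CCBATCH", "MGR.WEB", "EXEC", "CS", "finance"),
    Job("PRESSRUN", "MGR.MFG", "EXEC", "DS", "factory-floor"),
]

print([j.name for j in filter_jobs(jobs, category="finance")])            # ['GLCLOSE', 'CCBATCH']
print([j.name for j in filter_jobs(jobs, status="EXEC", subqueue="DS")])  # ['PRESSRUN']
```

Combining criteria, as the second call does, is what lets a manager pull the factory floor jobs apart from the finance jobs in one pass.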

Measurement Specialties, the manufacturer which runs a dozen HP 3000s in sites across North America, China and Europe, uses the MBF Scheduler. The product manages a complementary farm of Windows servers to move jobs among Measurement Specialties' 3000s.

Terry Simpkins at Measurement Specialties has been devoted to Infor's MANMAN implementations well beyond the vendor's ability to support the application. Like other customers around the community, Simpkins and his team have compared the Scheduler to MPE's mature tools, and favorably. Sites like this don't need a separate Unix or Linux server for job scheduling, which is the usual way to keep Windows IT on schedule.

Windows schedulers serve HP 3000s, but also serve the Windows-only IT environments where some MPE/iX operations will be headed. At Measurement Specialties, for example, the IT pro who handles scheduling never sees the HP 3000. But enterprise server-born concepts such as job fences are tools which are at that IT pro's command.

Posted by Ron Seybold at 05:33 PM in Migration | Permalink | Comments (2)

August 25, 2014

Shopping While Lines are Dropping

HP's third quarter financial report showed that a company making adequate profits can also be making products that are not popular any more. The time comes to every product line, but the Hewlett-Packard of 2014 has made steady progress toward commodity-style enterprise computing. The pull into Windows has become a vortex -- and in a bit of irony, Windows' age helped HP's sales this quarter.

The overall numbers were impressive to the markets. Investors lifted the price of HP's stock more than $2 a share, after the briefing, sending it closer to $40 than it's been in years. Meanwhile, the continued downturn of Business Critical Systems scarcely earned a minute's mention. It's off 18 percent from the same 2013 quarter. But it gets less than a minute because BCS products like the HP Unix line, and VMS computing systems -- even the steady but meager business of NonStop -- only comprise 3 percent of the company's enterprise sales. In the circle above, BCS is the rounding error, the most slender slice. Click it to see a bigger picture of that smallest piece.

And Enterprise represents just one dollar out of every four that HP earns in sales. The growth is in Industry Standard Servers: the ProLiant machines driving Windows and Linux, business that grew 9 percent. Specialized operating environments like HP-UX just aren't producing new business, and they're losing old customers. If you look over the last three years of Q3 numbers, each and every one shows a double-digit BCS decline. There's only so much clock time on that product wall before irrelevancy pushes a community off HP's futures map. It happened to the HP 3000, but MPE never ruled over HP business computing like Unix once did.

"When I look at the way the business is performing, the pipeline of innovation and the daily feedback that I receive from our customers and partners, my confidence in the turnaround grows stronger." -- Meg Whitman, CEO

So when HP's business in your installed platform shows poor numbers, what do you do? The rest of the company's report looked tame, although you'd wonder why anyone could be sanguine about the future of the company. Printing, Services, Software and Financial Services all dropped their sales top lines. The Enterprise Group grew its business 2 percent overall on a $27.5 billion HP sales quarter. This was accomplished by $57 million of expense cuts. 

Only PC sales grew along with enterprise business. How can a company reporting a 27 percent drop in profits, one that missed its forecast by more than 10 percent, be rewarded on the trading floor? Jim Cramer of CNBC said there's just enough to like about HP now. That might be due to the history the company has turned back. Everybody on the trading floor remembers HPQ at $12 a share with a fired CEO having followed an ousted CEO. Historic lows are a faded memory now, although the company's gotten no bigger over that stretch of clock time. The good feelings come from a turnaround tale that's still in the middle of its story.

But it is history that is the biggest concern for owners of the plunging VMS and HP-UX servers. Hewlett-Packard may never kill off an enterprise product line again like it did with the HP 3000. But becoming irrelevant is a fatal blow, too. Customers choosing to manage their own datacenters are taking shelter in Linux and Windows, according to HP's report. The analysts are pleased with the company's net operating cash position, which at $4.9 billion is 80 percent higher than last quarter. But that's $2.2 billion extra not being spent on R&D for the company's specialized technology running in HP 3000 migrated sites.

This kind of sour outlook is like a chart-topping record back on '60s radio. As a customer you hear it all the time. But it's not a concern to the corporation making the Unix servers, or to the investors who propel that company into the future. You have to wonder why anyone else would care. It's an even more distant piece of history to recall the day MPE slipped into HP's under-a-minute bin. Enterprise outlook is not part of what Cramer likes about HP this month.

There's shopping until you drop, and shopping until the product line is dropped. The latter's probably years away for the enterprise products that are not commodities. The HP 3000 migrated base has experience in how to manage their investments in a line HP won't watch any longer. At some point they'll draw the line on whether their servers need to be powering Unix or VMS applications. It's the apps that steer operations investments.  By this point, it would be news if BCS numbers did not drop.

It's a good thing that Windows XP dropped out of Microsoft's support plans. The demise of a popular OS left companies with a problem that required replacing aged PCs. The CEO is getting good at keeping HP's aged strategy from pulling down growth, but it's been flat for a long while. If there's a future to owning and staying loyal to Hewlett-Packard enterprise systems, few analysts can see it. You must look at customer applications -- and the dug-in nature of legacy computing -- to see the staying power. 

Posted by Ron Seybold at 09:45 PM in Migration, News Outta HP | Permalink | Comments (1)

August 13, 2014

When a taxing situation might shuffle plans

Out in the 3000 community some select customers are seeing subpoenas. According to a source familiar with the matter, a vendor's been having some issues with the Internal Revenue Service, and the US Government is intent on gathering what it believes it's owed.

Tax matters go to subpoena when information is being demanded in a case against a corporation or an individual. We're still seeking confirmation of the information about which vendor's name is now out among its customers, attached to a subpoena. [Update: And we have gotten it, plus a copy of the vendor's response. It's a long-term battle with the IRS, the vendor says. We've found documents going back more than 15 years. They claim that the fight is personal, not related to their company. Nonetheless, the vendor's customers got subpoenas.]

It illustrates the unpredictable nature of doing long-term business in the IT industry. HP 3000 users often do long-term business. They have a reputation for sticking to suppliers, especially in these days when companies are shifting focus away from MPE. When you get a tool that works, and a company that pledges to support it, you stick with it while you stay with the 3000.

"What do I do if they go out of business?" one of the customers has asked. The answer is simple enough: the products will go onto the open market to be purchased as assets. Software with customers who pay support fees, well, that's likely to be bought up sooner than later. An IT manager will have to manage new product ownership -- and perhaps new strategy and roadmaps for the product.

But just because there's change at the top of a product's ownership doesn't mean all else changes. It's pretty easy for a company to acquire a product and change little. Especially if the customer base is providing a profit to the vendor at the same time that the software continues to earn support contract renewals.

A sale of assets is the situation that Interex fell into when it declared Chapter 7 bankruptcy in 2005. There was not much of value for another company to purchase. Nobody was taking over the services Interex provided, so there was no customer base to buy as an asset. The only thing that wound up changing hands was the Interex customer list, sold in a blind auction.

But software that's running in enterprises, across a scope of platforms even broader than MPE -- that's an asset that the government could sell. It's a typical outcome; for example, the trustee of the Interex bankruptcy managed the sale of the user group's assets.

Sometimes taxing issues can be resolved with negotiation. The government wants to be paid, and if there's fraud involved, the "accuracy-related penalties" can be steep. Lawyers with tax experience handle these things to everybody's satisfaction. Watch out for any company representing itself in tax court. Not recommended.

One flag about an imminent forced asset transfer could be an email sent out by the vendor, claiming the government has no right to tax anybody like they're being taxed. That's politics, not business. Nobody ever advised withholding support payments in this kind of matter. But you have to consider where that payment might be used, and whether it will end up someplace besides a support lab. Better to be current, and considered a customer, in case anything changes in ownership of an asset.

Posted by Ron Seybold at 09:36 PM in Homesteading, Migration | Permalink | Comments (0)

August 12, 2014

Where a Roadmap Can Lead You

In preparation for its upcoming VMS Boot Camp, Hewlett-Packard has removed some elements of its roadmap for the operating environment. What's disappeared is no small thing: dates.

HP 3000 customers saw their own roadmap get less certain about its destination. At the end of the vendor's interest in selling and creating more systems, customers were shown an elaborate PowerPoint slide with multiple tiers of servers. The roadmap actually had a cloud creeping in from the right-hand margin.

Okay, that was 13 years ago this very month in Chicago. It wasn't the last HP World conference -- that came one decade ago, this month -- any more than next month's Boot Camp for VMS enthusiasts and customers will be the last. There have been times when VMS carried promises from HP's management of another decade of service. Here's the before, and the after.

OpenVMS Map

Only seven months elapsed before the OS releases started being named things like "V8.next."

OpenVMS Rolling Map

Very few products last for lifetimes. Knowing when they're going, and how soon to make plans for a replacement, is serious business for an IT manager.

During an August in 2001 when the future looked certain and solid for some customers, a PowerPoint slide told more than could be easily read in Chicago for HP 3000 customers. For the record, the slide below delivered everything promised up until 2003. The PA-8800 never made an entry into the N-Class.

HP 3000 Roadmap 2001 Chicago

That would be known, in the roadmap parlance, as a PA-8xxx. The PA-8yyy (8900) never made it into a 3000, either.

Roadmaps might be an old tradition, but they're moments to establish and renew trust in a partner. Specifics, and follow-through, make that possible. Some VMS customers are already underway with their migration assessments.

Posted by Ron Seybold at 06:58 PM in History, Migration, News Outta HP | Permalink | Comments (0)

August 11, 2014

Classic lines push homestead tech designs

Sometime this week I expect to be updated on the latest restructure at Stromasys. That's the company that has created a 3000 hardware-virtualization product installed in more sites than we first thought. They hold their cards close to the vest at Stromasys, especially about new installs. But we keep running into MPE support vendors who mention they have emulator-using clients. These companies are reticent about reporting on emulation.

3000 people have dreamed about emulators ever since 2002. And for the next eight years, people figured emulation wouldn’t matter by the time HP approved MPE emulator licensing. Better not tell that to the customers who have plans to go deep into the second decade of the 21st century with their 3000. Emulation was rolling by 2012 for the 3000. In the years between now and 2023, that technology could be well polished for MPE -- enough to stop using HP's 3000 hardware, boxes that will be at least 20 years old by that time. Most of them are at least 15 years old right now.

A great deal of time has passed since the 9x9 3000s had their coming-out, but much has changed that we couldn't predict back then. Come with me to the magical year of 1997. We had little idea what we'd see in just 10 years' time.

It’s 1997. (Humor me a minute, and turn back the year.) You're here? Okay, think about what we don’t have yet. Google. BluRay. DVDs, for that matter. Hybrid cars. Portable MP3 players of any kind. PayPal. Amazon turning a profit. YouTube. eBay was so new it was called AuctionWeb. Thumb drives. Digital TV. Viagra. Caller ID. Smartphones, warmed baby wipes, online banking, Facebook and Twitter. Blade servers, cloud computing, enterprise Linux, virtualization — the list of technologies and designs we didn’t have 17 years ago is vast.

We don’t even have to talk about clouds, tablet computers or 3D TVs. Now, roll ahead to 2023. In that year, there will still be an HP 3000 running a factory in Oklahoma. That’s the plan for Ametek’s Chandler Engineering unit. By that year MPE will be 50 years old, COBOL more than 60. And what will keep those two technologies viable? Well, probably technology that we don’t even have out of design now, nine years ahead of that shutdown date. People have been throwing rocks at old stuff for years, but it hangs on if it’s built well.

Four years ago I took a train ride from New York toward Chicago on the Lakeshore Limited. Just like Cary Grant rode that same line with Eva Marie Saint in the year I was born, in North by Northwest. The train remains the best value to get a night’s worth of sleep and end up 800 miles west of where you started. C'mon, railroads? Passenger service with berths went on lines, as it were, in the 19th century. How could it remain viable 150 years later? Like the HP 3000, the values that propel such elder technologies are efficiency and economy. Railroads still call their carriages rolling stock, because you can roll freight three times farther on a train than a truck for the same expense.

The HP 3000 hardware, virtualized or not, still preserves business rules because Hewlett-Packard built the boxes like armored cars. The investment was so great back in those '90s that people expected it to last more than a decade between upgrades. The downside to switching to newer technology? The stuff we haven’t invented yet might not stick around. Perhaps the Oracle database will still be in widespread use in 2023. That’s the software where Ametek is taking its migration, using a plan developed by people who probably won’t be at the company in 2023.

That Ametek date was so far out that I wondered if it was a typo in an email. (Oh, we had email in 1997. But it wasn’t considered grandpa’s technology back then, the way the young turks think of email today. By now, even grandpa's tech has a changed reputation: with so much noise on Twitter and Facebook, a personal email from a known colleague is the one you open first.) So when you plan your transition to tomorrow — whether it’s your personal retirement, or parking that armored car of a computer — don’t sell the future short. Go ahead and be independent to get the work finished on your timetable. But if you're going, now would be a good time to start. It will take until 2016, at best, if you begin assessments today.

Posted by Ron Seybold at 11:14 PM in History, Homesteading, Migration | Permalink | Comments (0)

August 07, 2014

Who's got our history, and our future?

Migration takes on many problems and tries to solve them. A vendor stops supporting the server. Investing in a vendor's current product by migrating makes that go away. Applications slide into disrepair, and nobody knows how to re-develop them. Ah, that's a different sort of problem, one that demands a change in people, rather than products.

Yesterday we heard a story of a company in Ohio, running a 3000, whose IT manager planned to retire rather than migrate. Telling top management about your retirement plans is not mandatory. Frankly, having an option to retire is a special situation in our modern era. Counting on being replaced -- along with all of your in-company experience and know-how about things like COBOL -- is far from certain. Legacy systems still run much of the world, but the people who built and tend to them are growing older and aging out of the workforce.

It's a glorious thing, knowing that your server's environment was first crafted four decades ago. Some of the brightest players from that era are still around, though not very active. Fred White built IMAGE, alongside Jon Bale at HP. Neither is at work today. Fred's now 90, as of April.
In another example of a seasoned 3000 expert, Ken Nutsford's LinkedIn account reports that he celebrates 45 years at Computometric Systems, the development company he founded with his wife Jeanette. In a Throwback Thursday entry, here they are, 10 years ago and now, still together. Not all of us wear so well, but they've retired enough to have travelled the world over, several times, on cruise ships. That's what more than 40 years will earn you.

It's been a decade since there's been an HP World conference like the one pictured at left, hosted by the Nutsfords, complete with a hospitality buffet as well as a board of trivia (below, click for detail) -- technical details that just a tiny set of experts might know. The number of people who know the operating system and the hardware at hand at that level has been on the decline. Not just in the MPE world, but throughout the computer industry.

BusinessWeek recently ran an article titled, "Who'll keep your 50-year-old software running?" Even though the Nutsfords retired from leading SIGCOBOL in 2004, there's plenty of COBOL around. But not anywhere near enough people to maintain it, although companies continue to try.

The baby boomers that brought us the computer revolution, developing the products and programs we now rely on, are retiring. But many companies are still using programs written in such software languages as COBOL and Fortran that were considered “cutting edge” 50 years ago. Indeed, the trade publication Computerworld has reported that more than half of the companies they surveyed are still developing new COBOL programs

"Staffing is the first thing to go these days," said Birket Foster in a Webinar briefing this week. His MB Foster company is still doing migrations, including moving a Unix customer off the Progress database and onto SQL Server. Progress is a youngster compared to COBOL and IMAGE. Some people decide to migrate because of the migration of their most expert people.

The BusinessWeek article didn't supply easy answers to the brain-drain dilemma that every company seems to face. The firms that put computers into their business processes during the last 35 years -- that's just about every company -- are working with new staff by now, or watching their tech foundation head out to the pleasure cruise life.

The article notes that up to one-half of all COBOL and Fortran programmers are at least 50 years old. Younger developers arrive with experience in newer languages. There's a gap to cross between what's operational and what's state-of-the-art. "Smart companies have recruitment and succession plans, of course," the business magazine said. "What they don’t have is access to an adequate supply of workers with the technical expertise they need."

The staffing issues complicate the timing of migrations. How long can you depend on legacy software while you get a replacement up and tested, something the younger set of developers can understand? A migration takes at least 18 months, Foster says. He adds that getting started on the assessment is pretty much a do-it-now item. August is a month that hosted HP World conferences for a good business reason: this is the time of the year when companies are planning their IT budgets for the year to come.

Posted by Ron Seybold at 07:47 PM in History, Homesteading, Migration | Permalink | Comments (0)

August 06, 2014

Password advice for migrating managers

More than a billion password-ID combos were stolen by a Russian gang, according to a report from a cybersecurity company. Mission-critical, revenue-centric passwords are probably the ripest targets.

Once you're making a migration of mission-critical systems from MPE to more-exposed servers, passwords will become a more intense study for you. Windows-based servers are the most exposed targets, so a migrated manager needs to know how to create high-caliber passwords and protect them. Given the headlines in current news, today's probably the day when you'll get more questions about how safe your systems are -- especially in the coming era of cloud computing. Here are some answers from our security expert Steve Hardwick.

By Steve Hardwick, CISSP
Oxygen Finance

Everything needs a password to access it. One of the side effects of the cloud is the need to be able to separate information from the various users that access a centrally located service. In the case where I have data on my laptop or desktop, I can create one single password that controls access to all of the apps that reside on the drive, plus all of the associated data. There is a one to one physical relationship between the owner and the physical machine that hosts the information. This allows a simpler mechanism to validate the user. 

In the cloud world it is not as easy. There is no longer a physical relationship with the user. In fact, a user may be accessing several different physical locations when running applications or accessing information. This has led to a dramatic increase in the number of passwords and authentication methods that are in use.

I just did a count of my usernames and passwords, and I have 37 different accounts (most with unique username and password combinations). Plus, there are several sites where I use the same username and password combination. You may ask why some are unique and some are shared. The answer is based on the risk of a username or password being compromised. If I consider an account to have high value -- a high degree of loss or impact if hacked -- then it gets a unique username and password. Let's look at email accounts as a good example.

I have a unique username and password for my five email accounts. However, I do have one email account that is reserved solely for providing a username for other types of access. When I go to a site that requires an email address to set up an account, that is the one I use. Plus, I am not always selecting a unique password. The assumption is that if that username and password is stolen, then the other places it can be used are only website accounts of low value. I also have a second email account that I use to set up more sensitive access, Google Drive, for example. This allows me to limit the damage if one of the accounts is compromised and not end up with a daisy chain of hacked accounts.

So how do you go about generating a bunch of passwords? One easy way is to go into your favorite search engine and type in "password generator." You will get a fairly good list of applications that you can use to generate medium to strong passwords. When I used to teach security, this was one trick I would share with my students. Write a list of 4 or 5 short words that are easy to remember. Since my first name is Steve, we can use that. Add to this a short number (4-5 digits in length), 1999 for example. Now pick a word and number combination and intersperse the numbers and letters: S1t9e9v9e would be the result of Steve and 1999.

Longer words and longer numbers make strong passwords -- phone numbers and last names work well. With 5 words and 5 numbers you get 25 passwords. One nice benefit of this approach comes when you need to change your password. Write the number backwards, and merge the word and digits back together.
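The interleaving trick is easy to script. Here's a minimal sketch in Python -- the function names are my own, not from any library:

```python
from itertools import zip_longest

def weave(word: str, number: str) -> str:
    """Intersperse a memorable word with a short number:
    weave('Steve', '1999') -> 'S1t9e9v9e'.
    Leftover characters are appended as-is."""
    return "".join(a + b for a, b in zip_longest(word, number, fillvalue=""))

def rotate(word: str, number: str) -> str:
    """The password-change variant described above: write the
    number backwards, then merge word and digits again."""
    return weave(word, number[::-1])

print(weave("Steve", "1999"))   # S1t9e9v9e
print(rotate("Steve", "1999"))  # S9t9e9v1e
```

With 5 words and 5 numbers stored in two lists, the 25 combinations fall out of a nested loop.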

Next challenge: how to remember them all. Some of the passwords I use I tend to remember due to repetitive use. Logging into my system is one I tend to remember, even though it is 11 characters long. But many of my passwords I use infrequently, my router for example, and many have the “remember me” function when I log on. What happens when I want to recall one of these? Well, the first thing is not to write them down unless you absolutely have to. You would be amazed how many times I have seen someone’s password taped on the underside of their laptop. A better option is to store them on your machine. How do you do that securely? Well, there are several ways.

One easy way is to use a password vault or password manager. This creates a single encrypted file that you can access with a single username and password. Username and password combinations can then be entered into the password vault application together with their corresponding account. The big advantage is that it is now easy to retrieve the access data with one username and password. The one flaw is: what happens if the drive crashes that contains the vault application and data? If you use an encrypted vault, then you can place the resulting file on a cloud drive. This solves the machine dependency and has the added advantage that the passwords are generally available to multiple machines. If you want to get started with a password vault application, here is a good article that compares some leading products.

Another option is to roll your own. Create a text file and enter all of your account/username/password combinations. Once you are done, obtain some encryption technology. There are open source products -- TrueCrypt is the leader -- or you can use the encryption built into your OS. The advantage of using open source is that it runs on multiple operating systems. Encrypt the text file using your software. Caution: do not use the default file name the application gives you, as it will be based on your text file name.

Once you have created your encrypted file from the text file, open the text file again. Select all the text in the file and delete it. Then copy a large block of text into the file and save it (more than you had with the passwords). Then delete the file. This will make sure that the text file cannot easily be recovered. If you know how to securely delete the file, do that instead. Now you can store the encrypted password file in a remote location -- cloud storage, another computer, a USB drive, etc. You will then have a copy of your password file you can recover should you lose access to the one on your main machine.
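The overwrite-then-delete step can be scripted. This is a best-effort sketch in Python, assuming a plain local filesystem; journaling filesystems and SSDs may still retain old blocks, so a dedicated secure-delete tool is stronger:

```python
import os
import secrets

def scrub_and_delete(path: str) -> None:
    """Overwrite a plaintext file with a larger block of random
    bytes, force the write to disk, then remove the file --
    automating the manual 'paste in more text than you had' step."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        # Write more data than the original so the old contents
        # are fully covered, then flush and sync before deleting.
        f.write(secrets.token_bytes(size * 2 + 1024))
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```

Run it on the temporary text file only after confirming the encrypted copy opens correctly.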

Now, if you do not want to use encryption, there is a very geeky option. But why wouldn't you use encryption? Most programs use specific file extensions for their encrypted files. When auditing, the first thing I would look for was files with encryption extensions. I would then look for any files that were similar in size or name, to see if I could find the source. This included looking through the deleted file history.

The other option is steganography, or stego for short. The simple explanation is the ability to bury information into other data -- for example, pictures. Rather than give a detailed description of the technology here, take a look at its Wikipedia page. There is also a page with some tools on it. For a long time, my work laptop had a screen saver that contained all my passwords. I am thinking of putting a picture up on Facebook next.
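The tools get elaborate, but the core idea behind most image stego -- least-significant-bit embedding -- fits in a few lines. A toy sketch over a raw byte buffer, not any particular tool's format:

```python
def hide(cover: bytes, secret: bytes) -> bytes:
    """Embed each bit of `secret` into the low bit of successive
    cover bytes. Needs 8 cover bytes per secret byte."""
    bits = [(b >> i) & 1 for b in secret for i in range(8)]
    assert len(cover) >= len(bits), "cover too small"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear low bit, set ours
    return bytes(out)

def reveal(stego: bytes, length: int) -> bytes:
    """Reassemble `length` secret bytes from the low bits."""
    out = bytearray()
    for j in range(length):
        byte = 0
        for i in range(8):
            byte |= (stego[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytes(range(256))               # stand-in for image pixel data
stego = hide(cover, b"pw:S1t9e9v9e")
print(reveal(stego, 12))                # b'pw:S1t9e9v9e'
```

The altered buffer differs from the cover only in its low bits, which is why a screen-saver image can carry a password list without looking any different.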

So here are a few simple rules on handling multiple passwords:

1) Try to use unique usernames and passwords for sensitive accounts. You can use the same username and password combination for low-sensitivity accounts.

2) Run through an exercise and ask yourself: what happens if this account is hacked? In other words, don't use the same username and password for everything.

3) Do NOT write down your passwords to store them, unless you have a very secure place to store the document e.g. a safe.

4) Make sure you have a secure backup copy of your passwords, using encryption or steganography.

Posted by Ron Seybold at 05:55 PM in Migration, Newsmakers | Permalink | Comments (0)

August 05, 2014

Boot Camper laying down migration steps

More than a decade after HP began its migration away from MPE and the HP 3000, there's another underway among the vendor's enterprise systems. OpenVMS customers are starting to look into what's needed to make a transition off the Digital servers. HP's announced that it will limit the newest VMS to the very latest generation of hardware. Thousands of servers are going to be stuck on an older OpenVMS.

That will be one element sparking the offers at next month's VMS Boot Camp in Bedford, Mass. We heard from a veteran HP 3000 and MPE developer, Denys Beauchemin, that his company is headed to the Boot Camp for the first time this year. There are engagements and consulting to be had in moving HP enterprise users to less HP-specific environments.

"We migrate them from VMS to Linux or other platforms," said Beauchemin, who was the last working chairman at the Interex user group before the organization went dark in 2005. "Another HP operating system comes to an end."

Boot Camp is a VMS tradition among HP's most-loyal general purpose computing community. (You can't call the 3000 community HP's any longer, now that the vendor is coming up on seven years without a working lab.) Boot Camps in the past were annual meetings to advance the science and solutions around VMS. But in more recent times they haven't been annual. Now there's migration advice on hand for the attendees. Some may view it with disdain, but when a vendor sends up signals of the end of its interest, some kinds of companies make plans right away to migrate.

There is a strong presence in the VMS community for the Stromasys virtualized server solutions. Stromasys made its bones helping VAX and Alpha customers get away from DEC-branded servers; the company was established by the leader of the Digital migration center in Europe.

VMS might be just as essential in some companies as MPE has turned out to be. This is what's made Stromasys CHARON HPA a quiet success in your own community. As VMS customers face the end of HP's support for older hardware -- where the latest OpenVMS won't run -- some of them may be looking to a virtualized version of the newer VMS systems. This strategy isn't without its efforts, too. Comparing migration to virtualization as a way into the future is likely to become a diligent task for another HP operating system customer base.

Posted by Ron Seybold at 09:16 PM in Migration, News Outta HP | Permalink | Comments (1)

August 04, 2014

Webinar advice outlines migrating in-house

The biggest share of HP 3000 applications have been written by the owners of the systems. Custom code either began out of raw materials and the needs of a company's business processes -- or it was customized from third-party applications. In the most dynamic part of the 3000's history, companies bought source code from vendors along with the software products.

That's why this Wednesday's Webinar from MB Foster will strike so close to the hearts of MPE users. Migrating Custom In-House Developed HP 3000 Applications begins at 2 PM Eastern, 11 AM Pacific. Birket Foster leads a 45-minute talk and answers questions about risk, mitigating challenges, and how to get started. Regardless of how much life a 3000 has left in any company, the transition process revolves around the applications. Moving one can teach you so much about what might be next.

"It may seem frightening to migrate, but when properly planned, 'risks' are mitigated," says the introduction to the webinar. You can register right up to the starting time by following the links at the MB Foster website.

"You can do nothing and not migrate," Foster says, "and risk that your system performs without complications or downtime. Alternatively, you can choose to migrate, rejuvenate your business applications, improve performance, and resolve operational and technical challenges before going live."

With our guidance and as part of this webinar, MB Foster will deliver to you and your management teams answers to your immediate questions, and more:

  • How do we get started?
  • What’s the process?
  • Average time it takes to migrate? 

Our goals are the same as yours:

  • Reduce risk
  • Mitigate and resolve technical challenges
  • Enhance and protect investments in an organization’s custom in-house developed applications
  • Reduce the time it takes to transition
  • Offer opportunities that current and future technologies provide
  • Increase scalability
  • Reduce year over year costs

Posted by Ron Seybold at 08:20 PM in Migration | Permalink | Comments (0)

August 01, 2014

HP doubles down on x86 Intel, not HP-UX

IBM's giving up on another market that HP continues to prize, but the latest one is more relevant to the small-sized enterprise where HP 3000 migrators hail from. (Years back, IBM sold its consumer PC business to Lenovo.) Now the modest-horsepower x86 server field's going to Lenovo, since IBM's decided to exit another Intel-based hardware market.

A longtime HP 3000 software vendor took note of this transition. He wondered aloud if the message HP now sends to its x86 prospects has a shadowy echo of another advisory, one delivered a decade ago. From our correspondent Martin Gorfinkel:

Hewlett-Packard has been running full page ads in the New York Times with the lead, “Building a cloud? Your future is uncertain.” (The “un” in “uncertain” is crossed out.) The ad goes on to say that the "IBM decision to exit the x86 server market impacts your cloud strategy." Thus, they say, move to HP and be assured that HP will not leave you stranded.

Would I be the only former user/vendor in the HP 3000 market to find that advertising hypocritical -- and further evidence that the company we once relied on no longer exists?

The hypocrisy is probably on display for any 3000 customer who was told Hewlett-Packard was making an exit from the 3000 hardware market (and by extension, the MPE software world). Every vendor exits some part of its business, once the vendor gets large enough to sell a wide array of products. IBM is dropping away from x86. HP invites enterprises to "join us to plan your forward strategy." This forward strategy of moving to Windows and Linux differs from HP advice of 10 years ago. Going to HP-UX was the strategy du jour, beyond a 3000 exit in 2004.

The full-page ads in four colors in a national daily announce a redoubling of effort to win Intel x86 business. That's going to suck up some energy and mindshare, effort that 3000 customers who followed HP forward on HP-UX are probably going to miss.

It won't come as much news to the migrated customer who's been listening to HP management comment about the future of its Business Critical Systems Unix products. "A formerly growing business" is the best that HP's chairman and CEO Meg Whitman can manage in quarterly briefings.

IBM's moving in different directions than HP these days. A recent announcement pulls Big Blue into step with Apple to win enterprise business for both companies at once. Microsoft was once the savior of Apple in hard times. Now it looks like Apple, which has a valuation well above IBM's, is going to perform some salvation. HP had a shot at working with Apple in consumer business, but it was back in the days when selling re-badged iPods seemed like a good idea.

HP's attraction of IBM customers has been a give-and-take that goes back decades. In 1995, IBM wanted HP 3000 customers to switch to AS/400s. Database issues stood in the way of that effort, but only a very few companies made the transfer once HP announced an exit of the 3000.

In the same way, HP executives are claiming wins for business in the hundreds, according to an article in eWeek:

According to Antonio Neri, senior vice president and general manager of HP's servers and networking businesses, the efforts over the past six months are paying off. The company has seen its win rate against IBM increase more than 40 percent, accounting for several hundred new deals won against Big Blue.

Customers in those deals might be the only parties who still have to figure out how they feel about this change. IBM is happy to let loose of server business that was killing its profits, according to a NY Times article. The changes say a lot about how important these big vendors consider enterprise server business. On one hand, IBM says there's no enterprise-caliber profit in selling x86. On the other, HP is happy to take on whatever customers IBM was passing over to Lenovo.

[Vendors'] businesses like PCs are losing ground to mobile devices like smartphones, and the once-formidable computer server is increasingly viewed as one more commodity piece of globe-spanning cloud computing projects from a few elite players.

“We need to get an inventor’s profit, not a distributor’s profit,” said Steve Mills, senior vice president of software and systems at IBM. “Our investment in research and development is what makes IBM go. It’s hard to do that in markets that don’t give you credit for the innovations you bring.”

It’s stark how quickly that margin fell away. A year ago, IBM was talking about a sale of its server business to Lenovo for what was reported at the time to be $6 billion. Today’s deal for $2.3 billion kept for IBM some higher-value servers, like those that perform complex data analytics. But according to Mr. Mills, it also included agreements for IBM to buy from Lenovo some of the commodity, or x86, servers for IBM’s growing cloud business.

And so there's the interesting wrinkle for anybody considering their migration off HP 3000s. IBM isn't giving up on cloud computing, not any more than HP has; both vendors want to host your applications on cloud servers they'll set up and maintain for you. (So does Amazon, of course, and probably at a better price.) Clouds might be the only way to get a 3000 migration that carries a budget similar to sticking with HP 3000s. Everyone wants to know more about security on clouds, but they want to know about security everywhere these days.

One combination you won't see is clouds and HP-UX computing. HP's own Cloud cannot host HP-UX apps, just those running Windows or Linux. It's an Intel party up there in the HP Cloud. (In a big piece of irony, Apple's OS X Unix is one of the supported HP Cloud installs.) Going forward from the 3000 with HP has more options than going forward with IBM, right? It's true if you don't count Unix. Hewlett-Packard signals with its strategy, in full-page splashes, that Unix counts for much less at enterprises.

We invite any correspondents who see the full-page ads about HP-UX enterprise to alert us. Twenty years ago, the HP 3000 customers were measuring the HP love by way of ads and alliances. To reply to the other part of Gorfinkel's question, we believe that old HP still exists. The company that 3000 customers relied upon in the '90s is repeating its behavior. It's just leaving a different OS out of its forward strategy this time. 

Gorfinkel added that he managed to put his opinions into the inbox of the HP CEO. "I got a promotional email from HP that included – if you follow enough links – an opportunity to email Meg Whitman herself," he said. "Could not resist sending the following:

I cannot believe that HP is running full page ads pointing out that IBM decided to exit the x86 server market and that HP can be trusted to keep your future certain. Is there no corporate memory of the HP exit from the HP3000 market? None of us who felt our future was certain with the most reliable, most secure hardware/software combination in the industry have forgotten. HP left us stranded with a few independent vendors working to pick up the slack. Those who know of HP's history will just laugh (or cry) over the ad; others may be fooled.

It is certainly ironic!

Posted by Ron Seybold at 06:34 PM in Migration, News Outta HP | Permalink | Comments (0)

July 28, 2014

Taking a :BYE before a :SHUTDOWN

HP 3000 systems have been supporting manufacturing for almost as long as the server has been sold. ASK Computer Systems made MANMAN in the 1970s, working from a loaned system in a startup team's kitchen. MANMAN's still around, working today.

It might not be MANMAN working at 3M, but the Minnesota Mining & Manufacturing Company is still using HP 3000s. And according to a departing MPE expert at 3M, the multiple N-Class systems will be in service there "for at least several more years."

Mike Caplin is taking his leave of 3000 IT, though. Earlier this month he posted a farewell message to the 3000-L listserve community. He explained that he loved working with the computer, so much so that he bet on a healthy career future a decade-and-a-half ago. That was the time just before HP began to change its mind about low-growth product lines with loyal owners.

Tomorrow, I’ll type BYE for the last time. Actually, I’ll just X out of a Reflection screen and let the N-Class that I’m always logged in to log me out.

I started on a Series II in 1976 and thought I died and went to heaven after working on Burroughs and Univac equipment.  The machine always ran; no downtime, easy online development, and those great manuals that actually made sense and had samples of code. I still have the orange pocket guide for the Series II.

I found this list about the same time that getting help from HP became a hit or miss. I always got a usable answer after posting a question, usually in under an hour.  So the purpose of this is to say goodbye, but also to say thank you for all of the help over the years.

I was in a headhunter’s office about 15 years ago and he told me that I needed to get away from the 3000 because I’d never be able to make a living until I was ready to retire. I told him that he may be right, but that I was counting on knowing enough to be able to stay employed and that I intended to outlast MPE. I guess I got lucky and won that argument.

We love the part of Caplin's message where he gambled on his expertise and spent the last 15 years staying employed, instead of running from the 3000. We've been doing something similar here. This summer is the 13th we're writing and publishing since HP announced its end-date with the 3000 business. It can be sporting to try to figure who'll be the last to turn out the lights, but there's a good chance we won't be working anymore when it happens to MPE.

So that devoted MPE user has typed his last BYE. But MPE -- at least in some transitional mission at 3M -- has outlasted his days with the server. The community is still full of people who will make their exits before their HP 3000s do. Terry Floyd of the Support Group has said that at some manufacturing sites, there's a good chance the expertise will retire before the hardware does a shutdown. The marvel is to be able to go into retirement operating the same flavor of enterprise server as when you performed your first COLDSTART.

Posted by Ron Seybold at 08:02 PM in History, Migration, User Reports | Permalink | Comments (0)

July 25, 2014

Pen testing crucial to passing audits

Migrated HP 3000 sites have usually just put sensitive corporate information into a wider, more public network. The next audit their business applications will endure is likely to have a security requirement far more complicated to pass. For those who are getting an IT audit on mission-critical apps hosted on platforms like Windows or Linux, we offer this guide to penetration testing.

By Steve Hardwick
CSIPP, Oxygen Finance

Having just finished installing a new cable modem with an internal firewall/router, I decided to complete the installation by running a quick and dirty online penetration test. It struck me that I am probably one of only a handful of home users who actually run such a test after installing a modem. I used the Web utility Shields Up, which provides a quick scan for open ports. Having completed the test -- successfully, I may add -- I thought it would be a good opportunity to review Pen, or penetration, testing as an essential discipline.

Penetration testing is a crucial part of any information security audit. Penetration tests are most commonly used to probe network security controls, but they can be used for testing administrative controls too. Testing administrative controls, i.e. the security rules users must follow, is commonly called social engineering. The goal of penetration testing is to simulate hacker behavior to see if the security controls can withstand the attack.

The key elements of either type of test fall into three categories:

1) Information gathering: This involves gaining as much information as possible about the target without contacting the network or the system users.

2) Enumeration: To be able to understand the target, a set of probing exercises are conducted to map out the various entry points. Once identified, the entry points are further probed to get more detail about their configuration and function.

3) Exploitation: After review of the entry points, a plan of attack is constructed to exploit any of the weaknesses discovered in the enumeration phase. The goal is to get unauthorized access to information in order to steal, modify or destroy it.

Let's take a look at how all this works in practice.

Information gathering

There are a lot of techniques that can be used to gain information about a target. A simple whois on target URLs may reveal contact information that can be used in social engineering, for example. (I once used it to get the personal cell phone number of a target by looking at the registration of their web page.)

Another commonly used method is dumpster diving. This is where trash from a target is examined for any useful information. Finding out the middle name of a CIO can often confuse an IT admin and open the door to masquerading as a company employee (I have personal experience of this one). There may even be old network diagrams that have been thrown out in the trash.

Another good technique is Google hacking. This is a technique where advanced Google commands are used to find information that may not be immediately apparent. For example, searching a website for any text files that contain the word “password.” Sounds amazing, but it can work. For more information, download a copy of this book published by the NSA.

Enumeration

For social engineering, this can be as simple as chatting to people on their smoke breaks. Other activities can include taking zoom photographs of employee badges, or walking around a building looking for unlocked exits and entry doors.

For networks, this typically comes in multiple stages. First, the externally facing portions of the network are probed. Ports are scanned to see which ones are accepting traffic -- or open. Equipment can be queried for its make and its installed software. Testers also look for the presence of other network devices; these can include air conditioning controllers, security camera recorders, and other peripherals connected directly to the Internet.
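A TCP connect probe -- the simplest form of this port scanning -- can be sketched in a few lines of Python. This is only an illustrative sketch, not one of the tools discussed below; the host and ports are placeholders, and such probes should only be run against systems you are authorized to test.

```python
import socket

def scan_port(host, port, timeout=0.5):
    """TCP connect scan: True means the port is accepting traffic ("open")."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # refused, timed out, or unreachable
        return False

if __name__ == "__main__":
    # Probe a few common service ports on a placeholder target.
    for port in (22, 80, 443):
        state = "open" if scan_port("127.0.0.1", port) else "closed"
        print(f"port {port}: {state}")
```

Real scanners such as those on the sites listed later add service fingerprinting on top of this basic open-or-closed check.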

Exploitation

An obvious question at this point: How can you tell if the person attacking your security systems is a valid tester or an actual hacker? The first step in any penetration test is to gain the approval of someone who can legitimately provide it. For example, approval should come from a CEO or CIO, not a network admin. The approval should also include the scope of any testing. This is sometimes called a get-out-of-jail-free card.

Once a list of potential entry points and their weaknesses has been compiled, a plan of attack can be put together. In the case of social engineering, this can include selecting a high-ranking employee to impersonate. Acting as a VP of Sales, especially if you include their middle name, and threatening a system admin with termination unless a password is changed can be a good way of getting into a network.

On the technical side, there are a lot of tools out there that can be used to focus on a specific make of equipment with a specific software level, especially if it has not been patched in a while. Very often the enumeration and exploitation steps are repeated as various layers of defense are breached. There is a common scene in movies where the hacker breaches one firewall after another. Each time it is a process of enumeration followed by exploitation.

Useful tools

One of the most useful tools for performing penetration testing is BackTrack. It is useful for two reasons. One, it contains a set of penetration testing tools on a live CD version of Linux (now maintained as Kali). The live CD version is very useful if you gain physical access, as you may be able to use it on an existing PC. Two, it comes with a wide set of how-to's and training videos. This is a good first stop for those looking to understand what is available and how penetration testing is done. The tools and training are targeted at both beginners and experienced practitioners.

Another site that provides a variety of tools is insecure.org. The site provides links to individual tools that are focused on various parts of pen testing, with the listing broken down by category. Both free and commercial tools appear in the site's compendium. There is also a directory of relevant articles on different security topics.

Finally, there is the Open Web Application Security Project (OWASP). This site is hosted by a non-profit organization that is solely focused on Web application security. OWASP provides a great deal of information and tools for testing and securing web applications, as these are a very common target for hackers. That can include a corporate web site, but also a web interface for controlling an HVAC unit remotely. There is even a sample flawed website, WebGoat, that can be used to hone testing skills.

Penetration testing is a very important part of any security audit. It provides a methodology for analyzing vulnerabilities in security controls within a company's infrastructure. In many cases testing will be performed by internal resources on a more frequent basis, with annual or semiannual tests conducted by qualified third-party testers. In all cases, the testing should be performed by someone who is qualified to the level required. An improperly executed pen test provides a dangerous level of false security. Plus, in many cases, security compliance will necessitate a pen test.

Posted by Ron Seybold at 04:50 PM in Migration | Permalink | Comments (0)

July 23, 2014

Migrators make more of mobile support app

A serious share of HP 3000 sites that have migrated to HP's alternative server solutions have cited vendor support as a key reason to leave MPE. Hewlett-Packard has been catering to their vendor-support needs with an iPhone/Android app, one which has gotten a refresh recently.

For customers who have Connected Products via HP's Remote Support technologies, the HP Support Center Mobile (HPSCm) app with Insight Online will automatically display devices which are remotely monitored. The app allows a manager to track service events and related support cases, view device configurations and proactively monitor HP contracts, warranties and service credits.

Using the app requires that the products be linked through the vendor's HP Passport ID. But this is the kind of attempt at improving support communication which 3000 managers wished for back in the 1990s. This is a type of mobile tracking that can be hard to find from independent support companies. To be fair, that's probably because a standard phone call, email or text will yield an immediate indie response rather than a "tell me who you are, again" pre-screener.

But HPSCm does give a manager another way to link to HP support documents (PDF files), something that would be useful if a manager is employing a tablet. That content is similar to what public audiences can see for free, or subject to contract, via the HP Business Portal. (Some of that content is locked behind an HP Passport contract ID.) This kind of support -- for example, you can break into a chat with HP personnel right from the phone or tablet -- represents the service that some large companies seem to demand to operate their enterprise datacenters.

There's also a Self-Solve feature in the HP mobile app, to guide users to documents most likely to help in resolving a support issue. Like the self-check line in the grocery, it's supposed to save time -- unless you've got a rare veggie of a problem to look up.

Remote system administration isn't unheard of in the 3000 world. Allegro Consultants' iAdmin got an update to iOS 7 this month. It supports MPE servers, as well as HP-UX, Solaris, Linux and OS X. iAdmin requires a back-end subscription for each server monitored, just like the HPSCm app. But iAdmin draws its information from a secure server in the cloud; the monitored systems feed their status to that secure server.

HPSCm offers one distinction from independent service arrangements: managers and companies can report they're getting mobile updates via HP itself -- instead of a more focused support company, like Pivital Solutions, which specializes in 3000 issues. Migrated sites have stopped caring about 3000 support, but those who are still mulling over the idea of using more modern servers might try out the HP app. They can do so if they've already registered monitoring access for servers and such via HP Passport.

Posted by Ron Seybold at 05:06 PM in Migration, News Outta HP | Permalink | Comments (0)

July 21, 2014

Maximum Disc Replacement for Series 9x7s

Software vendors, as well as in-house developers, keep Series 9x7 servers available for startup to test software revisions. There are not very many revisions to MPE software anymore, but we continue to see some of these oldest PA-RISC servers churning along in work environments.

9x7s, you may ask -- weren't they retired long ago? Less than one year ago, one reseller was offering a trio priced between $1,800 (a Series 947) and $3,200. Five years ago this week, tech experts were examining how to modernize the drives in these venerable beasts. One developer figured in 2009 they'd need their 9x7s for at least five more years. For the record, the 9x7s date from the early 1990s, so figure that some of them are beyond 20 years old now.

"They are great for testing how things actually work," one developer reported, "as opposed to what the documentation says, a detail we very much need to know when writing migration software. Also, to this day, if you write and compile software on 6.0, you can just about guarantee that it will run on 6.0, 6.5, 7.0 and 7.5 MPE/iX."

Some of the most vulnerable elements of machines from that epoch include those disk drives. 4GB units are installed inside most of them. Could something else replace these internal drives? It's a valid question for any 3000 that runs with these wee disks, but it becomes even more of an issue with the 9x7s. MPE/iX 7.0 and 7.5 are not operational on that segment of 3000 hardware.

Even though the LDEV1 drive will only support 4GB of space visible to MPE/iX 6.0 and 6.5, there's always LDEV2. You can use virtually any SCSI (SE SCSI or FW SCSI) drive, as long as you have the right interface and connector.

There's a Seagate disk drive that will stand in for something much older that's bearing an HP model number. The ST318416N 18GB Barracuda model -- which was once reported at $75, but now seems to be available for about $200 or so -- is in the 9x7's IOFDATA list of recognized devices, so it should configure straight in. Even though that Seagate device is only available as refurbished equipment, it's still going to arrive with a one-year warranty -- a lot longer than the one on any HP-original 9x7 disks still working in the community.

One developer quipped to the community, five years ago this week, "On the disc front at least that Seagate drive should keep those 3000s running, probably longer than HP remains a Computer Manufacturer."

But much like the 9x7s being offered for sale this year, HP has endured. Five years later it is still manufacturing computers, including its Unix and Linux replacement systems for any migrating 3000 users.

So to refresh drives on the 9x7s, configure these Barracuda replacement drives in LDEV1 as the ST318416N -- it will automatically use 4GB (its max visible capacity) on reboot.

As for the LDEV2 drives, the logical size limits are generous: anything under 300GB would work fine. 300GB was the limit for MPE/iX drives until HP released its "Large Disk" patches for MPE/iX, MPEMXT2/T3. But that's a patch that wasn't written for the 9x7s, as they don't use 7.5.

Larger drives were not tested for these servers because of a power and heat dissipation issue. Some advice from the community indicates you'd do better to not greatly increase the power draw above what those original equipment drives require. The specs for those HP internal drives may be a part of your in-house equipment documentation. Seagate offers a technical manual for the 18GB Barracuda drive at its website, for power comparisons.

Posted by Ron Seybold at 07:51 PM in Hidden Value, Homesteading, Migration, User Reports | Permalink | Comments (2)

July 14, 2014

Protecting a Server from DDoS Attacks

For anybody employing a more Web-ready server OS than MPE, or any such server attached to a network, Distributed Denial of Service (DDoS) presents a hot security and service-level threat. Migrating sites will do well to study up on these hacks. In the second of two parts, our security writer Steve Hardwick shares preventative measures to reduce the impacts to commodity-caliber enterprise computing such as Linux, Unix or Windows.

By Steve Hardwick, CISSP
Oxygen Finance

SecurityScrabbleDDoS attacks can be very nasty and difficult to mitigate. However, with the correct understanding of both the source and impact of these attacks, precautions can be taken to reduce their impact. This includes preventing endpoints from being used as part of a botnet to attack other networks. For example, a DDoS virus may not affect the infected computer, but it could wreak havoc on the intended target.

One legitimate question is why a DDoS attack would be used. There are two main reasons:

1) As a primary attack model. For example, a group of hacktivists want to take down a specific website. A virus is constructed that specifically targets the site and then is remotely triggered. The target site is now under serious attack.

2) As part of a multi-stage attack. A firewall is attacked by an amplified Ping Flood attack. The firewall can eventually give up and reboot (sometimes referred to as “failing over”). The firewall may reboot in a “safe” mode, fail-over, or back-up configuration. In many cases this back-up configuration contains minimal programming and is a lot easier to breach to launch the next phase of the attack. I've had experiences where the default fail-over configuration of a router was wide open -- allowing unfiltered in-bound traffic.

DDoS attacks are difficult to mitigate, as they attack several levels of the network. However, there are some best practices that can be employed to help lessen the threat of DDoS attacks.

1) Keep all software up to date. This includes end user machines, servers, security devices (IDPs, for example, as they can be targets of DDoS attacks meant to disable them), routers and firewalls. To be truly effective, an attack needs to secure a network of machines to source the attacks, so preventing these machines from becoming infected reduces the source of attacks.

2) Centralized Monitoring: By using a central monitoring system, a clear understanding of the network operation can be gained. Plus, any variance in traffic patterns can be seen; this is especially true of multi-stage attacks.

3) Apply filtering: Many firewalls contain specific sections for filtering out DDoS attacks. Disabling PING responses can also help reduce susceptibility. Additionally, firewall filtering policies must be continually reviewed. This includes an audit of the policies themselves, or a simulated DDoS attack on networks at periods of low activity. Don't forget to make sure that firewall backup configurations are reviewed and set correctly.

4) Threat intelligence: Constantly review the information regarding new threats. There are now many media services that will provide updates about newly detected threats.

5) Outsource: There are also several DDoS mitigation providers that offer services to help corporations secure their networks against DDoS attacks. A quick web search will show many of the well-known companies in this space.

6) Incident Response plan: Have a good plan to respond to DDoS-level threats. This must include an escalation path to a decision maker who can respond to a threat, since the response may include isolating critical systems from the network.
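The filtering advice in point 3 usually amounts to rate-limiting traffic per source. The token-bucket mechanism firewalls use for that can be sketched in Python; this is a toy model for illustration only (the class and parameter names are ours), since real firewalls apply it in the packet path.

```python
import time

class TokenBucket:
    """Toy per-source rate limiter of the kind firewalls apply to ICMP/SYN floods."""

    def __init__(self, rate, burst):
        self.rate = float(rate)       # tokens (packets) replenished per second
        self.capacity = float(burst)  # maximum burst size allowed
        self.tokens = float(burst)
        self.last = None              # timestamp of the previous packet

    def allow(self, now=None):
        """Return True if one packet may pass, False if it should be dropped."""
        now = time.monotonic() if now is None else now
        if self.last is not None:
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

if __name__ == "__main__":
    bucket = TokenBucket(rate=10, burst=20)   # 10 packets/sec, bursts of 20
    print("first packet allowed:", bucket.allow())
```

A flood source exhausts its bucket almost immediately and gets dropped, while legitimate traffic stays under the refill rate and passes untouched.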

Posted by Ron Seybold at 10:12 AM in Migration | Permalink | Comments (0)

July 11, 2014

Understanding the Roots of DDoS Attacks

Editor’s Note: While the summertime pace of business is upon us all, the heat of security threats remains as high as this season's temperatures. Only weeks ago, scores of major websites, hosted on popular MPE replacement Linux servers, were knocked out of service by Distributed Denial of Service (DDoS) attacks. Even our mainline blog host TypePad was taken down. It can happen to anybody employing a more Web-ready server OS than MPE, to any such server attached to a network -- so migrating sites will do well to study up on these hacks. Our security writer Steve Hardwick shares background today, and preventative measures next time.

By Steve Hardwick, CISSP
Oxygen Finance

Distributed Denial of Service (DDoS) is a virulent attack that has grown in number over the past couple of years. The NSFOCUS DDoS Threat Report 2013 recorded 244,703 incidents of DDoS attacks throughout last year. Perhaps the best way to understand this attack is to first look at Denial of Service (DoS) attacks. The focus of a DoS attack is to remove the ability of a network device to accept incoming traffic. DoS attacks can target firewalls, routers, servers or even personal computers. The goal is to overload the network interface such that it is either unable to function or it shuts down.

A simple example of such an attack is a Local Area Network Denial. This LAND attack was first seen around 1997. It is accomplished by creating a specially constructed PING packet. The normal function of ping is to take the incoming packet and send a response to the source machine, as denoted by the source address in the packet header. In a LAND attack, the source IP address is spoofed and the IP address of the target is placed in the source address location. When the target gets the packet, it will send the ping response to the source address, which is its own address. This will cause the target machine to repeatedly send responses to itself and overload the network interface. Although not really a threat today, some older versions of operating systems -- such as the still-in-enterprises Windows XP SP2, or Mac OS MacTCP 7.6.1 -- are susceptible to LAND attacks.
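The spoofing trick at the heart of a LAND packet can be illustrated by building -- never sending -- an IPv4 header in Python. This is a sketch for illustration: 192.0.2.1 is from the reserved documentation address range, the checksum is left at zero, and the function names are ours.

```python
import socket
import struct

def land_packet_header(target_ip):
    """Build (but never send) an IPv4 header whose source address is spoofed
    to equal its destination -- the defining trait of a LAND attack packet."""
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit header words
    src = socket.inet_aton(target_ip)     # spoofed: source == destination
    dst = socket.inet_aton(target_ip)
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl,
                       0,                 # type of service
                       28,                # total length: 20 header + 8 payload
                       1, 0,              # identification, flags/fragment
                       64,                # TTL
                       socket.IPPROTO_ICMP,
                       0,                 # checksum left at zero in this sketch
                       src, dst)

def is_land(header):
    """The check a filter applies: drop any packet whose source address
    equals its destination address."""
    return header[12:16] == header[16:20]

if __name__ == "__main__":
    hdr = land_packet_header("192.0.2.1")
    print("LAND packet detected:", is_land(hdr))
```

The defense is exactly the `is_land` comparison: a border device should never accept an inbound packet claiming to come from the address it is being delivered to.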

So where does the Distributed part come from? Many DoS attacks rely on the target machine to create runaway conditions that cause the generation of a torrent of traffic that floods the network interface. An alternative approach uses a collaborative group of external machines to source the attack. For example, a virus can be written that sends multiple emails to a single email address. The virus also contains code to send it to everyone in the recipient's email address book. Before long, the targeted server is receiving thousands of emails per hour -- and the mail server becomes overloaded and effectively useless.

Another DoS example is a variant of the LAND attack, a Ping flood attack. In this attack a command is issued on a machine to send ping packets as fast as possible without waiting for a response (using the -f option in the ping command, for example). If a single machine is used, then the number of packets may not overwhelm the target. However, if a virus is constructed such that the ping flood will occur at a specific time, then it can be sent to multiple machines.

When a predefined trigger time is reached, all of the infected machines start sending a ping flood to the target. The collection of infected machines, whose members are called Zombies, is known as a botnet or an amplification network. A good example is the Flashback Trojan, a contagion that was found to have infected more than 600,000 Mac OS X systems. This created a new phenomenon -- Mac-based botnets.

Before discussing some other attacks, it is necessary to understand a little more about firewalls and servers. In the examples above, the target was at the IP address layer of the network interface. However, network equipment has additional functionality on top of the IP processing function. This includes session management of the IP connections and application level functions.

Newer attacks have now started focusing on these session and application functions. This requires fewer resources and can create broader-based attacks that can target multiple network elements with a single virus. A good example of this class is the HTTP flood. For example, repeated HTTP GET requests are made to retrieve information from a web server. The sending machine does not wait for the information to be sent, but keeps sending multiple requests. The web server will try to honor each request and send out the content. Eventually the multiple requests will overload the web server. Since these look like standard HTTP requests, they are difficult to mitigate.
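The flood pattern described above -- sending GET requests while never reading the replies -- can be sketched as follows. This is an illustrative Python sketch for lab use only; the host, port, and count are placeholders, and the function name is ours.

```python
import socket

def fire_requests(host, port, count):
    """Open connections and send GET requests without waiting for replies --
    the pattern an HTTP flood uses to pile unanswered work onto a server."""
    conns = []
    for _ in range(count):
        s = socket.create_connection((host, port), timeout=2)
        s.sendall(b"GET / HTTP/1.1\r\nHost: %b\r\n\r\n" % host.encode())
        conns.append(s)   # deliberately never call recv()
    return conns
```

Because each connection looks like a legitimate browser request, the server dutifully queues a response for every one -- which is precisely why this class of attack is hard to filter at the network layer.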

Next time: Why DDoS is used, and how to reduce the threats to servers.

Posted by Ron Seybold at 06:56 PM in Migration | Permalink | Comments (0)

July 08, 2014

That MPE spooler's a big piece to replace

Migration transitions have an unexpected byproduct: They make managers appreciate the goodness that HP bundled into MPE/iX and the 3000. The included spooler is a great example of functionality which has an extra cost to replace in a new environment. Unlike Windows with MBF Scheduler, Unix has to work very hard to supply the same abilities -- and that's the word from one of the HP community's leading Unix gurus.

Bill Hassell spread the word about HP-UX treasures for years from his own consultancy. While working for SourceDirect as a Senior Sysadmin expert, he described a migration project whose manager found that Unix tools weren't performing at enterprise levels. Hassell said HP-UX doesn't filter many print jobs.

MPE has an enterprise-level print spooler, while HP-UX has a very primitive printing subsystem. hpnp (HP Network Printing) is nothing but a network card (JetDirect) configuration program. The ability to control print queues is very basic, and there is almost nothing to monitor or log print activities similar to MPE. HP-UX does not have any print job filters except for some basic PCL escape sequences such as changing the ASCII character size.

While a migrating shop might now be appreciating the MPE spooler more, some of them need a solution to replicate the 3000's built-in level of printing control. One answer to the problem might lie in using a separate Linux server to spool, because Linux supports the classic Unix CUPS print software much better than HP-UX.

The above was Glen Kilpatrick's idea as a Senior Response Center Engineer at Hewlett-Packard. Like a good support resource, Kilpatrick was a realist in solving the "where's the Unix spooler?" problem.

The "native" HP-UX scheduler / spooler doesn't use (or work like) CUPS, so if you implement such then you'll definitely have an unsupported solution (by HP anyway). Perhaps you'd be better off doing "remote printing" (look for that choice in the HP-UX System Administration Manager) to a Linux box that can run CUPS.

This advice shovels in a whole new environment to address an HP-UX weakness, however. So there's another set of solutions available from independent resources -- third-party spooling software. These extra-cost products accommodate things like default font differences between print devices, control panels, orientation and more. Michael Anderson, the consultant just finishing up a 3000-to-Unix migration, has pointed out these problems that arose during the migration.

My client hired a Unix guru (very experienced, someone I have lots of respect for) to set this up a year or more ago. They recreated all the old MPE printer LDEVs and CLASS names in CUPS, and decided on the "raw" print format so the application can send whatever binary commands to the printers. Now they have some complaints about the output not being consistent. My response was, "Absolutely! There were certain functions that the MPE spooler did for you at the device class/LDEV level, and you don't have that with CUPS on HP-UX."

Anderson has faith that learning more about CUPS will uncover a solution. "One plus for CUPS, it does make the applications more portable," he added.

There's one set of tasks that can solve the problem without buying a commercial spooler for Unix, but you'll need experience with adding PCL codes and controlling page layouts. Hassell explains:

Yes, [on HP-UX] it's the old, "Why doesn't Printer 2 print like Printer 3?" problem. So unlike the Mighty MPE system, where there is an interface to control prepends and postpends, in HP-UX you'll be editing the model.orig directory where each printer's script is located. It's just ASMOS (A Simple Matter of Scripting). The good news is that you already have experience adding these PCL codes and you understand what it takes to control logical page layouts. The model.orig directory is located in /etc/lp/interface/model.orig
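The prepend/postpend work an interface script performs can be illustrated with a small sketch. A real HP-UX model script is a shell script, so this Python version is for illustration only and the function name is ours; the two escape sequences are standard PCL (Esc E resets the printer, Esc &l1O selects landscape orientation).

```python
ESC = b"\x1b"
PCL_RESET = ESC + b"E"          # PCL: reset the printer to its defaults
PCL_LANDSCAPE = ESC + b"&l1O"   # PCL: select landscape orientation

def wrap_job(job, prepend=PCL_RESET + PCL_LANDSCAPE, postpend=PCL_RESET):
    """Bracket a print job with init/reset codes -- the work MPE's spooler
    did per device class, and an HP-UX model script must do by hand."""
    return prepend + job + postpend

if __name__ == "__main__":
    print(wrap_job(b"Hello, printer"))
```

Defining several such wrappers, one per logical printer name, is exactly the "one or more printer names, each configured differently, yet all pointing to the same device" setup described below.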

What Anderson needs to accomplish in his migration is the setup of multiple config environments for each printer, all to make "an HP-UX spooler send printer init/reset instructions to the printer, before and after the print job. In other words: one or more printer names, each configured differently, yet all point to the same device."

You won't get that for HP-UX without scripting, the experts are saying, or an external spooling server under Linux, or a third party indie spooler product. 

3000 managers who want third party expertise to support a vast array of print devices are well served to look at ESPUL and PrintPath spooling software from veteran 3000 developer Rich Corn at RAC Consulting. Corn's the best at controlling spoolfiles for 3000s, and he takes networked printing to a new level with PrintPath. Plenty of 3000 sites never needed to know all that his work could do, however -- because that MPE spooler looks plenty robust compared to what's inside the Unix toolbox.

Posted by Ron Seybold at 01:56 PM in Migration, Web Resources | Permalink | Comments (0)

July 07, 2014

User says licensing just a part of CHARON

Licensing the CHARON emulator solution at the Dairylea Cooperative has been some work, with some suppliers more willing than others to help in the transfer away from the company's Series 969. The $1.7 billion organization covers seven states and at least as many third party vendors. “We have a number of third party tools and we worked with each vendor to make the license transfers,” said IT Director Jeff Elmer.

“We won’t mention any names, but we will say that some vendors were absolutely wonderful to work with, while others were less so. It’s probably true that anyone well acquainted with the HP 3000 world could make accurate guesses about which vendors fell in which camp.”

Some vendors simply allowed a transfer at low cost or no cost; others gave a significant discount because Dairylea has been a long-time customer paying support fees. “A couple wanted amounts of money that seemed excessive, but in most cases a little negotiation brought things back within reason,” Elmer said. The process wasn’t any different than a customary HP 3000 upgrade: hardware costs were low, but software fees were significant.

“The cumulative expense of the third party software upgrades was nearly a deal-breaker,” he said. “In the end, our management was concerned enough about reliance on old disk drives that they made the decision to move forward. In our opinion it was money very well spent.”

Just as advertised, software that runs on an HP RISC server runs under CHARON. “Using those third party tools on the emulator is completely transparent,” Elmer said. “We had one product for which we had to make a command change in a job stream, and we had to make a mind-shift in evaluating what our performance monitoring software is telling us. Apart from that, it is business as usual.”

Posted by Ron Seybold at 07:48 AM in Migration, User Reports | Permalink | Comments (0)

June 30, 2014

Update: Open source, in 3000 ERP style

An extensive product roadmap is part of the Openbravo directions for this commercial open source ERP solution

Five years ago today, we chronicled the prospects of open source software for HP 3000s. We mentioned the most extensive open source repository for MPE systems, curated by Brian Edminster and his company Applied Technologies. MPE-OpenSource.org has weathered these five years of change in the MPE market and still serves open source needs. But in 2009 we also were hopeful about the arrival of OpenBravo as a migration solution for 3000 users looking for an ERP replacement for MANMAN, for example -- without investing in the balky request-and-wait enhancement tangle of proprietary software.

Open source software is a good fit for the HP 3000 community member, according to several sources. Complete app suites have emerged and rewritten the rules for software ownership. An expert consulting and support firm for ERP solutions is proving that a full-featured ERP app suite, Openbravo, will work for 3000 customers by 2010.

[Editor's note: We meant "work for 3000 customers" in the sense of being a suitable ERP replacement for MPE-based software.] 

Launched in the 1990s as a software collective at the University of Navarra, the project has evolved into Openbravo, S.L., and its software is used by manufacturing firms around the world. Openbravo is big stuff: so large that it was one of the ten largest projects on the SourceForge.net open source repository, until it outgrew SourceForge. The software, its partners and its users have their own Forge running today. In 2009, Sue Kiezel of Entsgo -- part of the Support Group's ERP consulting and tech support operations -- said, “We believe that within six to nine months, the solution will be as robust as MANMAN was at its best.”

From the looks of its deep wiki, and a quick look into the labs where development of advanced aspects such as analytics is still emerging, Entsgo's prediction has come to fruition. Managing manufacturing is well within the reach of open source solutions like Openbravo.

What we reported five years ago is no less true today. Open source is now an essential part of enterprise IT, and Entsgo's predictions were spot-on.

Open source solutions can span a wide range of organizational models, from code forges offering revisions and little else to the one-stop feel of a vendor, minus the high costs and long waits. Openbravo is in the latter category, operating with hundreds of employees after having received more than $18 million in funding. If that doesn't sound much like the Apache and Samba open source experience, then welcome to Open Source 2.0, where subscription fees have replaced software purchases and partner firms join alongside users to develop the software.

Openbravo describes the model as a "commercial open source business model that eliminates software license fees, providing support, services, and product enhancements via an annual subscription." Entsgo puts it this way: you have a company that supports the software, and you can subscribe to it; that company verifies it, upgrades it and maintains it, all under one name.

“In the 3000 community, we’re used to the independence of the open source model,” said Kiezel. “We’re used to tools that are intuitive, and if you look at us, we should be able to embrace open source more than any other community.”

Open source practices turn the enhancement experience upside down for an application. In the traditional model, a single vendor writes software at a significant investment for high profits, then accepts requests for enhancements and repairs. A complex app such as ERP might not even get 10 percent of these requests fulfilled by the average vendor.

The open source community around Openbravo operates like many open source enterprises. Companies create their own enhancements, license them back to the community, and can access bug fixes quickly—all because the ownership is shared and the source code for the app is open.

Posted by Ron Seybold at 07:33 PM in Migration, Newsmakers, Web Resources | Permalink | Comments (0)

June 27, 2014

Mansion meet takes first comeback steps

A few hours ago, the first PowerHouse user group meeting and formation of a Customer Advisory Board wrapped up in California. Russ Guzzo, the guiding light for PowerHouse's comeback, told us a few weeks ago that today's meeting was just the first of several that new owner UNICOM Global was going to host. "We'll be taking this on the road," he said, just as the vendor was starting to call users to its meeting space at the PickFair mansion in Hollywood.

We've heard that the meeting was webcast, too. It's a good idea to extend the reach of the message as Unicom extends the future of the PowerHouse development toolset.

This is a product that started its life in the late 1970s. But so did Unix, so a technology born more than 35 years ago isn't limited in its lifespan. One user, IT Director Robert Coe at HPB Management Ltd. in Cambridge, wants to see PowerHouse take a spot at the table alongside serious business languages. Coe understands that going forward might mean leaving some compatibility behind. That's a step Hewlett-Packard could never take with MPE and the HP 3000. Some say that decision hampered the agility of the 3000's technical and business future at HP. Unix, and later Linux, could become anything, unfettered by compatibility.

Coe, commenting on the LinkedIn Cognos Powerhouse group, said his company has been looking at a migration away from Powerhouse -- until now.

I would like to see Powerhouse developed into a modern mainstream language, suitable for development of any business system or website. If this is at the expense of backwards compatibility, so be it. We are developing new systems all the time, and at the moment are faced with having to use Java, c# or similar. I would much rather be developing new systems in a Powerhouse based new language, with all the benefits that provides, even if it is not directly compatible with our existing systems. 

The world would be a better place if Powerhouse was the main platform used for development! I hope Unicom can provide the backing, wisdom and conviction to enable this to happen.

There were many business decisions made about the lifecycle and sales practices for PowerHouse over the last 25 years that hampered the future of the tool. Coe found technical faults with the alternatives to PowerHouse -- "over-complicated, hard to learn, slow to develop, difficult to maintain, prone to bugs, with far too much unnecessary and fiddly syntax."

But he was also spot-on in tagging the management shortcomings of the toolset's previous owners:

  • Cognos concentrated on BI tools, as there appeared to be more money in them 
  • IBM bought Cognos for its BI tools for the same reason 
  • Powerhouse development more or less stopped many years ago 
  • Licences were very expensive compared to other languages, which were often open source and free 
  • Powerhouse was not open source and therefore didn’t get the support of the developer community 
  • Backwards compatibility was guaranteed, stifling major development

Powerhouse is a far superior platform for development of business systems. I cringe at the thought of having to use the likes of Java to replace our current systems or to develop our future systems!

Bob Deskin, hired by UNICOM to advise the new owners on a growth strategy for the toolset, reminded Coe that things like Java, Ruby, Python and Perl were not purpose-built for business.

Don't be too hard on those other languages. Some of them aren't what I would call complete programming languages. Some are scripting languages. And some are trying to be all things to all people. PowerHouse was always focused on business application development. Hang in for a while longer and watch what UNICOM can do.

Posted by Ron Seybold at 07:51 PM in Homesteading, Migration, Newsmakers | Permalink | Comments (0)

June 25, 2014

What level of technology defines a legacy?

Even alternatives to the HP 3000 can be rooted in legacy strategy. This week Oracle bought Micros, a purchase that's the second-largest in Oracle history. Only buying Sun has cost Oracle more, to this point in the company's legacy. The twist in the story is that Micros sells a legacy solution: software and hardware for the restaurant, hospitality and retail sectors. HP 3000s still serve a few of those outlets, such as duty-free shops in airports.

Micros "has been focused on helping the world’s leading brands in our target markets since we were founded in 1977," said its CEO. The Oracle president who's taking on this old-school business is Mark Hurd, an executive who calls to mind other aspects of legacy. Oracle's got a legacy to bear since it's a business solution that's been running companies for more than two decades. Now the analysts are saying Oracle will need to acquire more of these customers. Demand for installing Oracle is slowing, they say.

In the meantime, some of the HP marketplace is reaching for ways to link with Oracle's legacy. There's a lot of data in those legacy databases. PowerHouse users, invigorated by the prospects of new ownership, are reaching to find connection advice for Oracle. That's one legacy technology embracing another.

Legacy is an epithet that's thrown at anything older. It's not about one technology being better than another. Legacy's genuine definition involves utility and expectations.

It's easy to overlook that like Oracle, Unix comes in for this legacy treatment by now. Judging only by the calendar, it's not surprising to see the legacy tag on an environment that was just finding its way in the summer of 1985, while HP was still busy cooking up a RISC revolution that changed the 3000's future. Like the 3000's '70s ideal of interactive computing -- instead of batch operations -- running a business system with Unix in the 1980s was considered a long shot.

An article from a 1985 Computerworld, published the week that HP 3000 volunteer users were manning the Washington DC Interex meet, considered commercial Unix use something to defend. Like some HP 3000 companies of our modern day, these Unix pioneers were struggling to find experienced staff. Unix was unproven, and so bereft of expertise. At least MPE has proven its worth by now.

In the pages of that 1985 issue, Charles Babcock reported on Unix-for-business testimony.

NEW YORK -- Two large users of AT&T Unix operating systems in commercial settings told attendees at the Unix Expo conference that they think they have made the right choice. Both said, however, that they have had difficulty building a professional staff experienced in Unix.

The HP 3000 still ran on MPE V that month. Apple's Steve Jobs had just resigned from the company he founded. Legacy was leagues away from being a label for Unix, or even Apple, in that year. It was so far back that Oracle wondered why it would ever need to build a version of its database for HP 3000s. IMAGE was too dominant, especially as a database bundled with a business server. The 3000, even in just its second decade of use, was already becoming a legacy.

That's legacy as in a definition from Brian Edminster of Applied Technologies. The curator of open source solutions, and caretaker of a 3000 system for World Duty Free Group, shared this.

A Legacy System is one that's been implemented for a while and is still in use for a very important reason: Even if it's not pretty -- It works.

A Legacy System is easy to identify in nearly any organization:  It's the one that is constructed with tools that aren't 'bleeding edge.'

Posted by Ron Seybold at 09:22 PM in History, Migration | Permalink | Comments (0)

June 20, 2014

Time to Sustain, If It's Not Time to Change

In the years after HP announced its 3000 exit, I helped to define the concept of homesteading. Not exactly new, and clearly something expected in an advancing society. Uncle Lars' homestead, at left, showed us how it might look with friendly droids to help on Tatooine. The alternative 3000 future that HP trumpeted in 2002 was migration. But it's clear by now that the choice between moving and staying put was a fuzzy picture for MPE users' future.

What remains at stake is transformation. Even this week, companies relying on MPE, as well as those making a transition, are judging how they'll look in a year, or three, or five. We've just heard that software rental is making a comeback at one spot in the 3000 world. By renting a solution to remain on a 3000, instead of buying one, a manager is planning first to sustain their practices -- and then to change.

Up on the LinkedIn 3000 Community page I asked if the managers and owners were ready to purchase application-level support for 3000 operations. "It looks like several vendors want to sell this, to help with the brain-drain as veteran MPE managers retire." I asked that question a couple of years ago, but a few replies have bubbled up. Support has changed with ownership of some apps, such as Ecometry, and with some key tools such as NetBase.

"Those vendors will now get you forwarded to a call center in Bangalore," said Tracy Johnson, a veteran MPE manager at Measurement Specialties. "And by the way, Quest used to be quick on support. Since they got bought by Dell, you have to fill in data on a webpage to be triaged before they'll even accept an email."

Those were not the kinds of vendors I was suggesting. Outside companies will oversee and maintain MPE apps created in-house, once the IT staff changes enough to lose 3000 expertise. But that led to another reply about why anyone might pursue the course to Sustain, when the strategy to Change seems overwhelming.

Managed Business Systems, one of the original HP Platinum Migration partners, was ready to do this sustaining as far back as a decade ago. Companies like the Support Group, Pivital Solutions -- they're still the first-line help desks and maintainers for 3000 sites whose bench has grown thin. Fresche Legacy made a point of offering this level of service, starting from the last days when it was called Speedware. There are others willing to take over MPE app operations and care, and some of these vendors have feet planted firmly in the Change camp, as well as staking out the Sustain territory.

Todd Purdum of Sherlock Systems wondered on LinkedIn if there really was a community that would take on applications running under MPE. We ran an article last year about the idea of a backstop in case your in-house expert got ill or left the company. Five years earlier, we could point to even smaller companies, and firms like 3K Ranger and Pro 3K are available to do that level of work. Purdum, by his figuring, believes such backstops are rare.

Although I agree with the need for sustained resources to keep an HP3000 running, I'm not sure that "several vendors" can provide this. We have been in the business for over 23 years, and as a leader in providing hardware and application support for HP 3000s and MPE, I don't see many other vendors truly being capable of providing this.

Purdum asked, tongue-in-cheek, if there was a 3000 resurgence on the way he didn't see coming. No one has a total view of this market. But anecdotal reports are about all anyone has been able to use for most of a decade. Even well-known tool vendors are using independent support companies for front-line support. Purdum acknowledged that the support would be there, but wondered who'd need it.

Customers who use MPE (the HP 3000) know their predicament, and offering more salvation does not help them move into the right direction. I am only a hardware support company (that had to learn all HP 3000 applications) and it disappoints me a little that the companies you mentioned, most of which are software companies, haven't developed software that will allow these folks to finally move on and get off of this retired platform. 

I can't change it, I just sustain it... applications and all.

Sustaining mission-critical use of MPE is the only choice some companies have in 2014. Their parent corporations aren't ready for a hand-off, or the budget's not right, or yes, their app vendor isn't yet ready with a replacement app. That's what's leading to software rentals. When a company chooses to homestead, it must build a plan to Sustain. HP clearly retired its 3000 business more than three years ago. But that "final" moving on, into the realms of real change, follows other schedules around the world. On the world of Tatooine, Lars first changed by setting up a moisture farm, then sustained. And then everything changed for him and Luke Skywalker. Change-sustain-change doesn't have a final state.

Posted by Ron Seybold at 07:07 PM in Homesteading, Migration, User Reports | Permalink | Comments (0)

June 19, 2014

Making Sure No New Silos Float Up

Cloud computing is a-coming, especially for the sites making their migrations off of the HP 3000. But even if an application is making a move to a cloud solution, shouldn't its information remain available for other applications? Operational systems remain mission-critical inside companies that use things like Salesforce.

To put this question another way, how do you keep the information inside a Salesforce.com account from becoming a silo: a container that doesn't share its contents with other carriers of data?

The answer is to find a piece of software that will extract the data in a Salesforce account, then transform it into something that can be used by another database: Oracle, SQL Server, Eloquence, even DB2. All are active in the community that was once using TurboIMAGE. Even though Salesforce is a superior CRM application suite, it often operates alongside other applications in a company. (You might call these legacy apps, if they're older than your use of Salesforce. That legacy label is kind of a demerit, though, isn't it?)

Where to find such an extraction tool? A good place to look would be providers of data migration toolsets. This is a relatively novel mission, though. It doesn't take long for the data to start to pile up in Salesforce. Once it does, the Order Entry, CRM, Shipping, Billing and Email applications are going to be missing whatever was established in Salesforce initially. The popular term for this kind of roadblock is Cloud Silo.

It reminds me of the whole reason for getting data migration capabilities, a reason nearly as old as what was once called client-server computing. Back in the days when desktop PCs became a popular tool for data processing, information could start out on a desktop application, not just from a terminal. Getting information from one source to another, using automation, satisfies the classic mission of  "no more rekeying." 

It's a potent and current mission. Just because Salesforce is a new generation app, and based in the cloud, doesn't make it immune to rekeying. You need a can opener, if you will, to crack open its data logic. That's because not every company is going all-in on Salesforce.

The trick here is to find a data migration tool that understands and works well with the Salesforce API. This Application Program Interface is available to independent companies, but embracing it will require some advanced tech help for anyone who's limited to a single-company, in-house expertise pool. You want to hire someone, or buy something, that has worked with an API integration before now.

"How do you get stuff in and out of Salesforce? It's not something unto itself," says Birket Foster. "It's a customer relationship management system. It's nice to have customer data in Salesforce, but you want to get it into your operational systems later."

You want to get the latest information out of Salesforce, he adds, and nobody wants to re-key it. "That started in 1989," Foster says, "when we tried to help people from re-keying spreadsheets." For example, a small business data capture company, one that helps other small businesses get through the process, needs a way to get the Salesforce data into its application. Even if that other app is based in the cloud, it needs Salesforce data.
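The extract-and-transform step Foster describes can be sketched in a few lines. This is a hedged illustration only: the JSON below mimics the general shape of a Salesforce REST query result, but the object and field names (Account, Name, Phone) are illustrative assumptions rather than a real schema, and a production tool would call the live API instead of parsing a canned payload.

```python
import json

# A canned response shaped like a Salesforce REST query result.
# Field names here are illustrative, not a real org's schema.
raw = json.loads("""
{"totalSize": 2, "done": true, "records": [
  {"attributes": {"type": "Account"}, "Name": "Acme", "Phone": "555-0100"},
  {"attributes": {"type": "Account"}, "Name": "Globex", "Phone": "555-0101"}
]}
""")

def flatten(result):
    """Strip the API's bookkeeping fields and yield plain rows,
    ready to load into an operational database -- no rekeying."""
    for rec in result["records"]:
        yield {k: v for k, v in rec.items() if k != "attributes"}

rows = list(flatten(raw))
print(rows[0]["Name"])   # -> Acme
```

The point of the sketch is the shape of the work, not the library: whatever tool a site buys, it has to turn the API's nested response into flat rows a target database can accept.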

Silos are great for storing grains, but a terrible means of sharing them. The metaphor gets a little wiggly when you imagine a 7-grain bread being baked -- that'd be your OE or Shipping system, with data blended alongside Salesforce's grains of information. The HP 3000 once had several bakery customers which mixed grains -- Lewis Bakeries (migrated using the AMXW software) and Twinkie-maker Continental/Interstate Brands. They operated their mission-critical 3000s too long ago to imagine cloud computing, though.

Clouds deliver convenience, reliability, flexibility. Data migration chutes -- to pull the metaphor to its limit -- keep information floating, to prevent cloud silos from rising up.

Posted by Ron Seybold at 09:57 PM in Migration | Permalink | Comments (0)

June 18, 2014

The Long and Short of Copying Tape

Is there a way in MPE to copy a tape from one drive to another drive?

Stan Sieler, co-founder of Allegro Consultants, gives both long and short answers to this fundamental question. (Turns out one of the answers is to look to Allegro for its TapeDisk product, which includes a program called TapeTape.)

Short answer: It’s easy to copy a tape, for free, if you don’t care about accuracy/completeness.

Longer answer: There are two “gotchas” in copying tapes ... on any platform.

Gotcha #1: Long tape records

You have to tell a tape drive how long a record you wish to read. If the record is larger, you will silently lose the extra data.

Thus, for any computer platform, one always wants to ask for at least one byte more than the expected maximum record — and if you get that extra byte, strongly warn the user that they may be losing data.  (The application should then have the internal buffer increased, and the attempted read size increased, and the copy tried again.)

One factor complicates this on MPE: the file system limits the size of a tape record you can read.  STORE, on the other hand, generally bypasses the file system when writing to tape and it is willing to write larger records (particularly if you specify the MAXTAPEBUF option).

In short, STORE is capable of writing tapes with records too long to read via file system access. Free programs such as TAPECOPY use the file system; thus, there are tapes they cannot correctly copy.

Gotcha #2: Setmarks on DDS tapes

Some software creates DDS tapes and writes “setmarks” (think of them as super-EOFs). Normal file system access on the 3000 will not see setmarks, nor be able to write them.

Our TapeDisk product for MPE/iX (which includes TapeTape) solves both of the above problems. As far as I know, it’s the only program that can safely and correctly copy arbitrary tapes on an HP 3000.
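Gotcha #1's defensive read can be sketched in a few lines. This is a hedged illustration only: it uses an in-memory stream as a stand-in for a record-oriented tape device, and the read_record helper and its behavior are assumptions for demonstration, not part of TapeDisk or any MPE utility.

```python
import io

def read_record(dev, expected_max):
    """Ask for one byte more than the expected maximum record size.
    If that extra byte comes back, the real record was longer than
    expected and a plain read would have silently truncated it."""
    data = dev.read(expected_max + 1)
    if len(data) > expected_max:
        raise RuntimeError(
            "record exceeds %d bytes -- grow the buffer and retry" % expected_max)
    return data

# Stand-in for a tape drive: an in-memory stream holding one record.
tape = io.BytesIO(b"A" * 100)
rec = read_record(tape, 128)       # record fits: 100 <= 128
print(len(rec))                    # -> 100

long_tape = io.BytesIO(b"B" * 200)
try:
    read_record(long_tape, 128)    # record too long: warn, don't truncate
except RuntimeError as e:
    print("warning:", e)
```

A real copier would respond to the warning by enlarging its buffer and re-reading, exactly as the longer answer above prescribes; on MPE, the file system's own record-size limit adds the further wrinkle the answer describes.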

Posted by Ron Seybold at 04:53 PM in Hidden Value, Migration | Permalink | Comments (0)

June 17, 2014

How a Fan Can Become a Migration Tool

We heard this story today in your community, but we'll withhold the names to protect the innocent. A Series 948 server had a problem, one that was keeping it offline. It was a hardware problem, one on a server that was providing archival lookups. The MPE application had been migrated to a Windows app five years ago. But those archives, well, they often just seem to be easier to look up from the original 3000 system.

There might be some good reasons to keep an archival 3000 running. Regulatory issues come to mind first. Auditors might need original equipment paired with historic data. There could be budget issues, but we'll get to that in a moment.

The problem with that Series 948: it was overheating. And since it was a server with more than 17 years of service, repairing it required a hardware veteran, plus parts. All of which is available, but "feet on the street" at the server's location can be a challenge. (At this point a handful of service providers are wondering where this prospective repair site might be. The enterprising ones will call.)

But remember this is an archival 3000. Budget, hah. This would be the time to find a fan to point at that overheating 17-year-old system. That could be the first step in a data migration, low-tech as it might seem.

From the moment the fan makes it possible to boot up, this could be the time to get that archival data off the 3000. Especially since the site's already got a replacement app on another piece of newer hardware, up and running. There's a server there, waiting to get a little more use.

Moving data off an archival server is one of the very last steps in decommissioning. If you've got a packaged application, there are experts in your app out there -- all the big ones, like Ecometry, MANMAN, Amisys -- that can help export that data for you. And you might get lucky and find that's a very modest budget item. You can also seek out data migration expertise, another good route.

But putting more money into a replacement Hewlett-Packard-branded 3000 this year might be a little too conservative. It depends on how old the 3000 system is, and what the hardware problem would be. If not a fan, then maybe a vacuum cleaner or shop vac could lower the temperature of the server, with a good clean-out. Funk inside the cabinet is common, we've seen.

Overheating old equipment could be a trigger to get the last set of archives into a SQL Server database, for example, one designated only for that. Heading to a more modern piece of hardware might have led you into another kind of migration, towards the emulator, sure. But if your mission-critical app is already migrated, the fan and SQL Server -- plus testing the migrated data, of course -- might be the gateway to an MPE-free operation, including your archives.

Posted by Ron Seybold at 06:08 PM in Migration, User Reports | Permalink | Comments (0)

June 13, 2014

User group's mansion meet sets deadline



June 15 is the first "secure your spot" registration date

PowerHouse customers, many of whom -- like those at Boeing -- are still using their HP 3000 servers, have been invited to the PickFair mansion in Hollywood for the first PowerHouse user conference. The all-day meeting is Friday, June 27, but the deadline to ensure a reserved space passes at the end of June 15.

That's a Sunday, and Father's Day at that, so the PowerHouse patriarchy is likely to be understanding about getting a reservation in on June 16. Russ Guzzo, the marketing and PR powerhouse at new owners Unicom Global, said the company's been delighted at the response from customers who've been called and gathered into the community.

"I think it makes a statement that we're in it for the long haul," Guzzo said of gathering the customers, "and that the product's no longer sitting on the shelf and collecting dust. Let's talk." 

We're taking on a responsibility, because we know there are some very large companies out there that have built their existence around this technology. It's an absolute pleasure to be calling on the PowerHouse customers. Even the inactive ones. Why? Because they love the technology, and I've heard, "Geez, I got a phone call?"

Register at unicomglobal.com/PowerHouseCAB -- that's shorthand for Customer Advisory Board. It's a $500 ticket, or multiple registrations at $395 each, with breakfast and lunch included. More details, including a handsome flyer for justifying a one-day trip, at the event's webpage.

Posted by Ron Seybold at 11:34 PM in Homesteading, Migration, Newsmakers | Permalink | Comments (0)

June 11, 2014

HP to spin its R&D future with The Machine

Calling it a mission HP must accomplish because it has no other choice, HP Labs director Martin Fink is announcing a new computer architecture that Hewlett-Packard will release within two years, or bust. Fink, who was chief of the company's Business Critical Systems unit before being handed the Labs job in 2012, is devoting 75 percent of HP Labs' resources to creating a computer architecture, the first since the company built the Itanium chip family design with Intel during the 1990s.

A BusinessWeek article by Ashlee Vance says the product will utilize HP breakthroughs in memory (memristors) and a process to pass data using light, rather than the nanoscopic copper traces employed in today's chips. Fink came to CEO Meg Whitman with the idea, then convinced her to increase his budget.

Fink and his colleagues decided to pitch Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. "People in Labs see this as a once-in-a-lifetime opportunity."

Fast, cheap, persistent memory is at the heart of what HP hopes to change about computing. In the effort to build The Machine, however, the vendor harks back to days when computer makers created their own technology in R&D organizations as a competitive advantage. Commodity engineering can't cross the Big Data gap created by the Internet of Things, HP said at Discover today. The first RISC designs for HP computers, launched in a project called Spectrum, were the last such creation that touched HP's MPE servers.

Itanium never made it to MPE capability. Or put another way, the environment that drives 3000-using businesses never got the renovation it deserved to use the Intel-HP created architecture. Since The Machine is coming from HP's Labs, it's likely to have little to do with MPE, an OS the vendor walked away from in 2010. The Machine might have an impact on migration targets, but HP wants to change the way computing is considered, away from OS-based strategies. Even that dream is tempered by the reality that The Machine is going to need operating systems -- ones that HP is building.

OS compatibility was one reason the Itanium project didn't pan out the way HP and Intel hoped, of course. By the end of the last decade, Itanium had carved out a place as a specialized product for HP's own environments, as well as an architecture subsidized by Fink's plans to pay Intel to keep developing it. The Machine seems to be reaching for the same kind of "change the world's computing" impact that HP and Intel dreamed about with what was once called the Tahoe project. In a 74-year timeline of HP innovation alongside the BusinessWeek article, those dreams have been revised toward reality.

PA-RISC is denoted in a spiraling timeline of HP inventions that is chock-a-block with calculator and printing progress. The HP 2116 predecessor to the HP 3000 gets a visual in 1969, and Itanium chips are chronicled as a 2001 development.

The Machine, should it ever come to the HP product line, would arrive in three to six years, according to the BusinessWeek interview, and Fink isn't being specific about delivery. But with the same chutzpah he displayed in running Business Critical Systems into critical headwinds of sales and customer retention, he believes HP is the best place for tech talent to try to remake computing architecture.

According to the article, three operating systems are being designed for the architecture: one open source and HP-built, another a variant of Linux, and a third based on Android for mobile devices. That's the same number of OS versions HP supported for its first line of computers -- RTE for real time, MPE for business, and HP-UX for engineering, and later business. OS design, once an HP staple, needs to reach much higher to meet the potential of new memory, in the same way that MPE XL made use of innovative memory in PA-RISC.

Fink says these projects have burnished HP’s reputation among engineers and helped its recruiting. “If you want to really rethink computing architecture, we’re the only game in town now,” he said. Greg Papadopoulos, a partner at the venture capital firm New Enterprise Associates, warns that the OS development alone will be a massive effort. “Operating systems have not been taught what to do with all of this memory, and HP will have to get very creative,” he says. “Things like the chips from Intel just never anticipated this.” 

Posted by Ron Seybold at 06:24 PM in Migration, News Outta HP | Permalink | Comments (0)

June 10, 2014

Security patches afloat for UX, for a price

If an IT manager had the same budget for patches they employed while administering an HP 3000, today they'd have no patches at all for HP's Unix replacement system. That became even more plain when the latest Distributed Denial of Service (DDoS) alert showed up in my email. You never needed a budget to apply patches while HP 3000s were for sale from the vendor. Now HP's current policy will have an impact on the value of used systems -- whether they're Unix-based, or Windows ProLiant replacements for a 3000. Any system's going to require a support contract for patches.

For more than 15 years, HP's been able to notify customers when any security breach puts its enterprise products at risk. For more than five years, one DDoS exploit after another has triggered these emails. But over the past year, Hewlett-Packard has insisted that a security hole is a good reason to pay for a support contract with the vendor.

The HP 3000 manager has better luck in this regard than HP's Unix system owners. Patches for the MPE/iX environment, even in their state of advancing age, are distributed without charge. A manager needs to call HP and be deliberate to get a patch. The magic incantation when dealing with the Response Center folks is to use transfer code 798. That’ll get you to an MPE person. And there's not an easy way for an independent support company to help in the distribution, either. HP insisted on that during a legal action last spring.

In that matter, a support company -- one that is deep enough to be hiring experts away from HP's support team -- was sued for illegal distribution of HP server patches. HP charged copyright infringement because the service company had downloaded patches -- and HP claimed those patches were redistributed to the company's clients. 

The patch policy is something to budget for while planning a migration. Some HP 3000 managers haven't had an HP support contract since the turn of this century. Moving to HP-UX will demand one, even if a more-competent indie firm is available to service HP-UX or even Windows on a ProLiant system. See, even the firmware patches aren't free anymore. Windows security patches continue to be free -- that is, they don't require a separate contract. Not even for Windows XP, although that environment has been obsoleted by Microsoft.

HP said the lawsuit was resolved when the support company agreed to suspend the practices alleged in the suit.

HP, like Oracle (owner of Sun) and other OS manufacturers, has chosen to restrict updates, patches, and now firmware to only those customers with a current support agreement. Indie support companies can recommend patches; in fact, they're a great resource for figuring out which patch will fix problems without breaking much else. But customers are required to have their own support agreement in order to download and install such patches and updates.

Even following the links in the latest HP emails landed me in a "you don't have a support agreement to read this" message, rather than the update about DDoS exposure. It's more than the patches for migration platforms that HP's walled away from the customer base. Now even the basic details of what's at risk are behind support paywalls.

The extra cost is likely to be felt most in the low to midrange end of the user community. Dell's not getting caught up in what HP calls an industry trend to charge for repairing malformed software or OS installations that get put at risk. Dell offers unrestricted access to BIOS and software updates for its entire server, storage, and networking line.

Posted by Ron Seybold at 06:29 PM in Migration, News Outta HP | Permalink | Comments (0)

June 04, 2014

Don't wait until a migrate to clean up

Not long ago, the District Court in Topeka, the capital of Kansas, made a motion to turn off its HP 3000s. During the report on that affair -- one that took the court system offline for a week -- the IT managers explained that part of the migration process would include cleaning up the data being moved off an HP 3000.

This data conversion is one of the most important attributes of this project and is carefully being implemented by continuously and repeatedly checking thousands of data elements to ensure that all data converted is “clean” data which is essential to all users. When we finally “go live,” we would sincerely appreciate your careful review of data as you use the system.

Not exactly a great plan, checking on data integrity so late in the 3000's lifecycle, said ScreenJet's Alan Yeo. The vendor who supplies tools and service for migrations has criticism for the court's strategy statement that "we either move on to another system or we go back to paper and pen."

"Interesting, that pen and paper comment," Yeo said. "It has the ring of someone saying that we have an old car that's running reliably, but because it might break down at some time, the only options are to go back to walking or buy a Fisker." The Fisker, for those who might not know, was a car developed in 2008 as one of the very first plug-in hybrid models. About 2,000 were built before the company went bankrupt. Moving to any new technology, on wheels or online, should be an improvement over what's in place -- not an alternative to ancient practices.

"Oh, and what's all this crap about having to clean the data?" Yeo added. "That's like saying I'll only bother cleaning the house that I live in when I move. Yes, sure you don't want to move crap in a migration. But you probably should have been doing some housekeeping whilst you lived in the place. Blaming the house when you got it dirty doesn't really wash!"

Posted by Ron Seybold at 09:58 PM in Migration | Permalink | Comments (0)

May 27, 2014

Does cleaning out HP desks lift its futures?

Migration sites in the 3000 community have a stake in the fortunes of Hewlett-Packard. We're not just talking about the companies that already have made their transition away from MPE and the 3000. The customers who know they're not going to end this decade with a 3000 are watching the vendor's transformation this year, and over the next, too.

It's a period when a company that bloated to more than 340,000 employees will see its workforce cut to below 300,000 when all of the desks are cleaned out. The HP CEO noted that the vendor has been through massive change in the period while HP was cleaning out its HP 3000 desks. During the last decade, Meg Whitman pointed out last week, Compaq, EDS, Autonomy, Mercury Interactive, Palm -- all became Hewlett-Packard properties. Whitman isn't divesting these companies, but the company will be shucking off 50 percent more jobs than first planned.

Some rewards arrived in the confidence of the shareholders since the announcement of 16,000 extra layoffs. HP stock is now trading at a 52-week high. It's actually priced at about the same value as the days after Mark Hurd was served walking papers in 2010. Whitman's had to do yeoman work in cost-cutting to keep the balance sheet from bleeding, because there's been no measurable sales growth since all 3000 operations ceased. It's a coincidence, yes, but that's also a marker the 3000 customer can recall easily.

When you're cutting out 50,000 jobs -- the grand total HP will lay off by the end of fiscal 2015 in October of next year -- there's no assured way of retaining key talent. Whitman said during the analyst conference call that everybody in HP has the same experience during these cuts. "Everyone understands the turnaround we're in," she said, "and everyone understands the market realities. I don't think anyone likes this."

These are professionals working for one of the largest computer companies in the world. They know how to keep their heads down in the trenches. But if you're in a position to make a change in your career, a shift away from a company like HP that's producing black ink on its ledger through cuts, you want to engage in work you like -- by moving toward security. In the near term, HP shareholders are betting that security will be attained by the prospect of a $128 billion company becoming nimble, as Whitman vowed last week.

In truth, becoming nimble isn't going to be as important to an HP enterprise customer as becoming innovative. Analysts are identifying cloud computing as the next frontier, one that's already got profitable outposts and the kind of big-name users HP's always counted in its corral. During an interview with NPR on the day after the job cuts rolled out, Michael Regan of Bloomberg News pointed out that most of HP's businesses have either slipped, like printers and PCs, or are under fire.

Servers are under a really big threat from cloud computing. HP formerly, you know, their business was to sell you the server so that you can store all your data yourself and have customers access the data right off of your server from the Internet.

The big shift over the last few years has been to put it on a cloud, where basically companies are renting space on a server, and consumers a lot of times aren't even buying any Web applications. They're renting them over the cloud, too. All three of [HP's] main business lines are really under a lot of competition from tablets and cloud computing.

This isn't good news for any customer whose IT career has been built around server management and application development and maintenance. Something will be replacing those in-house servers at any company that will permit change to overturn its technology strategy.

Cloud computing is a likely bet to replace traditional server architectures at companies using the HP gear. But it's a gamble right now to believe that HP's strength in traditional computing will translate to any dominance in cloud alternatives. IBM and Amazon and Google are farther in front on these offerings. That's especially true for the small to midsize company where an HP 3000 is likely to remain working this year.

During the NPR interview, Regan took note of the good work that's come from Whitman's command of the listing HP ship. But the stock price recovery is actually behind both the Standard & Poor's average and the average for technology firms during Whitman's tenure. She's floating desks out the door, but that probably won't be enough to float the growth trend line upward. When extra cuts are needed to keep all those shareholders happy, one drooping branch could be the non-industry standard server business.

Any deeper investment in any HP strategy that relies on catching up with non-standard technology should float away from procurement desks for now.

Posted by Ron Seybold at 11:01 PM in Migration, News Outta HP | Permalink | Comments (0)

May 22, 2014

HP's migration servers stand ground in Q2

ESG HP Q2

The decline of HP's 3000 replacement products has halted
(click on graphic for details)

CEO Meg Whitman's 10th quarterly report today promised "HP's turnaround remains on track." So long as that turnaround simply must maintain sales levels, she's talking truth to investors. During a one-hour conference call, the vendor reported that its company-wide earnings before taxes had actually climbed by $240 million versus last year's second quarter. The Q2 2014 numbers also show that the quarter-to-quarter bleeding of the Business Critical Systems products has stopped.

But despite that numerical proof, Whitman and HP have already categorized BCS, home of the Linux and HP-UX replacement systems for 3000, as a shrinking business. The $230 million in Q2 sales from BCS represent "an expected decline." And with that, the CEO added that Hewlett-Packard believes its strategy for enterprise servers "has put this business on the right path."

The increased overall earnings for the quarter can be traced to a robust period for HP printers and PCs. Enterprise businesses -- the support and systems groups that engage with current or former 3000 users -- saw profits drop by more than 10 percent. HP BCS sales also fell, by 14 percent versus last year's Q2. But for the first time in years, the numbers hadn't dropped below the previous quarter's report.

The decline of enterprise server profits and sales isn't a new aspect of the HP picture. But the vendor also announced a new round of an extra 10,000-15,000 job eliminations. "We have to make HP a more nimble company," Whitman said. CFO Cathie Lesjak added that competing requires "lean organizations with a focus on strong performance management." The company started cutting jobs in 2012, and what it calls restructuring will eliminate up to 50,000 jobs before it's over in 2015.

Enterprise business remains at the heart of Hewlett-Packard's plans. It's true enough that the vendor noted the Enterprise Systems Group "revenue was lower than expected" even before the announcement of $27.3 billion overall Q2 revenues. The ESG disappointments appeared to be used to explain stalled HP sales growth.

But those stalled results are remarkable when considered against what Whitman inherited more than two years ago. Within a year, HP bottomed out its stock price at under $12 a share. It was fighting with an acquired Autonomy about how much the purchased company was worth, and was shucking off a purchase of Palm that would have put the vendor into the mobile systems derby.

If nothing else, Whitman's tenure as CEO -- now already half as long as Mark Hurd's -- contains none of the hubris and allegations of the Hurd mentality. After 32 months on the job, Whitman has faced what analysts are starting to call the glass cliff -- a desperate job leading a company working its way back from the brink, offered to a woman.

As the conference call opened on May 22, HP's stock was trading at close to three times its value during that darkest month of November, 2012. At $31 a share valuation, HPQ is still paying a dividend to shareholders. Meanwhile, the company said it has "a bias toward share repurchases" planned for the quarters to come.

There's still plenty of profit at HP. But the profits for the Enterprise Group, which includes blades and everything that runs an alternative to MPE, have been on a steady decline. A year ago before taxes they were $1.07 billion, last quarter they were $1 billion, and this quarter they're $961 million. Sales are tracking on the same trajectory.

Whitman noted the tough marketplace for selling its business servers in the current market. She also expressed faith in HP's system offerings. It's just that the vendor will have to offer them with fewer employees.

"I really like our product lineup. But we need to run this company more efficiently," she said. "We're going to have to be quicker and faster to compete in this new world order."

When an analyst asked Whitman about morale in the face of job cuts, she said people at HP understand the economic climate.

"No company likes to decrease the workforce," she said. "Our employees live with it every single day. Everyone understands the turnaround we're in, everyone understands the market realities. I don't think anyone likes this." HP believes the extra job cuts will free up an additional $1 billion a year, "and some of that will be reinvested back into the business."

There's also money being spent in R&D. At first during the Q&A session, CFO Lesjak said that "the increase of R&D year over year is very broad-based" across many product lines. Whitman immediately added that there have been increases for R&D in HP's server lines. The servers which HP is able to sell are "mission-critical x86" systems. That represents another report that the Integrity-based lineup continues to decline. BCS overall represents just 3 percent of all Enterprise Systems sales in this quarter.

HP's internal enterprise systems -- which were once managed by HP 3000s -- are in the process of a new round of replacements. SAP replaced internal systems at HP last decade. Whitman said the churn that started in 2001 with the Compaq purchase has put the vendor through significant changes, ones that HP must manage better.

"This company has been through a lot," Whitman said during analyst questioning. "The acquisition of Compaq. The acquisition of EDS. Eleven to 20 software acquisitions. It's a lot of change. We're putting in new ERP programs and technology to automate processes that frankly, have not been done in awhile."

Posted by Ron Seybold at 07:16 PM in Migration, News Outta HP | Permalink | Comments (0)

May 21, 2014

Ops check: does a replacement application do the same caliber of power fail recovery?

Migrating away from an HP 3000 application means leaving behind some things you can replace. One example is robust scheduling and job management. You can get that under Windows, if your target application will run on that Microsoft OS. It's extra, but worth it, especially if the app you need to replace generates a great many jobs. We've heard of one that used 14,000.

A migrating site will also want to be sure about error recovery in case of a system failure. Looking at what's a given in the 3000 world is the bottom-rung bar to check on a new platform. This might not be an issue that app users care about -- until a brown-out takes down a server that doesn't have robust recovery. One HP 3000 system manager summed up the operations he needs to replace on HP's 3000 application server.

We're looking at recovery aspects if power is lost, or those that kick in whenever MPE crashes. On the 3000's critical applications, we can use DBCONTROL or FCONTROL to complete the I/O.  Another option would be to store down the datasets before the batch process takes place.

A couple of decades ago, this was a feature where the 3000's IMAGE database stood out in a startling, visual way. A database shootout in New Jersey pitted IMAGE and MPE against Unix and Oracle, or second-level entries such as Sybase or Informix. A tug on the power plug of the 3000 while it was processing data left the server in a no-data-loss state, when it could be rebooted. Not so much, way back then, for what we'd call today's replacement system databases.

Eloquence, the IMAGE workalike database, emulates this rock-solid recovery for any Windows or Linux applications that use that Marxmeier product. Whatever the replacement application will be for a mission-critical 3000 system, it needs to rely on the same caliber of crash or powerfail recovery. This isn't an obvious question to ask during the feature comparison phase of migration planning. But such recovery is not automatic on every platform that will take over for MPE.

Sometimes there are powerfail tools available for replacement application hosts -- system-wide tools to aid in database recovery -- but ones that managers don't employ because of their cost to performance. For example, a barrier is a system feature common in the Linux world. A barrier protects the state of the filesystem journal. Here's a bit of discussion from the Stack Exchange forum, where plenty of Linux admins seek solutions.

It is possible that the write to the journal on the disk is delayed, because it's more efficient from the head position currently to write in a different order to the one the operating system requested as the actual order -- meaning blocks can be committed before the journal is.

The way to resolve this is to make the operating system explicitly wait for the journal to have been committed before committing any more writes. This is known as a barrier. Most filesystems do not use this by default and would explicitly need enabling with a mount option.

mount -o barrier=1 /dev/sda /mntpnt

The big downside to barriers is that they have a tendency to slow IO down, sometimes dramatically (around 30 percent), which is why they aren't enabled by default.
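To make that mount option concrete: on an older ext3 volume, where barriers were off by default, the setting can be made persistent in /etc/fstab rather than typed at each mount. This is just a sketch -- the device name and mount point below are hypothetical, not from the forum discussion.

```
# /etc/fstab entry -- hypothetical ext3 data volume with write barriers on
# <device>   <mount point>  <type>  <options>           <dump> <pass>
/dev/sda1    /data          ext3    defaults,barrier=1  0      2
```

Newer filesystems such as ext4 ship with barriers enabled by default, so the trade-off there runs the other way: managers chasing throughput sometimes mount with barrier=0 and accept the crash-recovery risk -- the sort of risk a 3000 manager never had to budget for.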

In the 3000 world, logging has been used as a similar recovery feature, focused on recovering IMAGE data. A long-running debate included concerns about whether logging penalized application performance. We've run a logging article written by Robelle's Bob Green that's worth a look.

Peering under the covers of any replacement application, to see the means to recover its data, is a best practice. Even if a manager doesn't have deep knowledge of the target environment, this peering is the kind of thing the typical experienced 3000 manager will embrace without question. Then they'll ask the powerfail recovery question.

Posted by Ron Seybold at 07:53 PM in Migration | Permalink | Comments (0)

May 16, 2014

Unicom returns PowerHouse expert to fold

Bob Deskin fielded questions about the PowerHouse products for more than a decade on the PowerHouse-L mailing list. When an answer from the vendor -- for many of those years, Cognos -- was required, Deskin did the answering. He was not able to speak for IBM in a formal capacity about the software. But he defined the scope of product performance, as well as soothed the concerns of a customer base when it felt abandoned.

After retiring from IBM's PowerHouse ADT unit last year, Deskin's back in the field where he's best known. The new owners of the PowerHouse tools, Unicom Global, added him to the team in a consultant's capacity.

As part of UNICOM's commitment to the PowerHouse suite of products, I have been brought on board as a consultant to work with the UNICOM PowerHouse team to enhance the support and product direction efforts.

For anyone not familiar with my background, I started in this business in the early '70s as a programmer and systems analyst. I joined Cognos (then Quasar) in 1981 after evaluating QUIZ and beta testing QUICK for a large multinational. Over the years, I’ve been in customer support, technical liaison, quality control, education, documentation, and various advisory roles. For the past 12 years, until my retirement from IBM in 2013, I was the Product Manager for PowerHouse, PowerHouse Web, and Axiant.

New owners of classic MPE tools, like PowerHouse, are not always so savvy about keeping the tribal knowledge in place. In Deskin's case, he's been retained to know about the OpenVMS product owners' issues, as well as those from the 3000 community. There's another HP platform that's still available for PowerHouse customers, too.

The story behind the story of new PowerHouse ownership is a plan for enhancing the products, even as a site's sideways migration to a supported HP platform might be underway. In order to help retain a customer in the proprietary PowerHouse community, the new owners know they need to improve the products' capabilities.

In one example of such a sideways migration -- where PowerHouse remains the constant platform element while everything else changes -- Paul Stennett of house builder Wainhomes in the UK reported that an MPE-to-UX move began with a change in database. The target was not Oracle, either.

We actually migrated to Eloquence rather than Oracle, which meant the data conversion was pretty simple -- Eloquence emulates IMAGE on the HP 3000. The only issue was KSAM files which we couldn't migrate. However, Eloquence has a better solution, and allows third party indexes, and therefore generic key retrieval of data. For instance, surname = "Smi@" etc... Testing was around 3 months.

Our HP 3000 applications go back over 20 years and have been continually developed over time. I have had experience with other [replacement] packages for the housebuilding industry, in particular COINS. However, with the mission to keep the re-training and disruption to the business to a minimum, the migration option was the best route for us.

I completely agree that you can gain major benefits from replacing a system [completely instead of migrating it.] I guess it depends on what type of business you are in. If you are an online retailer, for example, then technology can save costs and improve efficiency. As they say, it's horses for courses.

Posted by Ron Seybold at 03:39 PM in Migration | Permalink | Comments (0)

May 09, 2014

HP bets "Hey! You'll get onto your cloud!"

Hewlett-Packard announced that it will spend $1 billion over the next two years to help its customers build private cloud computing. Private clouds will need security, and they'll begin to behave more like the HP 3000 world everybody knows: management of internal resources. The difference will reside in a standard open source stack, OpenStack. It's not aimed at midsize or smaller firms. But aiding OpenStack might help open some minds about why clouds can be simple to build, as well as feature-rich.

This is an idea that still needs to lift off. Among the 3000 managers we interview, there are few who've been in computing since the 1980s who are inclined to think of clouds much differently than time-sharing, or apps over the Internet. Clouds are still things in Rolling Stones or Judy Collins choruses.

The 3000 community that's moving still isn't embracing any ideal of running clouds in a serious way. One vendor who's teeing up cloud computing as the next big hit is Kenandy. That's the company built around the IT experience and expertise of the creators of MANMAN. They've called their software social ERP, in part because it embraces the information exchange that happens on that social network level.

But from the viewpoint of Terry Floyd, founder of the manufacturing services firm The Support Group, Kenandy's still waiting for somebody from the 3000 world to hit that teed-up ball. Kenandy was on hand at the Computer History Museum for the last HP3000 Reunion. The companies at that gathering now look like the wrong size to hit the Kenandy cloud ERP ball.

"Since we saw them at the Computer History Museum meeting, Kenandy seems to have re-focused on large Fortune 1000 companies," Floyd said. There are scores of HP 3000 sites running MANMAN. But very few are measuring up as F1000 enterprises. Kenandy looks like it believes the typical 3000 site is not big enough to benefit from riding a cloud. There are many migrated companies who'd fit into that Fortune 1000 field. But then, they've already chosen their replacements.

The Kenandy solution relies on the force.com private cloud, operated by Salesforce.com. Smaller companies, the size of 3000 customers, use Salesforce. The vendor's got a force.com cloud for apps beyond CRM. But the magnitude of the commitment to Kenandy seems larger than the size of the remaining 3000 sites which manufacture using Infor's MANMAN app.

"Most MANMAN sites don't meet their size requirements," Floyd said. "I have a site that wants to consider Kenandy next year, but so far Kenandy is not very interested. We'll see if they are serious when the project kicks off next year, because we think Kenandy is a good fit for them."

The longer that small companies wait out such cloud developments as HP's $500 million per year, the better the value becomes for getting onto their cloud, migrating datacenter ops outside company walls. HP is investing to convince companies to build their own private clouds, instead of renting software from firms like Kenandy and Salesforce. Floyd and his company have said there's good value in switching to cloud-based ERP for some customers. Customization of the app becomes the most expensive issue.

This is the central decision in migrating to cloud-based ERP from a 3000. It's more important than how much the hardware to support the cloud will cost. HP's teaming up with Foxconn -- insert snarky joke here -- to drive down the expense of putting up cloud-optimized servers. But that venture is aimed at telecommunications companies and Internet service providers. When Comcast and Verizon, or Orange in Europe, are your targets, you know there's a size requirement.

You might think of the requirements for this sort of cloud -- something a customer would need to devote intense administrative resources to -- as that sign at the front of the best amusement park rides. "You must be Fortune 1000 tall to ride this ride," it might say. Maybe, over the period of HP's new cloud push, the number on the sign will get smaller.

Posted by Ron Seybold at 10:04 AM in Migration, News Outta HP | Permalink | Comments (0)

May 06, 2014

PowerHouse users study migration flights

A sometimes surprising group of companies continue to use software from the PowerHouse fourth generation language lineup on their HP 3000s. At Boeing, for example -- a manufacturer whose Boeing 737 assembly line pushes out one aircraft's airframe every day -- the products are essential to one mission-critical application. Upgrade fees for PowerHouse became a crucial element in deciding whether to homestead on the CHARON emulator last year.

PowerHouse products have a stickiness to them that can surprise, here in 2014, because of the age of the underlying concept. But they're ingrained in IT operations to a degree that can make them linchpins. In a LinkedIn Group devoted to managing PowerHouse products, the topic of making a new era for 4GL has been discussed for the past week. Paul Stennett, a group systems manager with UK-based housebuilder Wainhomes, said that his company's transition to an HP-UX version of PowerHouse has worked more seamlessly -- so far -- than the prospect of replacing the PowerHouse MPE application with a package. 

"The main driver was not to disrupt the business, which at the end of the day pays for IT," Stennett said. "It did take around 18 months to complete, but was implemented over a weekend. So the users logged off on Friday on the old system, and logged onto the new system on Monday. From an application point of view all the screens, reports and processes were the same."

This is the lift-and-shift migration strategy, taken to a new level because the proprietary language driving these applications has not changed. Business processes -- which will get reviewed in any thorough migration to see if they're still needed -- have the highest level of pain to change. Sometimes companies conclude that the enhancements derived from a replacement package are more than offset by required changes to business processes.

Enter the version of PowerHouse that runs on HP's supported Unix environment. It was a realistic choice for Stennett's company because the 4GL has a new owner this year in Unicom.

"With the acquisition of PowerHouse by UNICOM, and their commitment to developing the product and therefore continued support," Stennett posted on the LinkedIn group, "is it better to migrate PowerHouse onto a supported platform (from HP 3000 to HP-UX) rather than go for a complete re-write in Java, with all of its risks? To the user it was seamless, other than they have to use a different command to logon. The impact to the business's day-to-day running was zero."

The discussion began with requests on information for porting PowerHouse apps to Java. The 4GL was created with a different goal in mind than Java's ideal of "write once, run anywhere." Productivity was the lure for users who moved to 4GLs such as PowerHouse, Speedware, and variants such as Protos and Transact. All but Protos now have support for other platforms.

And HP's venerated Transact -- which once ran the US Navy's Mark 85 torpedo facility at Keyport, Wash. -- can be replaced by ScreenJet's TransAction and then implemented on MPE. ScreenJet, which partnered with Transact's creator David Dummer to build this replacement, added that an MPE/iX TransAction implementation would work as a testing step toward an ultimate migration to other environments.

Bob Deskin, a former PowerHouse support manager who retired from IBM last year, sketched out why the fourth generation language is preserving so many in-house applications -- sometimes on platforms where the vendor has moved on, or set an exit date as with HP's OpenVMS.

Application systems, like many things, have inertia. They tend to obey Newton's first law. A body at rest tends to remain at rest unless acted upon by an outside force. The need for change was that outside force. When an application requires major change, the decision must be made to do extensive modifications to the existing system, to write a new system, or to buy a package. During the '90s, the answer was often to buy a package. 

But packages are expensive so companies are looking at leveraging what they have. If they feel that the current 4GL application can't give them what they need, but the internal logic is still viable, they look for migration or conversion tools. Rather than completely re-write, it may be easier to convert and add on now that Java and C++ programmers are readily available.

Deskin added, as part of his opinion on what happened to 4GLs, that they were never ubiquitous -- not even in an environment like the HP 3000's, where development in mainstream languages during the 1970s might take 10 times longer.

There weren't enough programmers to meet the demand. Along came 4GLs and their supposed promise of development without programmers. We know that didn't work out. But the idea of generating systems in 10 percent of the time appealed to many. If you needed 10 percent of the time, maybe you only needed 10 percent of the programmers. 

The 4GL heyday was the '80s. With computers being relatively inexpensive and demand for systems growing, something had to fill the void. Some programmers caught the 4GL bug, but most didn't. There was still more demand than supply, so studying mainstream languages almost guaranteed a job. 

Now even mainstream languages like COBOL and FORTRAN are out of vogue. COBOL was even declared extinct by one misinformed business podcast on NPR. The alternatives are, as one LinkedIn group member pointed out, often Microsoft's .NET or Oracle's Java. (Java wasn't considered a vendor's product until Oracle acquired it as part of its Sun pickup. These days Java is rarely discussed without mention of its owner, perhaps because the Oracle database is so ubiquitous in typical migration target environments.)

Migration away from a 4GL like PowerHouse -- to a complete revision with new front end, back end databases, reporting and middleware -- can be costly by one LinkedIn member's account. Krikor Gellekian, whose name surfaces frequently in the PowerHouse community, added that a company's competitive edge is the reward for the lengthy wade through the surf of 4GL departures.

"It is not simple, it takes time and is expensive, and the client should know that in advance," Gellekian wrote. "However, I always try to persuade my clients that IT modernization is not a single project; it is a process. And adopting it means staying competitive in their business."

Deskin approached the idea that 4GLs might be a concept as extinct as that podcast's summary of COBOL. 

Does this mean that the idea of a 4GL is dead? Absolutely not. The concept of specifying what you want to do rather than how to do it is still a modern concept. In effect, object-oriented languages [like Java] are attempting to do the same thing -- except they are trying to be all things to all people and work at a very low level. However, it takes more than a language these days to be successful. It also requires a modern interface. Here's hoping.

Posted by Ron Seybold at 11:32 AM in Migration, User Reports | Permalink | Comments (0)

May 02, 2014

Timing makes a difference to MPE futures

Coming to market with virtualized 3000s has been a lengthy road for Stromasys. How long is a matter of perspective. The view of an emulated 3000's lifespan can run from using it for just a few years to the foreseeable future. I heard about both ends of the emulator's continuum over the last few weeks.

In the Kern County Schools in Bakersfield, Calif., a 3000 manager said the timetable for his vendor's app migration is going to sideline any steps into using CHARON. Robert Canales, Business Information Systems Analyst in the Division of Administration and Finance, was an eager prospect for the software last May, when the company's Training Day unfolded out in the Bay Area. But the pace of migration demonstrated by his MPE software vendor, who's moving customers to Linux, showed his team that 3000 computing was not going to outlast the vendor's expected migration timetable.

Our main software vendor has since migrated several of their California K-12 education customers off of the 3000. We believe that our organization will be able to successfully migrate over to their Linux-based platform within the next 18-24 months. So from that perspective, we simply couldn't justify the financial investment, or the time for our very limited number of personnel, to focus on utilizing the CHARON solution for backup, testing or historical purposes.

The analysis at the district draws the conclusion that two more school years using available HP 3000 iron -- at most, while awaiting and then undertaking a migration -- will be a better use of manpower and budget than preserving MPE software. This is understandable when a commercial application drives IT. You follow your vendor's plan, or plan to replace something. Replacement could be of just the physical hardware, with an emulator, because the vendor's leaving your MPE app behind -- or of everything: your OS environment as well as applications. Getting two years of emulator use, or maybe a bit more, isn't enough to fit the Kern County Schools' resources and budget.

On the other side of that timetable, we can point out a comment from the recent CAMUS user group conference call. It suggests people will want to do more than mimic their 3000 power. They'll want to trade up for a longer-term installation.

An MB Foster analyst noted that as hardware moves upward, from one level of emulation to a more powerful option, the changes might trigger application upgrading. That's a long schedule of use, if you consider that horsepower increases usually happened on 3- or 5-year timetables back when MPE ran only on 3000s. That mirrors a schedule that emulator vendors have reported as commonplace: several decades of lifespan.

Arnie Kwong clarified what he said on that call: that moving upward in the CHARON license lineup might be reason for a vendor -- like some in the 3000 world -- to ask for upgrading fees.

My understanding on CHARON is 1) If you change processor class (for example, from an 'A' license to an 'N' license) then you are likely to get 'upticks' from your third party vendors.  

2) If you change to 'more processors' (for example, from one 'A' license to more than one 'A' license so that you can run separate reporting machines or year-end processing or the like) then you have more licenses as you are running more processors.

This isn't a change for anything that has been in place -- it's just a clarification of ours, that we haven't heard of anyone who isn't doing this the same way as it's always been done. Stromasys is vending the 'hardware' and the software suppliers are providing the 'code' as things have always been.

We don't know how likely such upticks will be in the community. 3000 shops use an array of third party vendors. Some vendors do charge for processor uplifts. Others do not, and the number of vendors who will do this has not been confirmed by the installed CHARON base. We heard a report that a PowerHouse user was facing a six-figure fee to emulate their 3000. We heard that report before PowerHouse ownership changed at the end of 2013.

But if you think about that kind of scenario for a bit, you come up with a company that's extending its MPE power while it emulates. That's an investment to cover more than a few years. Emulating customers, just like the vendors who are offering this virtualization, are often into their applications for a very long ride. Before Stromasys emerged as the survivor in the emulation derby, there was Strobe Data. Willard West at that vendor talked about a timetable of multiple decades for its HP 1000 and Digital emulation customer base.

"Our major competition has been the used hardware market," West said a decade ago. "We’ve out-survived that." At the time that we talked, Strobe was emulating Data General servers that were obsoleted 15 years earlier.

Emulation vendors know that time can be on their side if an application is customized and critical to a company. When time is on your side, the costs to revitalize an in-house application can be applied over enough years. Emulation mimics more than hardware platforms. It preserves IT business rules for returns on investment which have often been on MPE's side. MPE applications have outlasted their hardware and triggered upgrades. The clock on the ROI determines IT investments, just like it always has.

Posted by Ron Seybold at 08:14 PM in Homesteading, Migration | Permalink | Comments (0)

April 30, 2014

Kansas court rings down gavel on its 3000

The District Court in the capital of Kansas is switching off its HP 3000 this week, a process that's going to pull the district clerk's office completely out of service over the first two days of May. The Topeka court's IT department said the alternative to replacing the 3000 software would be going back to paper and pen. The project will knock all court computing offline -- both old and new systems -- for one work week.

"Anyone who needs to file or pick up documents should do so between 8 AM and noon on Thursday and Friday," the court advised Topeka-area citizens on its website. The Topeka courts have been using HP 3000s since the 1980s. Four years ago the court commissioners voted to spend $207,800 for FullCourt software to replace the 3000 application. The court has been paying for the software -- which will be loaded with data May 5-9 -- over three years at no interest. All court data is being extracted and replaced during the workweek of May, when only jury trials, emergency hearings and essential dockets will be heard.

The court is predicting a go-live date of May 12. The HP 3000 will be shut off Friday, May 2, at 5 PM, according to a schedule "that may fluctuate."

The HP 3000 has "outlived its life expectancy, making it essential that we either move on to another system or we go back to paper and pen," according to a statement on the court's website. Converting data is the crucial part of the migration.

No other district court in the state of Kansas has attempted such a challenge.  This data conversion is one of the most important attributes of this project and is carefully being implemented by continuously and repeatedly checking thousands of data elements to ensure that all data converted is “clean” data which is essential to all users. When we finally “go live,” we would sincerely appreciate your careful review of data as you use the system.
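The field-by-field checking the court describes can be sketched in a few lines. This is a hypothetical illustration, not the court's actual tooling; the record layout and the `case_id` key name are invented.

```python
# Hypothetical sketch of a conversion check: pair each extracted record
# with its converted counterpart and compare field by field.

def verify_conversion(source_rows, converted_rows, key="case_id"):
    """Return a list of (key, field, old_value, new_value) mismatches."""
    converted = {row[key]: row for row in converted_rows}
    mismatches = []
    for row in source_rows:
        new = converted.get(row[key])
        if new is None:
            # Record never made it into the new system at all.
            mismatches.append((row[key], "<missing record>", None, None))
            continue
        for field, old_value in row.items():
            if new.get(field) != old_value:
                mismatches.append((row[key], field, old_value, new.get(field)))
    return mismatches
```

Run repeatedly as conversion passes are refined; an empty result for a sample means that sample is "clean" in the sense the court's announcement uses.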

32-year-old Justice Systems of Albuquerque sells FullCourt. The latest marketing materials for the software company's Professional Services include a testimonial from Chief Information Technical Officer Kelly O'Brien of the Kansas Judicial Branch. The court's announcements did not break out the cost of software versus the cost of professional migration services.

Chief Judge Evelyn Wilson said in a statement, “We know this system affects the entire community. There are bound to be some bumps in the road. While the court has tried to take into consideration the different issues that may arise, there is no way we can address all of them. Initially, we anticipate that productivity may be slower as people get accustomed to the new system. We’ll do our best to accommodate you, and we ask you to do the same."

FullCourt is an enterprise grade application that's broad enough in its scope that the Kansas court had to partition the project. According to the Topeka Capital-Journal, "to manage the conversion to FullCourt, the court broke down the project into several components."

The replacement software includes features such as e-filing of documents. Wyoming state courts have also implemented FullCourt, although an HP 3000 wasn't shut down there.

Posted by Ron Seybold at 01:09 PM in Migration | Permalink | Comments (0)

April 24, 2014

RUG talk notes emulator licensing, recovery

Second of two parts

When CAMUS held its recent user group conference call, MB Foster's Arnie Kwong had advice to offer the MANMAN and HP 3000 users about the CHARON emulator for PA-RISC systems like the 3000. A more complex environment than HP's decade-old 3000 hardware is in place to enable things like powerfail recovery while protecting data. And readying licenses for a move to the Stromasys CHARON 3000 emulator means you've got to talk to somebody, he said.

"Everybody is pretty helpful in trying to keep customers in a licensing move," Kwong said. "If anyone tells you that you don't even have to ask, and that you're just running a workalike, that would be a mistake. You have to have an open and fair conversation. Not doing so, and then having a software problem, could be a fairly awkward support conversation. You can't make the assumption you'll be able to make this move without any cost." 

If you create secondary processing capacity through CHARON, you'll have to execute new licenses for that capacity. But most of the third party vendors are going to be pretty reasonable and rational. We've all known each other for decades. People who do lots of IT procurement understand straightforward rules for handling that. 

Kwong said that CHARON prospects should make a catalog of their MPE software applications and utilities, and then talk to vendors about tech compatibility, too.

In manufacturing IT in particular, costs have been declining recently. "Short of somebody paying $10-15 million to re-engineer around SAP, or Infor's other products, most of the incremental spending in the MANMAN and 3000 environments has been to extend life. People do a lot of stuff now on Excel spreadsheets and SQL Server databases around the ERP system. We look to see if the 3000 is the essential piece, and often it is. We look at what other things are affected if we change that 3000 piece."

Kwong said that MB Foster has not done MANMAN-specific testing against its in-lab CHARON installations yet.

Data integrity questions came up from Mike Hornsby, who wanted to know about using transactional testing to evaluate possible data loss. Of the HP 3000's powerfail environment, Kwong said, "it's been one of the key strengths of the 3000 environment in particular." The tests at MB Foster haven't revealed any data loss. Kwong didn't dismiss the possibility, however.

"This is theory, but I'll say this: One of the things you have at risk during the crash recovery process is either in the CHARON emulator, or the underlying infrastructure in the cloud environment that you're running it in." In this meaning of the word cloud, Kwong was referring to the VMware hosting that's common to the 3000 CHARON experience.

"In those instances you could have failures that were never in their wildest imaginations considered by the folks who built this software-hardware combination. I have not seen anything personally in our testing where things have been horrendously corrupted, rolled over and died. But inherently in the environments they're running, there are assumptions of database logfiles, and particularly in certain key files and so forth, where your warmstart processing can be at risk." 

When such failures occur — and they can happen in HP's provided hardware — "You have the same predictability in an emulated environment as you do in the 3000 hardware environment. I don't think I'd lose a lot of sleep over it." However, networking and storage architecture issues are different for the emulated MPE hardware than for HP's native hardware, he added.

But application expenses take the forefront over hardware and platform issues at the sites where MB Foster has discussed transitions of any kind. "When you take the context where the 3000 is running from a business standpoint, yes, you have licensing issues for maintenance and so forth," Kwong said. "But as a total percentage of the cost to the enterprise, the application's value and the application's cost to change anything, usually begins to predominate. 

"It's not the fact that you have no-cost terminals and low-cost hardware anymore, it's what that application's power brings you. We've seen that newer managers who come in from outside at these sites with stable HP applications have vastly different expectations for what the application's going to deliver — also, different demands for the applications portfolio — than people who've been there for decades running the same architecture. The platform discussions usually aren't major economic drivers. 

"Running a 3000 application in another environment, such as Windows or Linux, is never zero, although it's cheaper to do that in a Stromasys environment. We need to carefully consider the hardware scalability performance availability, and certain kinds of communication and networking interfaces that aren't qualified for use in the Stromasys environment yet."

"We look at how to approach the problem of migration and its processes. In talking to our customers and concerns they have at small one-person shops with boxes running for 20 years, a move will take a year or two years to do. People that we talk to say they're gotten by for a long time without having to pay the kind of money needed to migrate to SAP or Oracle, or FMS or JD Edwards. Those alternatives are on the list of things they look at.

"Few people are talking about development stages for the kinds of complex environments the folks on this call represent. The days of large scale development have pretty much gone by the board. Everybody's talking about what kind of capacity they can buy, and what kind of features can they buy, rather than concentrate on what kinds of things they could move to the new environment.

"For them, the Stromasys approach says they'll leave their software base the same and go to new hardware, essentially. There are a lot of business assumptions and a lot of applications assumptions that might change because you're running in that new hardware environment. Things that were always based on the 7x24 capability, running without a lot of staff expense — all of those things are now open to question and rethink. We encourage people to take a step back and look at their business planning assumptions and business models, because that's the foundation for why they bought the 3000 in the first place".

Kwong said he believes most of the users on the call could agree HP didn't do badly by them in the initial offering of high-value, investment-protected systems. Now that the system is into its second decade beyond HP's exit announcement, protecting that value deserves some fresh assessment.

Posted by Ron Seybold at 11:38 AM in Homesteading, Migration | Permalink | Comments (0)

April 21, 2014

A week-plus of bleeds, but MPE's hearty

There are not many aspects of MPE that seem to best the offerings from open source environments. For anyone who's been tracking the OpenSSL hacker-door Heartbleed, though, the news is good on 3000 vulnerability. It's better than more modern platforms, in part because it's more mature. If you're moving away from mature and into migrating to open source computing, then listen up.

Open source savant Brian Edminster of Applied Technologies told us why MPE is in better shape.

I know that it's been covered other places, but don't know if it's been explicitly stated anywhere in MPE-Land: The Heartbleed issue is due to the 'heartbeat' feature, which was added to OpenSSL after any known builds for MPE/iX.

That's a short way of saying: So far, all the versions of OpenSSL for MPE/iX are too old to be affected by the Heartbleed vulnerability. Seems that sometimes, it can be good to not be on the bleeding edge.
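The bug itself is simple to model. The TLS heartbeat extension echoes a payload back to the sender, and pre-patch OpenSSL trusted the length field in the request instead of the size of the payload actually sent. Here's a toy Python model of that over-read and its fix -- an illustrative sketch, not OpenSSL's real C code:

```python
# Toy model of Heartbleed: the handler trusts the request's claimed
# payload length, so a short payload with a large claimed length leaks
# whatever sits next to it in process memory.

def heartbeat_response(memory: bytes, payload_start: int,
                       payload: bytes, claimed_len: int,
                       bounds_check: bool = False) -> bytes:
    """Echo back `claimed_len` bytes starting at the payload's offset."""
    if bounds_check and claimed_len > len(payload):
        return b""  # the patch silently discards mismatched requests
    return memory[payload_start:payload_start + claimed_len]

# Simulated process memory: a 5-byte payload followed by unrelated secrets.
memory = b"hello" + b"SECRET-KEY-MATERIAL"
leak = heartbeat_response(memory, 0, b"hello", 24)        # pre-patch: leaks
safe = heartbeat_response(memory, 0, b"hello", 24, True)  # patched: empty
```

The real attack repeats this request to scrape up to 64KB of heap per heartbeat, which is why leaked private keys and passwords were the headline risk.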

However, the 3000 IT manager -- a person who usually has a couple of decades of computing experience -- may be in charge of the more-vulnerable web servers. Linux is used a lot for this kind of thing. Jeff Kell, whose on-the-Web servers deliver news of 3000s via the 3000-L mailing list, outlined repairs needed and advice from his 30-plus years of networking -- in MPE and all other environments.

About 10 days after the news rocked the Web, Kell -- one of the sharpest tools in the drawer of networking -- posted this April 17 summary on the challenges and which ports to watch.

Unless you've had your head in the sand, you've heard about Heartbleed. Every freaking security vendor is milking it for all it's worth. It is pretty nasty, but it's essentially "read-only" without some careful follow-up. 

Most have focused on SSL/HTTPS over 443, but other services are exposed (SMTP services on 25, 465, 587; LDAP on 636; others). You can scan and it might show up the obvious ones, but local services may have been compiled against "static" SSL libraries, and be vulnerable as well.

We've cleaned up most of ours (we think, still scanning); but that just covers the server side.

There are also client-side compromises possible.

And this stuff isn't theoretical, it's been proven third-party... 

https://www.cloudflarechallenge.com/heartbleed

Lots of folks say replace your certificates, change your passwords, etc.  I'd wait until the services you're changing are verified secure.

Most of the IDS/IPS detections of the exploits are broken in various ways. STARTTLS works by negotiating a connection, establishing keys, and bouncing to an encrypted transport. IDS/IPS can't pick up Heartbleed once the session is encrypted; they're after the easy pre-authenticated handshake. 

It's a mess for sure. But it's not safe to declare anything secure just yet.

Stay tuned, and avoid the advertising noise.

Posted by Ron Seybold at 06:45 AM in Migration, Newsmakers, User Reports, Web Resources | Permalink | Comments (0)

April 18, 2014

Denying Interruptions of Service

For the last 18 hours, the 3000 Newswire’s regular blog host TypePad has had outages. (Now that you're reading this, TypePad is back on its feet.) More than once, the web resource for the Newswire has reported it’s been under a Denial of Service attack. I’ve been weathering the interruption of our business services up there, mostly by posting a story on my sister-site, Story for Business.

We also notified the community via Twitter about the outage and alternative site. It was sort of a DR plan in action. The story reminds me of the interruption saga that an MPE customer faces this year. Especially those using the system for manufacturing.

MANMAN users as well as 3000 owners gathered over the phone on Wednesday for what the CAMUS user group calls a RUG meeting. It's really more of an AUG: Applications User Group. During the call, it was mentioned there are probably more than 100 different manufacturing packages available for business computers like the HP 3000. Few of them, however, have a design as ironclad against interruption as the venerable MANMAN software. Not much service could be denied to MANMAN users because of a Web attack, the kind that’s bumped off our TypePad host over the last day. MANMAN only employs the power of the Web if a developer adds that interface.

This is security through obscurity, a backhanded compliment that a legacy computer gets. Why be so condescending? It might be because MPE is overshadowed by computer systems that are so much newer, more nimble, open to a much larger world.

They have their disadvantages, though. The widely-known designs of Linux or Windows attract these attempts to deny their services. Taking something like a website host offline has a cost to its residents, like us on TypePad. Our sponsors had their messages denied an audience. In the case of a 3000, when it gets denied it’s much more likely to be a failure of hardware, or a fire or flood. Those crises have more rapid repairs. But that’s only true if a 3000 owner plans for the crisis. Disaster Recovery is not a skill to learn in-situ, as it were. But practicing the deployment is about as popular as filing taxes. And just as necessary.

Another kind of disruption can be one that a customer invites. There are those 100 alternatives to MANMAN out there in the market, software an MPE site might choose to use. Manufacturing software is bedeviled with complexity and nuance, a customized story a company tells itself and its partners about making an object.

There’s a very good chance that the company using MPE now, in the obscurity of 2014, has put a lot of nuance into its storytelling about inventory, receivables, bill of materials and more. Translating that storytelling into new software, one of those 100, is serious work. Like any other ardent challenge, this translation — okay, you might call it a migration — has a chance to fail. That’s a planned failure, though, one which usually won’t cost a company its audience like a website service denial.

The term for making a sweeping translation happen lightning-quick is The Magic Weekend. 48 hours of planned offline transformation, and then you’re back in front of the audience. No journey to the next chapter of the MPE user’s story — whether it’s a jump to an emulator that mimics Hewlett-Packard computers, or the leap to a whole new environment — can be accomplished in a Magic Weekend. Business computers don’t respond to magic incantations.

The latest conference call among MANMAN users invoked that warning about magic. Turning the page on the story where Hewlett-Packard’s hardware was the stage for the software of MANMAN and MPE — that’s an episode with a lot longer running time than any weekend. Even if all you’re doing is changing the stage, you will want to test everything. You don’t want to be in middle of serving hundreds and hundreds of audience members at a time, only to have the lights grow too dim to see the action on the stage.

Posted by Ron Seybold at 04:45 PM in Homesteading, Migration | Permalink | Comments (0)

April 11, 2014

Again, the 3000's owners own a longer view

Heartbleed needs a repair immediately. Windows XP will need some attention over the next three years, as the client environment most favored by migrating 3000 sites starts to age and get more expensive. XP is already "off support," for whatever that means. But there's a window of perhaps three years where change is not as critical as a repair to Heartbleed's OpenSSL hacker window.

Then there's MPE. The OS already has gone through more than a decade of no new sales. And this environment that's still propping up some business functions has now had more than five years of no meaningful HP lab support. In spite of those conditions, the 3000's OS is still in use, and by one manager's accounting, even picking up a user in his organization.

"Ending?" Tim O'Neill asks with a rhetorical tone. "Well, maybe MPE/iX will not be around 20 years from now, but today one of our people  contacted me and said they need to use the application that runs on our HP 3000. Isn't that great? Usage is increasing!"

Pondering if MPE/iX will be around in 20 years, or even 13 when the end of '27 date bug surfaces, just shows the longer view the 3000 owner still owns. Longer than anything the industry's vendors have left for newer, or more promising, products. My favorite avuncular expert Vladimir Volokh called in to leave a message about his long view of how to keep MPE working. Hint: This septuagenarian plans to be part of the solution.

Vladimir is bemused at the short-term plans that he runs across among his clientele. No worries from them about MPE's useful lifespan. "I'll be retired by then," say these managers who've done the good work of IT support since the 1980s. This retirement-as-futures plan is more common than people would like to admit.

Volokh took note of our Fixing 2028 update a while back. "It's interesting that you say, 'We've still got more than 13 years left.' Almost every user who I've told about it has said, 'Oh, by then, I'll retire.' My answer is, 'Not me. I will be just 90 years old. You call me, and we'll work out something.'"

I invite you to listen to his voice, delivering his intention to keep helping and pushing MPE into the future -- a longer one than people might imagine for something like XP.

Why do some 3000 experts say a longer view seems like a good bet? Yes, one obvious reason is that they don't want to say goodbye to the meaningful nature of their expertise, or the community they know. I feel that same way, even though I only tell the stories of this community.

But there's another reason for the long view. MPE has already served in the world for 40 years. HP thought this so unlikely that they didn't even program for a Y2K event. Then the vendor assumed more than 80 percent of sites would be off in four years' time after HP's "we're quitting" notice. Then it figured an extra two years would do the job.

Wrong on all three counts. Change must prove its value, and right soon, if you intend to begin changing soon. There's another story to tell about that reality, one from the emulator's market, which I'll tell very soon. In the meantime, change your passwords on any website that:

1. Is vulnerable to Heartbleed, or has been (check with a free tool; list below), and

2. Has now been repaired.

Here's a list of websites which were vulnerable, from GitHub. Yahoo is among them, which means that AT&T broadband customers have some password-changing to do. That's very-short-view change.

Posted by Ron Seybold at 01:46 PM in Homesteading, Migration | Permalink | Comments (0)

April 10, 2014

Heartbleed reminds us all of MPE/iX's age

The most wide-open hole in website security, Heartbleed, might have bypassed the web security tools of the HP 3000. Hewlett-Packard released WebWise/iX in the early 2000s. The software included SSL security that was up to date back in that year. But Gavin Scott of the MPE and Linux K-12 app vendor QSS reminds us that the "security through antiquity" protection of MPE/iX is a blessing that's not in a disguise.

WebWise was just too late to a web game already being dominated by Windows at the time -- and even more so, by Linux. However, the software that's in near total obscurity doesn't use the breached OpenSSL 1.0.1 or 1.0.2 beta versions. Nevertheless, older software running a 3000 -- or even an emulated 3000 using CHARON -- presents its own challenges, once you start following the emergency repairs of Heartbleed, Scott says.

It does point out the risks of using a system like MPE/iX, whose software is mostly frozen in time and not receiving security fixes, as a front-line Internet (or even internal) server. Much better to front-end your 3000 information with a more current tier of web servers and the like. And that's actually what most people do anyway, I think.

Indeed, hardly any 3000s are used for external web services. And with the ready availability of low-cost Linux hosts, any intranets at 3000 sites are likely to be handled by that open-sourced OS. The list of compromised distributions is long, according to James Byrne of Harte & Lynne, who announced the news of Heartbleed first to the 3000 newsgroup. 

The operating systems now in use which are at risk, until each web administrator can apply the security patch, include

Debian Wheezy
Ubuntu 12.04.4 LTS
CentOS 6.5
Fedora 18
OpenBSD 5.3
FreeBSD 10.0
NetBSD 5.0.2
OpenSUSE 12.2

The PA-RISC architecture of the HP 3000, emulated on CHARON HPA/3000, could also provide a 3000 manager with protection even if somehow an MPE/iX web server had been customized to use OpenSSL 1.0.1, Scott says.

I'm pretty certain that the vulnerable versions of OpenSSL have never been available on MPE/iX. However, it is possible that the much older OpenSSL versions which were ported for MPE/iX may have other SSL vulnerabilities. I haven't looked into it. Secure Apache or another web server dependent on OpenSSL would be the only likely place such a vulnerability could be exposed.

There's also a chance that MPE/iX, even with a vulnerable web server, might have different behavior -- as its PA-RISC architecture has the stack growing in the opposite direction from x86. As such, PA-RISC may do more effective hardware bounds checking in some cases. This checking could mitigate the issues or require MPE/iX-specific knowledge and effort on the part of an attacker in order to exploit vulnerabilities. All the out-of-the-box exploit tools may actually be very dependent on the architecture of the underlying target system.

Security through such obscurity has been a classic defense for the 3000 against the outside world of the web. But as Scott notes, it's a reminder of how old the 3000's web and network tools are -- simply because there's been little to nothing in the way of an update for things like WebWise Apache Server.

But there's still plenty to worry about, even if a migrated site has moved all of its operations away from the 3000. At the website The Register, a report from a white-hat hacker throws the scope of Heartbleed much wider than just web servers. It's hair-raising, because just about any client-side software -- yeah, that browser on any phone, or on any PC or Mac -- can have sensitive data swiped, too.

In a presentation given yesterday, Jake Williams – aka MalwareJake – noted that vulnerable OpenSSL implementations on the client side can be attacked using malicious servers to extract passwords and cryptographic keys.

Williams said the data-leaking bug “is much scarier” than the "goto fail" flaw in Apple's crypto software, and in his opinion it was known to black hats before its public discovery and disclosure.

Posted by Ron Seybold at 11:18 AM in Migration, Newsmakers | Permalink | Comments (0)

April 09, 2014

How SSL's bug is causing security to bleed

Computing's Secure Sockets Layer (SSL) forms part of the bedrock of information security. Companies have built products around SSL, wired its protocols into operating systems, and applied its encryption to data transport services. Banks, credit card providers, even governments rely on its security. In the oldest days of browser use, SSL displayed that little lock in the bottom corner that assured you a site was secure -- so type away on those passwords, IDs, and sensitive data.

In a matter of days, all of the security legacy of the past two years has virtually evaporated. OpenSSL, the most widely deployed implementation of SSL, has developed a large wound -- big enough to let anyone who can exploit the Heartbeat extension of the standard read secured data. A Finnish security firm has dubbed the exploit Heartbleed.

OpenSSL has made a slow and as-yet incomplete journey to the HP 3000's MPE/iX. Only an ardent handful of users have made efforts to bring the full package to the 3000's environment. In most cases, when OpenSSL has been needed for a solution involving a 3000, Linux servers supply the required security. Oops. Now Linux implementations of OpenSSL have been exposed. Linux is driving about half of the world's websites, by some tallies, since the Linux version of Apache is often in control.

One of the 3000 community's better-known voices about mixing Linux with MPE posted a note in the 3000 newsgroup over the past 48 hours to alert Linux-using managers. James Byrne of Harte & Lyne Ltd. explained the scope of a security breach that will require a massive tourniquet. To preface his report: the Transport Layer Security (TLS) and SSL layers in the TCP/IP stack encrypt the data of network connections. They have done this even for MPE/iX, but in older, safe versions. Byrne summed up the current threat.

There is an exploit in the wild that permits anyone with TLS network access to any system running the affected version of OpenSSL to systematically read every byte in memory. Among other nastiness, this means that the private keys used for Public Key Infrastructure on those systems are exposed and compromised, as they must be loaded into memory in order to perform their function.

It's something of a groundbreaker, this hack. These exploits are not logged, so there will be no evidence of compromises. It’s possible to trick almost any system running any version of OpenSSL released over the past two years into revealing chunks of data sitting in its system memory.

The official security report on the bug, from OpenSSL.org, does its best to make it seem like there's a ready solution to the problem. No need to panic, right?

A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.

Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected, including 1.0.1f and 1.0.2-beta1.

Thanks to Neel Mehta of Google Security for discovering this bug and to Adam Langley and Bodo Moeller for preparing the fix.

Affected users should upgrade to OpenSSL 1.0.1g. Users unable to immediately upgrade can alternatively recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.

1.0.2 will be fixed in 1.0.2-beta2.
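The "missing bounds check" the advisory describes can be sketched in a few lines. What follows is an illustrative Python model, not OpenSSL's C code, and the function names are invented for the sketch: the vulnerable handler trusts whatever payload length the peer claims and reads that far into adjacent memory, while the patched behavior silently discards any request whose claimed length exceeds the payload actually received.

```python
def heartbeat_response_vulnerable(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    # Trusts claimed_len: echoes back that many bytes starting at the
    # payload's position in memory, overreading into neighboring data.
    start = memory.find(payload)
    return memory[start:start + claimed_len]

def heartbeat_response_fixed(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    # The 1.0.1g behavior: discard requests whose claimed length
    # exceeds the payload that actually arrived.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

# A process with a secret sitting next to the heartbeat payload in memory.
memory = b"ping" + b"SECRET-PRIVATE-KEY"
print(heartbeat_response_vulnerable(b"ping", 22, memory))  # leaks the secret
print(heartbeat_response_fixed(b"ping", 22, memory))       # returns b""
```

A real attack repeats the request, harvesting up to 64K of process memory each time -- which is how private keys end up exposed without a single log entry.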

For the technically inclined, there's a great video online that explains all aspects of the hack. Webserver owners and hosts have their work to do in order to make their sites secure. That leaves out virtually every HP 3000, the server that was renamed e3000 in its final HP generation to emphasize its integration with the Internet. Hewlett-Packard never got around to implementing OpenSSL security in its web services for MPE/iX. 3000 systems are blameless, but that doesn't matter as much as insisting your secure website providers apply that 1.0.1g upgrade.
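For a migrated site wondering where it stands, one quick check uses Python's standard ssl module, which reports the OpenSSL build it was linked against. This is only a sketch of the idea -- a web server on the same host may link a different OpenSSL -- but the 1.0.1 through 1.0.1f range, plus the 1.0.2 betas, is the danger zone:

```python
import ssl

print(ssl.OPENSSL_VERSION)       # e.g. "OpenSSL 1.0.1f 6 Jan 2014" would be vulnerable
print(ssl.OPENSSL_VERSION_INFO)  # a tuple: (major, minor, fix, patch, status)

# Heartbleed affects the 1.0.1 releases before 1.0.1g; the letter suffix
# maps to the patch field, so 1.0.1f is patch 6 and the fixed 1.0.1g is 7.
info = ssl.OPENSSL_VERSION_INFO
vulnerable = info[:3] == (1, 0, 1) and info[3] < 7
print("potentially vulnerable" if vulnerable else "not in the affected 1.0.1 range")
```

The same one-liner works inside any Python-equipped Linux box on the intranet, which is where most 3000 shops actually run their web tier.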

The spookiest part of this story is that without log evidence, nobody knows if Heartbleed has been used over the past two years. Byrne's message is directed at IT managers who have Linux-driven websites in their datacenters. Linux has come to co-exist with MPE/iX at many sites over the last five years and more. This isn't like a report of a gang shooting that's happened in another part of town. Consider it more of a warning about the water supply.

In a bit of gallows humor, it looks as if the incomplete implementation of OpenSSL, frozen in an earlier edition of the software, puts it back in the same category as un-patched OpenSSL web servers: not quite ready for prime time.

Posted by Ron Seybold at 09:50 PM in Homesteading, Migration, Newsmakers, User Reports | Permalink | Comments (0)

April 08, 2014

Here it is: another beginning in an ending

Today's the day that Microsoft gives up its Windows XP business, but just like the HP 3000 exit at Hewlett-Packard, the vendor is conflicted. No more patches for security holes, say the Redmond wizards. But you can still get support, now for a fee, if you're a certain kind of Windows XP user.

It all recalls the situation of January 2009, when the support caliber for MPE/iX was supposed to become marginal. That might have been true for the typical kind of customer who, like the average business XP user, won't be paying anything to Microsoft for Service Packs that used to be free. But in 2009 the other, bigger sort of user was still paying HP to take 3000 support calls, fix problems, and even engineer patches if needed.

A lot of those bigger companies would've done better buying support from smaller sources. Yesterday we took note of a problem with MPE/iX and its PAUSE function in jobstreams, uncovered by Tracy Johnson at Measurement Specialties. In less than a day, a patch that seemed to be as missing as that free XP support of April 8 became available -- from an independent support vendor. What's likely to happen for XP users is the same kind of after-market service the 3000 homesteader has enjoyed.

Johnson even pointed us to a view of the XP situation and how closely it seems to mirror the MPE "end of life," as Hewlett-Packard liked to call the end of 2010. "Just substitute HP for Microsoft," Johnson said about a comparison with makers of copiers and makers of operating systems.

Should Microsoft Be Required To Extend Support For Windows XP? The question is being batted around on the Slashdot website today. One commenter said that if the software industry had to stick to the rules that apply to the rest of the office equippers, things would be different. Remember, just substitute HP (and MPE) for Microsoft and XP.

If Windows XP were a photocopier, Microsoft would have a duty to deal with competitors who sought to provide aftermarket support. A new article in the Michigan Law Review argues that Microsoft should be held to the same duty, and should be legally obligated to help competitors who wish to continue to provide security updates for the aging operating system, even if that means allowing them to access and use Windows XP's source code.

HP did, given enough time, help in a modest way to preserve the maintainability of MPE/iX. The vendor sold source code licenses for $10,000 each to support companies. In at least one case, the offer of help was proactive. Steve Suraci of Pivital Solutions said he was called by Alvina Nishimoto of HP and asked, "You want to purchase one of these, don't you?" The answer was yes. Nobody knew what good a source code license might do in the after-market. But HP was not likely to make the offer twice, and the companies who got one took on the expense as an investment in support in the future.

But there was a time in the 3000's run-up to that end-of-HP Support when the community wanted to take MPE/iX into open source status. That's why the advocacy group was named OpenMPE. Another XP commenter on Slashdot echoed the situation the 3000 faced during the first years of its afterlife countdown.

(Once again, just substitute HP and MPE for Microsoft and XP. In plenty of places, they'll be used together for years to come.)

XP isn't all that old, as evidenced by the number of users who don't want to get off of it. It makes sense that Microsoft wants to get rid of it -- there's no price for a support contract that would make it mutually beneficial to keep tech support trained on it and developers dedicated to working on it. But at the same time, Microsoft is not the kind of company that is likely to release it to the public domain either. The last thing they would want is an open source community picking it up, keeping it current with security patches and making it work on new hardware. That's the antithesis of the forced upgrade model.

Note: MPE/iX has been made to work with new hardware via the CHARON emulator. Patches are being written, too, even if they are of the binary variety. XP will hope to be so lucky, and it's likely to be. If not, there's the migration to Windows 7 to endure. But to avoid that expense for now, patches are likely to be required. The 3000 community can build many of them. That's what happens when a technology establishes reliability and matures.

Posted by Ron Seybold at 06:12 PM in Migration, News Outta HP | Permalink | Comments (1)

April 04, 2014

Save the date: Apr 16 for webinar, RUG meet

April 16 is going to be a busy day for MB Foster's CEO Birket Foster.

Long known for his company's Wednesday Webinars, Foster will be adding a 90-minute prelude on the same day as his own webinar about Data Migration, Risk Mitigation and Planning. That Wednesday of April 16 kicks off with the semi-annual CAMUS conference-call user group meeting. Foster is the guest speaker, presenting the latest information he's gathered about Stromasys and its CHARON HP 3000 emulator.

The user group meet begins at 10:30 AM Central Time, and Foster is scheduled for a talk -- as well as Q&A from listeners about the topic -- until noon that day. Anyone can attend the CAMUS meeting, even if they're not members of the user group. Send an email to CAMUS leader Terri Lanza at tlanza@camus.org to register, but be sure to do it by April 15. The conference call's phone number will be emailed to registrants. You can phone Lanza with questions about the meeting at 630-212-4314.

Starting at noon, there's an open discussion for attendees about any subject for any MANMAN platform (that would be VMS, as well as MPE). The talk in this session tends to run to very specific questions about the management and use of MANMAN. Foster is more likely to field questions of a more general MPE nature. The CHARON emulator made its reputation among MANMAN users in the VMS community, among other spots in the Digital world. You don't have to scratch very deep to find satisfied CHARON users there.

Then beginning at 1 PM Central, Foster leads the Data Migration, Risk Mitigation and Planning webinar, complete with slides and ample Q&A opportunity.

Registration for the webinar is through the MB Foster website. Like all of the Wednesday Webinars, it runs from 1-2 PM. The outline for the briefing, as summed up by the company:

Data migration is the process of moving an organization’s data from one application to another application—preferably without disrupting the business, users or active applications.

Data migration can be a routine part of IT operations in today’s business environment providing service to the whole company – giving users the data they need when they need it, especially for Report, BI (Business Intelligence) or analytics (including Excel spreadsheets) and occasionally for a migration to a new application. How can organizations minimize impacts of data migration downtime, data loss and minimize cost?

In this webinar we outline the best way to develop a data conversion plan that incorporates risk mitigation, and outlines business, operational and technical challenges, methodology and best practices.

The company has been in the data migration business since the 1980s. Data Express was its initial product for extracting and controlling data. It revamped the products after Y2K to create the Universal Data Access (UDA) product line. MBF-UDACentral supports the leading open source databases in PostgreSQL and MySQL, plus Eloquence, Oracle, SQL Server, DB2, and TurboIMAGE, as well as less-common databases such as Progress, Ingres, Sybase and Caché. The software can migrate any of these databases' data between one another.

Posted by Ron Seybold at 07:24 PM in Homesteading, Migration, Web Resources | Permalink | Comments (0)

March 26, 2014

Twice as many anti-virals: not double safety

Editor's note: While 3000 managers look over the need to update Windows XP systems in their company, anti-virus protection is part of the cost to consider. In fact, extra anti-virus help might pose a possible stop-gap solution to the end of Microsoft's XP support in less than two weeks. A lack of new security patches is part of the new XP experience. Migrating away from MPE-based hosting involves a lot more reliance on Windows, after all. Here's our security expert Steve Hardwick's lesson on why more than one A/V utility at a time can be twice as bad as a single good one.

By Steve Hardwick, CISSP
Oxygen Finance

If one is good, then two is better. Except with anti-virus software.

When it comes to A/V software there are some common misconceptions about capabilities. Recently some vendors, such as Adobe, have started bundling anti-virus components as free downloads with their updates. Some managers believe that if you have one anti-virus utility, a second can only make things safer. Once we look at how anti-virus software operates, you'll see why this is not the case. In fact, loading a second A/V tool can actually do more damage than good.

The function of an anti-virus utility is to detect and isolate files or programs that contain viruses. There are two fundamental ways in which the A/V utility does this. First, the anti-virus program keeps a data file that contains signatures for known viruses, and any files saved on the hard drive are scanned against those signatures to see if they contain malicious code -- very much like matching fingerprints. Once the A/V utility finds a match, the file is identified as potentially dangerous and quarantined to prevent any infection. Second, the anti-virus utility intercepts requests to access a file and scans it before it is run. This requires that the anti-virus program can inspect the file prior to it being launched.
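The fingerprint comparison can be made concrete in a few lines of code. Here's a minimal sketch of the signature-matching step, using made-up byte patterns in place of a real vendor's signature database:

```python
# Hypothetical signature database: name -> byte pattern. Real A/V products
# hold hundreds of thousands of entries plus heuristics, but the core
# on-disk matching step is a byte-pattern search like this one.
SIGNATURES = {
    "Demo.Dropper.A": b"\xde\xad\xbe\xef\x13\x37",
    "Demo.Macro.B": b"AutoOpen:payload",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

clean = b"quarterly report, nothing to see here"
infected = b"header" + b"\xde\xad\xbe\xef\x13\x37" + b"trailer"

print(scan_bytes(clean))     # []
print(scan_bytes(infected))  # ['Demo.Dropper.A']
```

Seen this way, the conflict described below is easy to picture: one product's signature database is, byte for byte, a file full of exactly the patterns the other product is hunting for.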

Anti-virus designers are aware that their utility is one of the primary targets of a hacker. After all, if the hacker can bypass the A/V system, the machine is open to attack -- commonly referred to as being owned, or pwned. So a core component of the A/V system is to constantly monitor its own performance to make sure it has not been compromised. If the A/V system detects that it is not functioning correctly, it will react as if there is a hacking attack and try to combat it.

So here's what happens if two anti-virus programs are loaded on the same machine. Initially, there are issues as the second system is installed. When the second utility is loaded it contains its own database of known virus signatures. The first anti-virus will see that signature file as something highly dangerous. After all, it will look like it contains a whole mass of virus files. It will immediately stop it from being used and quarantine it. Now the fun starts -- fun that can drive a system into a ditch.

The second anti-virus program will react to the quarantine of its signature file. The second A/V does not know if the issue is another A/V, or a hacker trying to thwart the operation of the system. So it will try to stop the quarantine action of the first A/V. The two systems will battle until one of them gives up and the other wins, or the operating system steps in and stops both programs. Neither outcome is what you're after.

If the two systems do manage to load successfully -- in many cases anti-virus programs are now built to recognize other A/V systems -- then a second battle occurs. When a file is opened, both A/V systems will try to inspect it before it is passed to the operating system for processing. As one A/V tries to inspect the file, the second one will try to stop the action. The two A/V systems will battle it out to take control and inspect the file ahead of each other.

Even if multiple systems do acknowledge each other and decide to work together, there are still some issues left. When a file is accessed, both systems will perform an inspection, and this increases the amount of time the virus scan will take. What's more, the anti-virus programs continually update their signature files. Once a new signature file is loaded, the A/V program will kick off a scan to see if the new file can detect any threats the old one did not catch. In most cases, new signature files arrive daily to the A/V system. That means both systems will perform file scans, sometimes simultaneously. This can bring a system to its knees -- because file scanning can be CPU intensive.

So two is worse than one, and you want to remove one of them. Removing A/V programs can be very tricky. This is because one goal of the hacker is to disable or circumvent the anti-virus system, so the A/V system is designed to prevent these attempts. If A/V programs were easy to uninstall, all the hacker would have to do is launch the uninstall program -- and in many cases, the A/V manufacturer does provide an uninstall program. Unfortunately, in many cases that uninstall may not get rid of all of the elements of the A/V. Several of the A/V manufacturers provide a utility that will clean out any remnants after the A/V system has been initially uninstalled.

So are there any advantages to having a second A/V system running? There is always a race between A/V companies to get out the latest signatures. Adding more A/V providers may increase your chances of getting wider coverage, but only very marginally. The cost of the decreased performance versus this marginal increase in detection is typically not worth it. Over time, A/V vendors tend to even out in their ability to provide up-to-date signature files.

In summary, the following practices make up a good approach to dealing with the prospects of multiple A/V systems.

1) Read installation screens before adding a new application or upgrade to your system. Think carefully before adding an A/V feature that your current solution provides. Even if a new feature is being provided, it may be worth checking with your current provider to see if they have that function, and adding it from them instead.

2) If you do get a second A/V system in there and you want to remove it, consult the vendor's technical web site regarding removal steps. Most A/V vendors have a step-by-step removal process. Sometimes they will recommend a clean-up tool after the initial uninstall.

3) If you do want to check your A/V system, choose an online version that will provide a separate scan without loading a utility. There are many to choose from -- search on “online antivirus check” in your favorite engine and pick one that is not your primary A/V vendor. Be careful -- something online may try to quarantine your current A/V system. But this will give you a safe way to check whether your current A/V is catching everything.

4) Don't rely on A/V alone. Viruses now come in myriad forms. No longer are they simple attacks on operating system weaknesses. Newer ones exploit the fallibility of browser code and are not dependent on the operating system at all. A good place to start looking at how you can improve your security is the CERT tips page at https://www.us-cert.gov/ncas/tips. By following safe computing practices, one A/V should be sufficient.

5) Beware of impostors. There are several viruses out there that mimic an A/V system. You may get a warning saying that your system is not working and to click on a link to download a better A/V system. Before clicking on the link, check the source of the utility. If you don't know how to do that, don't click on the link. You can always go back to Step 3 and check your A/V yourself.

Posted by Ron Seybold at 01:38 PM in Migration, Newsmakers | Permalink | Comments (0)