July 25, 2014

Pen testing crucial to passing audits

Migrated HP 3000 sites have usually just put sensitive corporate information into a wider, more public network. The next audit their business applications will endure is likely to have a security requirement far more complicated to pass. For those who are getting an IT audit on mission-critical apps hosted on platforms like Windows or Linux, we offer this guide to penetration testing.

By Steve Hardwick
CISSP, Oxygen Finance

Having just finished installing a new cable modem with an internal firewall/router, I decided to complete the installation by running a quick and dirty online penetration test. I suddenly realized that I am probably one of only a handful of home users who actually run a test after installing the modem. I used the Web utility Shields Up, which provides a quick scan for open ports. Having completed the test -- successfully, I may add -- I thought it would be a good opportunity to review Pen, or penetration, testing as an essential discipline.

Penetration testing is a crucial part of any information security audit. Pen tests are most commonly used to test network security controls, but they can be used for testing administrative controls too. Testing administrative controls, i.e. the security rules users must follow, is commonly called social engineering. The goal of penetration testing is to simulate hacker behavior to see if the security controls can withstand the attack.

The key elements of either type of test fall into three categories:

1) Information gathering: This involves gathering as much information as possible about the target without contacting the network or the system users.

2) Enumeration: To understand the target, a set of probing exercises is conducted to map out the various entry points. Once identified, the entry points are further probed to get more detail about their configuration and function.

3) Exploitation: After review of the entry points, a plan of attack is constructed to exploit any of the weaknesses discovered in the enumeration phase. The goal is to gain unauthorized access to information in order to steal, modify or destroy it.

Let's take a look at how all this works in practice.

Information gathering

There are a lot of techniques that can be used to gain information about a target. A simple whois on target URLs may reveal contact information that can be used in social engineering, for example. (I used it once to get the personal cell phone number of a target by looking at the registration of their web page.)
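
The lookup itself needs nothing more exotic than a TCP connection. Here's a minimal sketch in Python of a raw WHOIS query -- the protocol is just plain text over port 43; the registry server shown handles .com domains, and the domain is a placeholder:

    import socket

    def whois_query(domain, server="whois.verisign-grs.com"):
        # WHOIS (RFC 3912) is plain text over TCP port 43:
        # send the name, then read the reply until the server closes.
        with socket.create_connection((server, 43), timeout=10) as conn:
            conn.sendall((domain + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(whois_query("example.com"))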

Another commonly used method is dumpster diving. This is where trash from a target is examined for any useful information. Finding out the middle name of a CIO can often confuse an IT admin and open the door to masquerading as a company employee (I have personal experience of this one). There may even be old network diagrams that have been thrown out in the trash.

Another good technique is Google hacking. This is a technique where advanced Google commands are used to find information that may not be immediately apparent. For example, searching a website for any text files that contain the word “password.” Sounds amazing, but it can work. For more information, download a copy of this book published by the NSA.
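
To give a flavor of those advanced commands, here are a few illustrative dork strings. The operators are real Google syntax, but the target and queries are hypothetical examples -- use them only against sites you're authorized to test:

    # Hypothetical examples of Google "dorks" -- the operators are real,
    # the target domain is a placeholder.
    dorks = [
        'site:example.com filetype:txt "password"',   # text files mentioning passwords
        'site:example.com inurl:admin',               # stray admin pages
        'site:example.com ext:log',                   # log files left in web roots
    ]
    for query in dorks:
        print(query)   # each would be pasted into an ordinary Google search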

Enumeration

For social engineering, this can be as simple as chatting to people on their smoke breaks. Other activities can include taking zoom photographs of employee badges, or walking around a building looking for unlocked exits and entry doors.

For networks, this typically comes in multiple stages. First, the externally facing portions of the network are probed. Ports are scanned to see which ones are accepting traffic -- or open. Equipment can be queried for its make and its installed software. The presence of other network devices can also be detected; these can include air conditioning controllers, security camera recorders, and other peripherals connected directly to the Internet.
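
As a rough illustration of that first probing stage, here's a minimal TCP connect() scanner sketched in Python. Real tools like nmap are far more capable, and the host shown is a placeholder -- scan only systems you're authorized to test:

    import socket

    COMMON_PORTS = [21, 22, 23, 25, 53, 80, 110, 143, 443, 3389, 8080]

    def scan(host, ports=COMMON_PORTS, timeout=0.5):
        # A completed TCP connection means the port is open.
        open_ports = []
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    open_ports.append(port)
            except OSError:
                pass  # closed, filtered, or unreachable
        return open_ports

    print(scan("127.0.0.1"))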

Exploitation

An obvious question at this point: How can you tell if the person attacking your security systems is a valid tester or an actual hacker? The first step in any penetration test is to gain the approval of someone who can legitimately provide it. For example, approval should be from a CEO or CIO, not a network admin. The approval should also include the scope of any testing. This is sometimes called a get-out-of-jail-free card.

Once a list of potential entry points and their weaknesses has been compiled, a plan of attack can be put together. In the case of social engineering, this can include selecting a high-ranking employee to impersonate. Acting as a VP of Sales, especially if you include their middle name, and threatening a system admin with termination if they don't change their password can be a good way of getting into a network.

On the technical side, there are a lot of tools out there that can be used to focus on a specific make of equipment at a specific software level -- especially if it has not been patched in a while. Very often the enumeration and exploitation steps are repeated as various layers of defense are breached. There is a common scene in movies where the hacker breaches one firewall after another. Each time it is a process of enumeration followed by exploitation.

Useful tools

One of the most useful tools for performing penetration testing is BackTrack. It is a useful resource for two reasons. One, it contains a set of penetration testing tools on a live CD version of Linux (now maintained as Kali). The live CD version is very useful if you gain physical access, as you may be able to use it on an existing PC. Two, it contains a wide set of how-to's and training videos. This is a good first stop for those looking to understand what is available and how penetration testing is done. The tools and training are targeted to both beginners and experienced practitioners.

Another site that provides a variety of tools is insecure.org. The site provides links to individual tools that are focused on various parts of pen testing. The listing is broken down by section, with the tools for each listed. Both free and commercial tools appear in the site's compendium. There is also a directory of relevant articles on different security topics.

Finally, there is the Open Web Application Security Project (OWASP). This site is hosted by a non-profit organization that is solely focused on Web application security. OWASP provides a great deal of information and tools regarding testing and securing web applications, as this is a very common target for hackers. This can include a corporate web site, but also a web interface for controlling an HVAC unit remotely. There is even a sample flawed website, WebGoat, that can be used to hone testing skills.

Penetration testing is a very important part of a security audit. It provides a methodology for analyzing vulnerabilities in security controls within a company's infrastructure. In many cases testing will be performed by internal resources on a more frequent basis, with annual or semiannual tests conducted by qualified third-party testers. In all cases, the testing should be performed by someone who is qualified to the level required. An improperly executed pen test provides a dangerous level of false security. Plus, in many cases, security compliance will necessitate a pen test.

Posted by Ron Seybold at 04:50 PM in Migration | Permalink | Comments (0)

Follow the 3000 NewsWire on Twitter for immediate feeds of our latest news and more: twitter.com/3000newswire.

July 23, 2014

Migrators make more of mobile support app

A serious share of HP 3000 sites that have migrated to HP's alternative server solutions have cited vendor support as a key reason to leave MPE. Hewlett-Packard has been catering to their vendor-support needs with an iPhone/Android app, one which has gotten a refresh recently.

For customers who have Connected Products via HP's Remote Support technologies, the HP Support Center Mobile (HPSCm) app with Insight Online will automatically display devices which are remotely monitored. The app allows a manager to track service events and related support cases, view device configurations and proactively monitor HP contracts, warranties and service credits.

Using the app requires that the products be linked through the vendor's HP Passport ID. But this is the kind of attempt at improving support communication which 3000 managers wished for back in the 1990s. This is a type of mobile tracking that can be hard to find from independent support companies. To be fair, that's probably because a standard phone call, email or text will yield an immediate indie response rather than a "tell me who you are, again" pre-screener.

But HPSCm does give a manager another way to link to HP support documents (PDF files), something that would be useful if a manager is employing a tablet. That content is similar to what can be seen for free, or subject to contract, by public audiences via the HP Business Portal. (Some of that content is locked behind an HP Passport contract ID.) This kind of support -- for example, you can break into a chat with HP personnel right from the phone or tablet -- represents the service that some large companies seem to demand to operate their enterprise datacenters.

There's also a Self-Solve feature in the HP mobile app, to guide users to documents most likely to help in resolving a support issue. Like the self-check line in the grocery, it's supposed to save time -- unless you've got a rare veggie of a problem to look up.

Remote system administration isn't unheard of in the 3000 world. Allegro Consultants' iAdmin got an update to iOS 7 this month. It supports MPE servers, as well as HP-UX, Solaris, Linux and OS X. iAdmin requires a back-end subscription for each server monitored, just like the HPSCm app. But iAdmin draws its information from a secure server in the cloud; the monitored systems feed their status to that secure server.

HPSCm offers one distinction from independent service arrangements: managers and companies can report they're getting mobile updates via HP itself -- instead of a more focused support company, like Pivital Solutions, which specializes in 3000 issues. Migrated sites have stopped caring about 3000 support, but those who are still mulling over the idea of using more modern servers might try out the HP app. They can do so if they've already registered monitoring access for servers and such via HP Passport.

Posted by Ron Seybold at 05:06 PM in Migration, News Outta HP | Permalink | Comments (0)

July 21, 2014

Maximum Disc Replacement for Series 9x7s

Software vendors, as well as in-house developers, keep Series 9x7 servers available for startup to test software revisions. There are not very many revisions to MPE software anymore, but we continue to see some of these oldest PA-RISC servers churning along in work environments.

9x7s, you may ask -- weren't they retired long ago? Less than one year ago, one reseller was offering a trio for between $1,800 (a Series 947) and $3,200. Five years ago this week, tech experts were examining how to modernize the drives in these venerable beasts. One developer figured in 2009 they'd need their 9x7s for at least five more years. For the record, 9x7s date from the early 1990s, so figure that some of them are beyond 20 years old now.

"They are great for testing how things actually work," one developer reported, "as opposed to what the documentation says, a detail we very much need to know when writing migration software. Also, to this day, if you write and compile software on 6.0, you can just about guarantee that it will run on 6.0, 6.5, 7.0 and 7.5 MPE/iX."

Some of the most vulnerable elements of machines from that epoch include those disk drives. 4GB units are installed inside most of them. Could something else replace these internal drives? It's a valid question for any 3000 that runs with these wee disks, but it becomes even more of an issue with the 9x7s. MPE/iX 7.0 and 7.5 are not operational on that segment of 3000 hardware.

Even though the LDEV1 drive will only support 4GB of space visible to MPE/iX 6.0 and 6.5, there's always LDEV2. You can use virtually any SCSI (SE SCSI or FW SCSI) drive, as long as you have the right interface and connector.

There's a Seagate disk drive that will stand in for something much older that's bearing an HP model number. The ST318416N 18GB Barracuda model -- which was once reported at $75, but now seems to be available for about $200 or so -- is in the 9x7's IOFDATA list of recognized devices, so it should configure straight in. Even though that Seagate device is only available as refurbished equipment, it's still going to arrive with a one-year warranty -- a lot longer than the one on any HP-original 9x7 disks still working in the community.

One developer quipped to the community, five years ago this week, "On the disc front at least that Seagate drive should keep those 3000s running, probably longer than HP remains a Computer Manufacturer."

But much like the 9x7 being offered for sale this year, five years later HP is still manufacturing computers, including its Unix and Linux replacement systems for any 3000 migrating users. 

So to refresh drives on the 9x7s, configure these Barracuda replacement drives in LDEV1 as the ST318416N -- it will automatically use 4GB (its max visible capacity) on reboot.

As for the LDEV2 drives, there are no real logical size limits, so anything under 300GB would work fine -- 300GB was the limit for MPE/iX drives until HP released its "Large Disk" patches for MPE/iX, MPEMXT2/T3. But those patches weren't written for the 9x7s, as they don't run 7.5.

Larger drives were not tested for these servers because of power and heat dissipation issues. Some advice from the community indicates you'd do better not to greatly increase the power draw above what those original-equipment drives require. The specs for those HP internal drives may be a part of your in-house equipment documentation. Seagate offers a technical manual for the 18GB Barracuda drive at its website, for power comparisons.

Posted by Ron Seybold at 07:51 PM in Hidden Value, Homesteading, Migration, User Reports | Permalink | Comments (2)

July 14, 2014

Protecting a Server from DDoS Attacks

For anybody employing a more Web-ready server OS than MPE, or any such server attached to a network, Distributed Denial of Service (DDoS) attacks present a hot security and service-level threat. Migrating sites will do well to study up on these hacks. In the second of two parts, our security writer Steve Hardwick shares preventative measures that reduce the impacts to commodity-caliber enterprise computing such as Linux, Unix or Windows.

By Steve Hardwick, CISSP
Oxygen Finance

DDoS attacks can be very nasty and difficult to mitigate. However, with the correct understanding of both the source and impact of these attacks, precautions can be taken to reduce their impact. This includes preventing endpoints from being used as part of a botnet to attack other networks. For example, a DDoS virus may not affect the infected computer, but it could wreak havoc on the intended target.

One legitimate question is why a DDoS attack would be used. There are two main reasons:

1) As a primary attack model. For example, a group of hacktivists want to take down a specific website. A virus is constructed that specifically targets the site and then is remotely triggered. The target site is now under serious attack.

2) As part of a multi-stage attack. A firewall is attacked by an amplified Ping Flood attack. The firewall can eventually give up and re-boot (sometimes referred to as "failing over"). The firewall may reboot into a "safe" mode, fail-over, or back-up configuration. In many cases this back-up configuration contains minimal programming and is a lot easier to breach in order to launch the next phase of the attack. I've had experiences where the default fail-over configuration of a router was wide open -- allowing unfiltered in-bound traffic.

DDoS attacks are difficult to mitigate, as they attack several levels of the network. However, there are some best practices that can be employed to help lessen the threat of DDoS attacks.

1) Keep all software up to date. This includes end user machines, servers, security devices (IDPs, for example, as they can be targets of DDoS attacks meant to disable them), routers and firewalls. To be truly effective, an attack needs to secure a network of machines to source the attacks, so preventing these machines from becoming infected reduces the sources of attack.

2) Centralized monitoring: By using a central monitoring system, a clear understanding of the network operation can be gained. Plus, any variance in traffic patterns can be seen; this is especially true of multistage attacks. (A simple variance check is sketched after this list.)

3) Apply filtering: Many firewalls contain specific sections for filtering out DDoS attacks. Disabling PING responses can also help reduce susceptibility. Additionally, firewall filtering policies must be continually reviewed. This includes audits of the policies themselves, or a simulated DDoS attack on networks at periods of low activity. Don't forget to make sure that firewall backup configurations are reviewed and set correctly.

4) Threat intelligence: Constantly review the information regarding new threats. There are now many media services that will provide updates about newly detected threats.

5) Outsource: There are also several DDoS mitigation providers out there offering services that help corporations secure their networks against DDoS attacks. A quick web search will show many of the well-known companies in this space.

6) Incident response plan: Have a good plan to respond to DDoS-level threats. This must include an escalation path to a decision maker who can respond to a threat, since the response may include isolating critical systems from the network.
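
As promised above, here's a minimal sketch (in Python, with made-up sample numbers) of the kind of variance check a central monitor applies: keep a moving baseline of request counts and flag any interval that spikes far above it.

    from collections import deque

    class TrafficBaseline:
        # Flag a sampling interval whose count far exceeds the moving average.
        def __init__(self, window=60, threshold=3.0):
            self.samples = deque(maxlen=window)  # e.g. requests per minute
            self.threshold = threshold           # alert at this multiple of baseline

        def observe(self, count):
            baseline = sum(self.samples) / len(self.samples) if self.samples else None
            self.samples.append(count)
            return baseline is not None and count > self.threshold * baseline

    monitor = TrafficBaseline()
    for minute, count in enumerate([120, 130, 115, 125, 4800]):
        if monitor.observe(count):
            print(f"minute {minute}: possible DDoS -- {count} requests")

A real monitor would feed in live counters from routers and firewalls; the principle -- compare now against a learned baseline -- is the same.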

Posted by Ron Seybold at 10:12 AM in Migration | Permalink | Comments (0)

July 11, 2014

Understanding the Roots of DDoS Attacks

Editor's Note: While the summertime pace of business is upon us all, the heat of security threats remains as high as this season's temperatures. Only weeks ago, scores of major websites, hosted on popular MPE-replacement Linux servers, were knocked out of service by Distributed Denial of Service (DDoS) attacks. Even our mainline blog host TypePad was taken down. It can happen to anybody employing a more Web-ready server OS than MPE, to any such server attached to a network -- so migrating sites will do well to study up on these hacks. Our security writer Steve Hardwick shares background today, and preventative measures next time.

By Steve Hardwick, CISSP
Oxygen Finance

Distributed Denial of Service (DDoS) is a virulent attack whose numbers have grown over the past couple of years. The NSFOCUS DDoS Threat Report 2013 recorded 244,703 incidents of DDoS attacks throughout last year. Perhaps the best way to understand this attack is to first look at Denial of Service (DoS) attacks. The focus of a DoS attack is to remove the ability of a network device to accept incoming traffic. DoS attacks can target firewalls, routers, servers or even personal computers. The goal is to overload the network interface such that either it is unable to function or it shuts down.

A simple example of such an attack is a Local Area Network Denial. This LAND attack was first seen around 1997. It is accomplished by creating a specially constructed PING packet. The normal function of ping is to take the incoming packet and send a response to the source machine, as denoted by the source address in the packet header. In a LAND attack, the source IP address is spoofed and the IP address of the target is placed in the source address location. When the target gets the packet, it will send the ping response to the source address, which is its own address. This will cause the target machine to repeatedly send responses to itself and overload the network interface. Although not really a threat today, some older versions of operating systems -- such as the still-in-enterprises Windows XP SP2, or Mac OS MacTCP 7.6.1 -- are susceptible to LAND attacks.
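
Detecting a LAND packet is straightforward, since its tell-tale is a source address equal to the destination. Here's a minimal detection sketch in Python -- it assumes the third-party scapy package is installed and that you have capture privileges on the machine:

    # pip install scapy; packet capture usually requires root privileges.
    from scapy.all import IP, sniff

    def flag_land(pkt):
        # The LAND signature: spoofed source address equals the destination.
        if IP in pkt and pkt[IP].src == pkt[IP].dst:
            print(f"possible LAND packet: {pkt[IP].src} -> {pkt[IP].dst}")

    sniff(filter="ip", prn=flag_land, store=False)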

So where does the Distributed part come from? Many DoS attacks rely on the target machine to create runaway conditions that cause the generation of a torrent of traffic that floods the network interface. An alternative approach uses a collaborative group of external machines to source the attack. For example, a virus can be written that sends multiple emails to a single email address. The virus also contains code to send it to everyone in the recipient's email address book. Before long, the targeted server is receiving thousands of emails per hour -- and the mail server becomes overloaded and effectively useless.

Another DoS example is a variant of the LAND attack, a Ping flood attack. In this attack a command is issued on a machine to send ping packets as fast as possible without waiting for a response (using the -f option in the ping command, for example). If a single machine is used, then the number of packets may not overwhelm the target. However, if a virus is constructed such that the ping flood will occur at a specific time, then it can be sent to multiple machines.

When a predefined trigger time is reached, all of the infected machines start sending ping floods to the target. The collection of infected machines, called zombies, is known as a botnet or an amplification network. A good example is the Flashback Trojan, a contagion that was found to have infected more than 600,000 Mac OS X systems. This created a new phenomenon -- Mac-based botnets.

Before discussing some other attacks, it is necessary to understand a little more about firewalls and servers. In the examples above, the target was at the IP address layer of the network interface. However, network equipment has additional functionality on top of the IP processing function. This includes session management of the IP connections and application level functions.

Newer attacks have now started focusing on these session and application functions. This requires fewer resources and can create broader-based attacks that can target multiple network elements with a single virus. A good example of this class is the HTTP flood. Repeated HTTP GET requests are made to retrieve information from a web server. The sending machine does not wait for the information to be sent, but keeps sending multiple requests. The web server will try to honor the requests and send out the content. Eventually the multiple requests will overload the web server. Since these look like standard HTTP requests, they are difficult to mitigate.
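
One coarse way to spot an HTTP flood after the fact is to count GET requests per client in the web server's access log. A minimal sketch in Python, assuming the common log format (client IP first on each line) and a hypothetical access.log path:

    from collections import Counter

    def flood_suspects(log_lines, limit=1000):
        # In common log format the client IP is field 0 and the quoted
        # request starts at field 5, e.g. "GET /index.html HTTP/1.1".
        hits = Counter()
        for line in log_lines:
            fields = line.split()
            if len(fields) > 5 and fields[5] == '"GET':
                hits[fields[0]] += 1
        return [(ip, n) for ip, n in hits.most_common() if n > limit]

    with open("access.log") as f:
        for ip, n in flood_suspects(f):
            print(f"{ip}: {n} GET requests")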

Next time: Why DDoS is used, and how to reduce the threats to servers.

Posted by Ron Seybold at 06:56 PM in Migration | Permalink | Comments (0)

July 08, 2014

That MPE spooler's a big piece to replace

Migration transitions have an unexpected byproduct: They make managers appreciate the goodness that HP bundled into MPE/iX and the 3000. The included spooler is a great example of functionality which has an extra cost to replace in a new environment. Unlike Windows with MBF Scheduler, Unix has to work very hard to supply the same abilities -- and that's the word from one of the HP community's leading Unix gurus.

Bill Hassell spread the word about HP-UX treasures for years from his own consultancy. While working for SourceDirect as a Senior Sysadmin expert, he described a migration project whose manager found that Unix tools weren't performing at enterprise levels. Hassell said HP-UX doesn't filter many print jobs.

MPE has an enterprise-level print spooler, while HP-UX has a very primitive printing subsystem. hpnp (HP Network Printing) is nothing but a network card (JetDirect) configuration program. The ability to control print queues is very basic, and there is almost nothing to monitor or log print activities similar to MPE. HP-UX does not have any print job filters except for some basic PCL escape sequences, such as changing the ASCII character size.

While a migrating shop might now be appreciating the MPE spooler more, some of them need a solution to replicate the 3000's built-in level of printing control. One answer to the problem might lie in using a separate Linux server to spool, because Linux supports the classic Unix CUPS print software much better than HP-UX.

The above was Glen Kilpatrick's idea as a Senior Response Center Engineer at Hewlett-Packard. Like a good support resource, Kilpatrick was a realist in solving the "where's the Unix spooler?" problem.

The "native" HP-UX scheduler / spooler doesn't use (or work like) CUPS, so if you implement such then you'll definitely have an unsupported solution (by HP anyway). Perhaps you'd be better off doing "remote printing" (look for that choice in the HP-UX System Administration Manager) to a Linux box that can run CUPS.

This advice shovels in a whole new environment to address an HP-UX weakness, however. So there's another set of solutions available from independent resources -- third-party spooling software. These extra-cost products accommodate things like default font differences between print devices, control panels, orientation and more. Michael Anderson, a consultant just finishing up a 3000-to-Unix migration, has pointed out problems that arose during the migration.

My client hired a Unix guru (very experienced, someone I have lots of respect for) to set this up a year or more ago. They recreated all the old MPE printer LDEVs and CLASS names in CUPS, and decided on the "raw" print format so the application can send whatever binary commands to the printers. Now they have some complaints about the output not being consistent. My response was, "Absolutely! There were certain functions that the MPE spooler did for you at the device class/LDEV level, and you don't have that with CUPS on HP-UX."

Anderson has faith that learning more about CUPS will uncover a solution. "One plus for CUPS, it does make the applications more portable," he added.

There's one set of tasks that can solve the problem without buying a commercial spooler for Unix, but you'll need experience with adding PCL codes and control of page layouts. Hassell explains:

Yes, [on HP-UX] it's the old "Why doesn't Printer 2 print like Printer 3?" problem. So unlike the Mighty MPE system, where there is an interface to control prepends and postpends, in HP-UX you'll be editing the model.orig directory where each printer's script is located. It's just ASMOS (A Simple Matter of Scripting). The good news is that you already have experience adding these PCL codes, and you understand what it takes to control logical page layouts. The model.orig directory is located in /etc/lp/interface/model.orig.

What Anderson needs to accomplish in his migration is the setup of multiple config environments for each printer, all to make "an HP-UX spooler send printer init/reset instructions to the printer, before and after the print job. In other words: one or more printer names, each configured differently, yet all point to the same device."
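
What such a prepend/postpend filter amounts to can be sketched briefly. Here's a minimal illustration in Python rather than the shell of a real model.orig script; the PCL reset (ESC E) is standard, while the setup string is a hypothetical placeholder for whatever init codes a given printer name should send:

    import sys

    RESET = b"\x1bE"      # standard PCL printer-reset escape sequence
    SETUP = b"\x1b&l1O"   # hypothetical example: PCL landscape orientation

    def filter_job(src=sys.stdin.buffer, dst=sys.stdout.buffer):
        # Prepend init codes, pass the raw job through, append a reset --
        # the prepend/postpend work MPE's spooler did at the device class level.
        dst.write(RESET + SETUP)
        dst.write(src.read())
        dst.write(RESET)

    if __name__ == "__main__":
        filter_job()

Each differently-configured printer name would invoke the filter with its own SETUP string, all pointing at the same physical device.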

You won't get that for HP-UX without scripting, the experts are saying, or an external spooling server under Linux, or a third party indie spooler product. 

3000 managers who want third party expertise to support a vast array of print devices are well served to look at ESPUL and PrintPath spooling software from veteran 3000 developer Rich Corn at RAC Consulting. Corn's the best at controlling spoolfiles for 3000s, and he takes networked printing to a new level with PrintPath. Plenty of 3000 sites never needed to know all that his work could do, however -- because that MPE spooler looks plenty robust compared to what's inside the Unix toolbox.

Posted by Ron Seybold at 01:56 PM in Migration, Web Resources | Permalink | Comments (0)

July 07, 2014

User says licensing just a part of CHARON

Licensing the CHARON emulator solution at the Dairylea Cooperative has been some work, with some suppliers more willing than others to help in the transfer away from the company's Series 969. The $1.7 billion organization covers seven states and at least as many third-party vendors. "We have a number of third party tools, and we worked with each vendor to make the license transfers," said IT Director Jeff Elmer.

“We won’t mention any names, but we will say that some vendors were absolutely wonderful to work with, while others were less so. It’s probably true that anyone well acquainted with the HP 3000 world could make accurate guesses about which vendors fell in which camp.”

Some vendors simply allowed a transfer at low cost or no cost; others gave a significant discount because Dairylea has been a long-time customer paying support fees. "A couple wanted amounts of money that seemed excessive, but in most cases a little negotiation brought things back within reason," Elmer said. The process wasn't any different than a customary HP 3000 upgrade: hardware costs were low, but software fees were significant.

“The cumulative expense of the third party software upgrades was nearly a deal-breaker,” he said. “In the end, our management was concerned enough about reliance on old disk drives that they made the decision to move forward. In our opinion it was money very well spent.”

Just as advertised, software that runs on an HP RISC server runs under CHARON. "Using those third party tools on the emulator is completely transparent," Elmer said. "We had one product for which we had to make a command change in a job stream, and we had to make a mind-shift in evaluating what our performance monitoring software is telling us. Apart from that, it is business as usual."

Posted by Ron Seybold at 07:48 AM in Migration, User Reports | Permalink | Comments (0)

June 30, 2014

Update: Open source, in 3000 ERP style

An extensive product roadmap is part of the Openbravo directions for this commercial open source ERP solution.

Five years ago today, we chronicled the prospects of open source software for HP 3000s. We mentioned the most extensive open source repository for MPE systems, curated by Brian Edminster and his company Applied Technologies. MPE-OpenSource.org has weathered these five years of change in the MPE market and still serves open source needs. But in 2009 we also were hopeful about the arrival of OpenBravo as a migration solution for 3000 users who were looking for an ERP replacement of MANMAN, for example -- without investing in the balky request-and-wait enhancement tangle of proprietary software.

Open source software is a good fit for the HP 3000 community member, according to several sources. Complete app suites have emerged and rewritten the rules for software ownership. An expert consulting and support firm for ERP solutions is proving that a full-featured ERP app suite, Openbravo, will work for 3000 customers by 2010.

[Editor's note: We meant "work for 3000 customers" in the sense of being a suitable ERP replacement for MPE-based software.]

Launched in the 1990s as a software collective at the University of Navarra, and since evolved into Openbravo, S.L., Openbravo is used by manufacturing firms around the world. Openbravo is big stuff -- so large that it was one of the ten largest projects on the SourceForge.net open source repository, until Openbravo outgrew SourceForge. The software, its partners and users have their own Forge running today. In 2009, Sue Kiezel of Entsgo -- part of the Support Group's ERP consulting and tech support operations -- said, "We believe that within six to nine months, the solution will be as robust as MANMAN was at its best."

From the looks of its deep wiki, and a quick look into the labs where development is still emerging for advanced aspects such as analytics, Entsgo's prediction has come to fruition. Managing manufacturing is easily within the pay-grade of open source solutions like Openbravo.

What we reported on five years ago is no less true today; open source is an essential part of enterprise IT by now. Entsgo's predictions were spot-on.

Open source solutions can span a wide range of organizational styles, from code forges with revisions and little else to the one-stop feel of a vendor, minus the high costs and long waits. Openbravo is in the latter category, operating with hundreds of employees after having received more than $18 million in funding. If that doesn't sound much like the Apache and Samba open source experience, then welcome to Open Source 2.0, where subscription fees have replaced software purchases and partner firms join alongside users to develop the software.

Openbravo describes the model as a "commercial open source business model that eliminates software license fees, providing support, services, and product enhancements via an annual subscription." Entsgo says you have a company that supports it: you can subscribe to it, and the company verifies it, upgrades it and maintains it — all of that under one company name.

“In the 3000 community, we’re used to the independence of the open source model,” said Kiezel. “We’re used to tools that are intuitive, and if you look at us, we should be able to embrace open source more than any other community.”

Open source practices turn the enhancement experience upside down for an application. In the traditional model, a single vendor writes software at a significant investment for high profits, then accepts requests for enhancements and repairs. A complex app such as ERP might not even get 10 percent of these requests fulfilled by the average vendor.

The open source community around Openbravo operates like many open source enterprises. Companies create their own enhancements, license them back to the community, and can access bug fixes quickly—all because the ownership is shared and the source code for the app is open.

Posted by Ron Seybold at 07:33 PM in Migration, Newsmakers, Web Resources | Permalink | Comments (0)

June 27, 2014

Mansion meet takes first comeback steps

A few hours ago, the first PowerHouse user group meeting and formation of a Customer Advisory Board wrapped up in California. Russ Guzzo, the guiding light for PowerHouse's comeback, told us a few weeks ago that today's meeting was just the first of several that new owner UNICOM Global was going to host. "We'll be taking this on the road," he said, just as the vendor was starting to call users to its meeting space at the PickFair mansion in Hollywood.

We've heard that the meeting was webcast, too. It's a good idea to extend the reach of the message as Unicom extends the future of the PowerHouse development toolset.

This is a product that started its life in the late 1970s. But so did Unix, so just because a technology was born more than 35 years ago doesn't limit its lifespan. One user, IT Director Robert Coe at HPB Management Ltd. in Cambridge, wants to see PowerHouse take a spot at the table alongside serious business languages. Coe understands that going forward might mean leaving some compatibility behind. That's a step Hewlett-Packard couldn't ever take with MPE and the HP 3000. Some say that decision hampered the agility of the 3000's technical and business future at HP. Unix, and later Linux, could become anything, unfettered by compatibility.

Coe, commenting on the LinkedIn Cognos Powerhouse group, said his company has been looking at a migration away from Powerhouse -- until now.

I would like to see Powerhouse developed into a modern mainstream language, suitable for development of any business system or website. If this is at the expense of backwards compatibility, so be it. We are developing new systems all the time, and at the moment are faced with having to use Java, C# or similar. I would much rather be developing new systems in a Powerhouse-based new language, with all the benefits that provides, even if it is not directly compatible with our existing systems.

The world would be a better place if Powerhouse was the main platform used for development! I hope Unicom can provide the backing, wisdom and conviction to enable this to happen.

There were many business decisions made about the lifecycle and sales practices for PowerHouse over the last 25 years that hampered the future of the tool. Coe found technical faults with the alternatives to PowerHouse -- "over-complicated, hard to learn, slow to develop, difficult to maintain, prone to bugs, with far too much unnecessary and fiddly syntax."

But he was also spot-on in tagging the management shortcomings of the toolset's previous owners:

  • Cognos concentrated on BI tools, as there appeared to be more money in them 
  • IBM bought Cognos for its BI tools for the same reason 
  • Powerhouse development more or less stopped many years ago 
  • Licences were very expensive compared to other languages, which were often open source and free 
  • Powerhouse was not open source and therefore didn’t get the support of the developer community 
  • Backwards compatibility was guaranteed, stifling major development

Powerhouse is a far superior platform for development of business systems. I cringe at the thought of having to use the likes of Java to replace our current systems or to develop our future systems!

Bob Deskin, hired by UNICOM to advise the new owners on a growth strategy for the toolset, reminded Coe that things like Java, Ruby, Python and Perl were not purpose-built for business.

Don't be too hard on those other languages. Some of them aren't what I would call complete programming languages. Some are scripting languages. And some are trying to be all things to all people. PowerHouse was always focused on business application development. Hang in for a while longer and watch what UNICOM can do.

Posted by Ron Seybold at 07:51 PM in Homesteading, Migration, Newsmakers | Permalink | Comments (0)

June 25, 2014

What level of technology defines a legacy?

Even alternatives to the HP 3000 can be rooted in legacy strategy. This week Oracle bought Micros, a purchase that's the second-largest in Oracle history. Only buying Sun has cost Oracle more, to this point in the company's legacy. The twist in the story is that Micros sells a legacy solution: software and hardware for the restaurant, hospitality and retail sectors. HP 3000s still serve a few of those outlets, such as duty-free shops in airports.

Micros "has been focused on helping the world’s leading brands in our target markets since we were founded in 1977," said its CEO. The Oracle president who's taking on this old-school business is Mark Hurd, an executive who calls to mind other aspects of legacy. Oracle's got a legacy to bear since it's a business solution that's been running companies for more than two decades. Now the analysts are saying Oracle will need to acquire more of these customers. Demand for installing Oracle is slowing, they say.

In the meantime, some of the HP marketplace is reaching for ways to link with Oracle's legacy. There's a lot of data in those legacy databases. PowerHouse users, invigorated by the prospects of new ownership, are reaching to find connection advice for Oracle. That's one legacy technology embracing another.

Legacy is an epithet that's thrown at anything older. It's not about one technology being better than another. Legacy's genuine definition involves utility and expectations.

It's easy to overlook that like Oracle, Unix comes in for this legacy treatment by now. Judging only by the calendar, it's not surprising to see the legacy tag on an environment that was just finding its way in the summer of 1985, while HP was still busy cooking up a RISC revolution that changed the 3000's future. Like the 3000's '70s ideal of interactive computing -- instead of batch operations -- running a business system with Unix in the 1980s was considered a long shot.

An article from a 1985 Computerworld, published the week that HP 3000 volunteer users were manning the Washington DC Interex meet, considered commercial Unix use something to defend. Like some HP 3000 companies of our modern day, these Unix pioneers were struggling to find experienced staff. Unix was unproven, and so bereft of expertise. At least MPE has proven its worth by now.

In the pages of that 1985 issue, Charles Babcock reported on Unix-for-business testimony.

NEW YORK -- Two large users of AT&T Unix operating systems in commercial settings told attendees at the Unix Expo conference that they think they have made the right choice. Both said, however, that they have had difficulty building a professional staff experienced in Unix.

The HP 3000 still ran on MPE V in that month. Apple's Steve Jobs had just resigned from the company he founded. Legacy was leagues away from a label for Unix, or even Apple, in that year. It was so far back that Oracle wondered why it would ever need to build a version of its database for HP 3000s. IMAGE was too dominant, especially for a database bundled with a business server. The 3000, even in just its second decade of use, was already becoming a legacy.

That's legacy as in a definition from Brian Edminster of Applied Technologies. The curator of open source solutions, and caretaker of a 3000 system for World Duty Free Group, shared this.

A Legacy System is one that's been implemented for a while and is still in use for a very important reason: Even if it's not pretty -- It works.

A Legacy System is easy to identify in nearly any organization:  It's the one that is constructed with tools that aren't 'bleeding edge.'

Posted by Ron Seybold at 09:22 PM in History, Migration | Permalink | Comments (0)

June 20, 2014

Time to Sustain, If It's Not Time to Change

In the years after HP announced its 3000 exit, I helped to define the concept of homesteading. Not exactly new, and clearly something expected in an advancing society. Uncle Lars' homestead, at left, showed us how it might look with friendly droids to help on Tatooine. The alternative 3000 future that HP trumpeted in 2002 was migration. But it's clear by now that the choice between moving and standing fast presented a fuzzy picture of MPE users' future.

What remains at stake is transformation. Even this week, any company that's relying on MPE, as well as those making a transition, is judging how it will look in a year, or three, or five. We've just heard that software rental is making a comeback at one spot in the 3000 world. By renting a solution to remain on a 3000, instead of buying one, a manager is planning to first sustain his shop's practices -- and then to change.

Up on the LinkedIn 3000 Community page I asked if the managers and owners were ready to purchase application-level support for 3000 operations. "It looks like several vendors want to sell this, to help with the brain-drain as veteran MPE managers retire." I asked that question a couple of years ago, but a few replies have bubbled up. Support has changed with ownership of some apps, such as Ecometry, and with some key tools such as NetBase.

"Those vendors will now get you forwarded to a call center in Bangalore," said Tracy Johnson, a veteran MPE manager at Measurement Specialties. "And by the way, Quest used to be quick on support. Since they got bought by Dell, you have to fill in data on a webpage to be triaged before they'll even accept an email."

Those were not the kind of vendors I was suggesting. Companies will oversee and maintain MPE apps created in-house, once the IT staff changes enough to lose 3000 expertise. But that led to another reply about why anyone might pursue the course to Sustain, when the strategy to Change seems overwhelming.

Managed Business Systems, one of the original HP Platinum Migration partners, was ready to do this sustaining as far back as a decade ago. Companies like the Support Group, Pivital Solutions -- they're still the first-line help desks and maintainers for 3000 sites whose bench has grown thin. Fresche Legacy made a point of offering this level of service, starting from the last days when it was called Speedware. There are others willing to take over MPE app operations and care, and some of these vendors have feet planted firmly in the Change camp, as well as staking out the Sustain territory.

Todd Purdum of Sherlock Systems wondered on LinkedIn if there really was a community that would take on applications running under MPE. We ran an article last year about the idea of a backstop in case your expert got ill or left the company. Five years earlier, we could point to even smaller companies; firms like 3K Ranger and Pro 3K are available to do that level of work. Purdum, by his figuring, believes such backstops are rare.

Although I agree with the need for sustained resources to keep an HP3000 running, I'm not sure that "several vendors" can provide this. We have been in the business for over 23 years, and as a leader in providing hardware and application support for HP 3000s and MPE, I don't see many other vendors truly being capable of providing this.

Purdum asked, tongue-in-cheek, if there was a 3000 resurgence on the way he didn't see coming. No one has a total view of this market. But anecdotal reports are about all anyone has been able to use for most of a decade. Even well-known tool vendors are using independent support companies for front-line support. Purdum acknowledged that the support would be there, but wondered who'd need it.

Customers who use MPE (the HP 3000) know their predicament, and offering more salvation does not help them move into the right direction. I am only a hardware support company (that had to learn all HP 3000 applications) and it disappoints me a little that the companies you mentioned, most of which are software companies, haven't developed software that will allow these folks to finally move on and get off of this retired platform. 

I can't change it, I just sustain it... applications and all.

Sustaining mission-critical use of MPE is the only choice some companies have in 2014. Their parent corporations aren't ready for a hand-off, or the budget's not right, or yes, their app vendor isn't yet ready with a replacement app. That's what's leading to software rentals. When a company chooses to homestead, it must build a plan to Sustain. HP clearly retired its 3000 business more than three years ago. But that "final" moving on, into the realms of real change, follows other schedules around the world. On the world of Tatooine, Lars first changed by setting up a moisture farm, then sustained. And then everything changed for him and Luke Skywalker. Change-sustain-change doesn't have a final state.

Posted by Ron Seybold at 07:07 PM in Homesteading, Migration, User Reports | Permalink | Comments (0)

June 19, 2014

Making Sure No New Silos Float Up

Cloud computing is a-coming, especially for the sites making their migrations off of the HP 3000. But even if an application is making a move to a cloud solution, shouldn't its information remain available for other applications? Operational systems remain mission-critical inside companies that use things like Salesforce.

To put this question another way, how do you prevent the information inside a Salesforce.com account from becoming a silo: that container that doesn't share its contents with other carriers of data?

The answer is to find a piece of software that will extract the data in a Salesforce account, then transform it into something that can be used by another database. Oracle, SQL Server, Eloquence, even DB2 -- all are active in the community that was once using TurboIMAGE. Even though Salesforce is a superior CRM application suite, it often operates alongside other applications in a company. (You might call these legacy apps, if they're older than your use of Salesforce. That legacy label is kind of a demerit, though, isn't it?)

Where to find such an extraction tool? A good place to look would be providers of data migration toolsets. This is a relatively novel mission, though. It doesn't take long for the data to start to pile up in Salesforce. Once it does, the Order Entry, CRM, Shipping, Billing and Email applications are going to be missing whatever was established in Salesforce initially. The popular term for this kind of roadblock is Cloud Silo.

It reminds me of the whole reason for getting data migration capabilities, a reason nearly as old as what was once called client-server computing. Back in the days when desktop PCs became a popular tool for data processing, information could start out on a desktop application, not just from a terminal. Getting information from one source to another, using automation, satisfies the classic mission of "no more rekeying."

It's a potent and current mission. Just because Salesforce is a new generation app, and based in the cloud, doesn't make it immune to rekeying. You need a can opener, if you will, to crack open its data logic. That's because not every company is going all-in on Salesforce.

The trick here is to find a data migration tool that understands and works well with the Salesforce API. This Application Program Interface is available to independent companies, but it will require some more advanced tech help to embrace it, for anyone who's limited to a single-company, in-house expertise pool. You want to hire or buy someone or something who's worked with an API for integration before now.
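
As a small taste of what that integration work looks like, here's a hedged sketch using the third-party simple_salesforce Python package -- one of several API clients, not the only route -- with placeholder credentials and standard-object field names:

    # pip install simple_salesforce; the credentials below are placeholders.
    from simple_salesforce import Salesforce

    sf = Salesforce(username="user@example.com",
                    password="password",
                    security_token="token")

    # A SOQL query against the standard Account object;
    # query_all pages through the full result set.
    result = sf.query_all("SELECT Id, Name, BillingCity FROM Account")
    for record in result["records"]:
        print(record["Id"], record["Name"], record["BillingCity"])

From there, the extracted records can be transformed and loaded into Oracle, SQL Server, Eloquence or DB2 -- the no-more-rekeying step.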

"How do you get stuff in and out of Salesforce? It not something unto itself," says Birket Foster. "It's a customer relationship management system. It's nice to have customer data in Salesforce, but you want to get it into your operational systems later."  

You want to get the latest information out of Salesforce, he adds, and nobody wants to re-key it. "That started in 1989," Foster says, "when we tried to help people from re-keying spreadsheets." For example, a small business data capture company, one that helps other small businesses get through the process, needs a way to get the Salesforce data into its application. Even if that other app is based in the cloud, it needs Salesforce data.

Silos are great for storing grains, but a terrible means to share them. The metaphor gets a little wiggly when you imagine a 7-grain bread being baked -- that'd be your OE or Shipping system, with data blended alongside Salesforce's grains of information. The HP 3000 once had several bakery customers which mixed grains -- Lewis Bakeries (migrated using the AMXW software), or Twinkie-maker Continental/Interstate Brands. They operated their mission-critical 3000s too long ago to imagine cloud computing, though.

Clouds deliver convenience, reliability, flexibility. Data migration chutes -- to pull the metaphor to its limit -- keep information floating, to prevent cloud silos from rising up.

Posted by Ron Seybold at 09:57 PM in Migration | Permalink | Comments (0)

June 18, 2014

The Long and Short of Copying Tape

Is there a way in MPE to copy a tape from one drive to another drive?

Stan Sieler, co-founder of Allegro Consultants, gives both long and short answers to this fundamental question. (Turns out one of the answers is to look to Allegro for its TapeDisk product, which includes a program called TapeTape.)

Short answer: It’s easy to copy a tape, for free, if you don’t care about accuracy/completeness.

Longer answer: There are two “gotchas” in copying tapes ... on any platform.

Gotcha #1: Long tape records

You have to tell a tape drive how long a record you wish to read. If the record is larger, you will silently lose the extra data.

Thus, for any computer platform, one always wants to ask for at least one byte more than the expected maximum record — and if you get that extra byte, strongly warn the user that they may be losing data.  (The application should then have the internal buffer increased, and the attempted read size increased, and the copy tried again.)

One factor complicates this on MPE: the file system limits the size of a tape record you can read.  STORE, on the other hand, generally bypasses the file system when writing to tape and it is willing to write larger records (particularly if you specify the MAXTAPEBUF option).

In short, STORE is capable of writing tapes with records too long to read via file system access. The free programs such as TAPECOPY use the file system; thus, there are tapes they cannot correctly copy.
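
Sieler's read-one-extra-byte rule looks like this in practice. A minimal sketch in Python against a Unix-style no-rewind tape device (a Linux /dev/nst0, say, rather than MPE's file system; the device path and maximum record size are assumptions):

    import os

    MAX_EXPECTED = 32768  # largest record the application expects, in bytes

    def read_record(fd):
        # Ask for one byte more than the expected maximum, so an oversized
        # record is detected rather than silently truncated.
        data = os.read(fd, MAX_EXPECTED + 1)
        if len(data) == MAX_EXPECTED + 1:
            raise RuntimeError("record longer than expected; grow the buffer and retry")
        return data

    fd = os.open("/dev/nst0", os.O_RDONLY)
    while (rec := read_record(fd)):
        pass  # copy each record to the destination drive here
    os.close(fd)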

Gotcha #2: Setmarks on DDS tapes

Some software creates DDS tapes and writes “setmarks” (think of them as super-EOFs). Normal file system access on the 3000 will not see setmarks, nor be able to write them.

Our TapeDisk product for MPE/iX (which includes TapeTape) solves both of the above problems. As far as I know, it’s the only program that can safely and correctly copy arbitrary tapes on an HP 3000.

Posted by Ron Seybold at 04:53 PM in Hidden Value, Migration | Permalink | Comments (0)

June 17, 2014

How a Fan Can Become a Migration Tool

We heard this story today in your community, but we'll withhold the names to protect the innocent. A Series 948 server had a problem, one that was keeping it offline. It was a hardware problem, one on a server that was providing archival lookups. The MPE application had been migrated to a Windows app five years ago. But those archives, well, they often just seem to be easier to look up from the original 3000 system.

There might be some good reasons to keep an archival 3000 running. Regulatory issues come to mind first. Auditors might need original equipment paired with historic data. There could be budget issues, but we'll get to that in a moment.

The problem with that Series 948: it was overheating. And since it was a server with more than 17 years of service, repairing it required a hardware veteran. Plus parts. All of which is available -- but "feet on the street" in the server's location can be a challenge. (At this point a handful of service providers are wondering where this prospective repair site might be. The enterprising ones will call.)

But remember this is an archival 3000. Budget, hah. This would be the time to find a fan to point at that overheating 17-year-old system. That could be the first step in a data migration, low-tech as it might seem.

From the moment the fan makes it possible to boot up, this could be the time to get that archival data off the 3000. Especially since the site's already got a replacement app on another piece of newer hardware, up and running. There's a server there, waiting to get a little more use.

Moving data off an archival server is one of the very last steps in decommissioning. If you've got a packaged application, there are experts in your app out there -- all the big ones, like Ecometry, MANMAN, Amisys -- that can help export that data for you. And you might get lucky and find that's a very modest budget item. You can also seek out data migration expertise, another good route.

But putting more money into a replacement Hewlett-Packard-branded 3000 this year might be a little too conservative. It depends on how old the 3000 system is, and what the hardware problem would be. If not a fan, then maybe a vacuum cleaner or shop vac could lower the temperature of the server, with a good clean-out. Funk inside the cabinet is common, we've seen.

Overheating old equipment could be a trigger to get the last set of archives into a SQL Server database, for example, one designated only for that. Heading to a more modern piece of hardware might have led you into another kind of migration, towards the emulator, sure. But if your mission-critical app is already migrated, the fan and SQL Server -- plus testing the migrated data, of course -- might be the gateway to an MPE-free operation, including your archives.

Posted by Ron Seybold at 06:08 PM in Migration, User Reports | Permalink | Comments (0)

June 13, 2014

User group's mansion meet sets deadline

June 15 is the first "secure your spot" registration date

PowerHouse customers, many of whom are still using their HP 3000 servers like those at Boeing, have been invited to the PickFair mansion in Hollywood for the first PowerHouse user conference. The all-day Friday meeting is June 27, but a deadline to ensure a reserved space passes at the end of June 15.

That's a Sunday, and Father's Day at that, so the PowerHouse patriarchy is likely to be understanding about getting a reservation in on June 16. Russ Guzzo, the marketing and PR powerhouse at new owner Unicom Global, said the company's been delighted at the response from customers who've been called and gathered into the community.

"I think it makes a statement that we're in it for the long haul," Guzzo said of gathering the customers, "and that the product's no longer sitting on the shelf and collecting dust. Let's talk." 

We're taking on a responsibility, because we know there are some very large companies out there that have built their existence around this technology. It's an absolute pleasure to be calling on the PowerHouse customers. Even the inactive ones. Why? Because they love the technology, and I've heard, "Geez, I got a phone call?"

Register at unicomglobal.com/PowerHouseCAB -- that's shorthand for Customer Advisory Board. It's a $500 ticket, or multiple registrations at $395 each, with breakfast and lunch included. More details, including a handsome flyer for justifying a one-day trip, at the event's webpage.

Posted by Ron Seybold at 11:34 PM in Homesteading, Migration, Newsmakers | Permalink | Comments (0)

June 11, 2014

HP to spin its R&D future with The Machine

Calling it a mission HP must accomplish because it has no other choice, HP Labs director Martin Fink is announcing a new computer architecture Hewlett-Packard will release within two years or bust. Fink, who was chief of the company's Business Critical Systems unit before being handed the Labs job in 2012, is devoting 75 percent of HP's Labs resources to creating a computer architecture, the first since the company built the Itanium chip family design with Intel during the 1990s.

A BusinessWeek article by Ashlee Vance says the product will utilize HP breakthroughs in memory (memristors) and a process to pass data using light, rather than the nanoscopic copper traces employed in today's chips. Fink came to CEO Meg Whitman with the idea, then convinced her to increase his budget.

Fink and his colleagues decided to pitch Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. "People in Labs see this as a once-in-a-lifetime opportunity."

Fast, cheap, persistent memory is at the heart of what HP hopes to change about computing. In the effort to build The Machine, however, the vendor harks back to days when computer makers created their own technology in R&D organizations as a competitive advantage. Commodity engineering can't cross the Big Data gap created by the Internet of Things, HP said at Discover today. The first RISC designs for HP computers, launched in a project called Spectrum, were the last such creation that touched HP's MPE servers.

Itanium never made it to MPE capability. Or perhaps put another way, the environment that drives the 3000-using business never got the renovation it deserved, one that would have let it use the Intel-HP created architecture. Since The Machine is coming from HP's Labs, it's likely to have little to do with MPE, an OS the vendor walked away from in 2010. The Machine might have an impact on migration targets, but HP wants to change the way computing is considered, away from OS-based strategies. But even that dream is tempered by the reality that The Machine is going to need operating systems -- ones that HP is building.

OS compatibility was one reason that Itanium project didn't pan out the way HP and Intel hoped, of course. By the end of the last decade, Itanium had carved out a place as a specialized product for HP's own environments, as well as an architecture subsidized by Fink's plans to pay Intel to keep developing it. The Machine seems to be reaching for the same kind of "change the world's computing" impact that HP and Intel dreamed about with what it once called the Tahoe project. In a 74-year timeline of HP innovation alongside the BusinessWeek article, those dreams have been revised toward reality.

PA-RISC is noted in a spiraling timeline of HP inventions that is chock-a-block with calculator and printing progress. The HP 2116 predecessor to the HP 3000 gets a visual in 1969, and Itanium chips are chronicled as a 2001 development.

The Machine, should it ever come to the HP product line, would arrive in three to six years, according to the BusinessWeek interview, and Fink isn't being specific about delivery. But with the same chutzpah he displayed in running Business Critical Systems into critical headwinds of sales and customer retention, he believes HP is the best place for tech talent to try to remake computing architecture.

According to the article, three operating systems are in design to use the architecture: one an HP-proprietary design released as open source, another a variant of Linux, and a third based on Android for mobile dreams. That's the same number of OS versions HP supported for its first lines of computers -- RTE for real time, MPE for business, and HP-UX for engineering, and later business. OS design, once an HP staple, needs to reach much higher to meet the potential for new memory -- in the same way that MPE XL made use of innovative memory in PA-RISC.

Fink says these projects have burnished HP’s reputation among engineers and helped its recruiting. “If you want to really rethink computing architecture, we’re the only game in town now,” he said. Greg Papadopoulos, a partner at the venture capital firm New Enterprise Associates, warns that the OS development alone will be a massive effort. “Operating systems have not been taught what to do with all of this memory, and HP will have to get very creative,” he says. “Things like the chips from Intel just never anticipated this.” 

Posted by Ron Seybold at 06:24 PM in Migration, News Outta HP | Permalink | Comments (0)

June 10, 2014

Security patches afloat for UX, for a price

If an IT manager had the same budget for patches they employed while administering an HP 3000, today they'd have no patches at all for HP's Unix replacement system. That became even more plain when the latest Distributed Denial of Service (DDoS) alert showed up in my email. You never needed a budget to apply patches while HP 3000s were for sale from the vendor. Now HP's current policy will have an impact on the value of used systems -- if they're Unix-based, or Windows ProLiant replacements for a 3000. Any system's going to require a support contract for patches.

For more than 15 years, HP's been able to notify customers when any security breach puts its enterprise products at risk. For more than five years, one DDoS exploit after another has triggered these emails. But over the past year, Hewlett-Packard has insisted that a security hole is a good reason to pay for a support contract with the vendor.

The HP 3000 manager has better luck in this regard than HP's Unix system owners. Patches for the MPE/iX environment, even in their state of advancing age, are distributed without charge. A manager needs to call HP and be deliberate to get a patch. The magic incantation when dealing with the Response Center folks is to use transfer code 798. That’ll get you to an MPE person. And there's not an easy way for an independent support company to help in the distribution, either. HP insisted on that during a legal action last spring.

In that matter, a support company -- one that is deep enough to be hiring experts away from HP's support team -- was sued for illegal distribution of HP server patches. HP charged copyright infringement because the service company had downloaded patches -- and HP claimed those patches were redistributed to the company's clients. 

The patch policy is something to budget for while planning a migration. Some HP 3000 managers haven't had an HP support contract since the turn of this century. Moving to HP-UX will demand one, even if a more-competent indie firm is available to service HP-UX or even Windows on a ProLiant system. See, even the firmware patches aren't free anymore. Windows security patches continue to be free -- that is, they don't require a separate contract. Not even for Windows XP, although that environment has been obsoleted by Microsoft.

HP said the lawsuit was resolved when the support company agreed to suspend the practices alleged in the suit. 

HP, like Oracle (owner of Sun) and other OS manufacturers, has chosen to restrict updates, patches, and now firmware to only those customers that have a current support agreement. Indie support companies can recommend patches; in fact, they're a great resource for figuring out which patch will fix problems without breaking much else. But customers are required to have their own support agreement in order to download and install such patches and updates.

Even following the links in the latest HP emails landed me in a "you don't have a support agreement to read this" message, rather than the update about DDoS exposure. It's more than the patches for migration platforms that HP's walled away from the customer base. Now even the basic details of what's at risk are behind support paywalls.

The extra cost is likely to be felt most at the low-to-midrange end of the user community. Dell's not getting caught up in what HP calls an industry trend to charge for repairing malformed software or OS installations that get put at risk. Dell offers unrestricted access to BIOS and software updates for its entire server, storage, and networking line.

Posted by Ron Seybold at 06:29 PM in Migration, News Outta HP | Permalink | Comments (0)

June 04, 2014

Don't wait until a migrate to clean up

Not long ago, the District Court in Topeka, the capital of Kansas, made a motion to turn off its HP 3000s. During the report on that affair -- one that took the court system offline for a week -- the IT managers explained that part of the migration process would include cleaning up the data being moved off an HP 3000.

This data conversion is one of the most important attributes of this project and is carefully being implemented by continuously and repeatedly checking thousands of data elements to ensure that all data converted is “clean” data which is essential to all users. When we finally “go live,” we would sincerely appreciate your careful review of data as you use the system.

Not exactly a great plan, checking on data integrity so late in the 3000's lifecycle, said ScreenJet's Alan Yeo. The vendor, which supplies tools and services for migrations, criticized the court's strategy statement that "we either move on to another system or we go back to paper and pen."

Fisker"Interesting, that pen and paper comment," Yeo said. "It has the ring of someone saying that we have an old car that's running reliably, but because it might break down at some time, the only options are to go back to walking or buy a Fisker." The Fisker, for those who might not know, was a car developed in 2008 as one of the very first plug-in hybrid models. About 2,000 were built before the company went bankrupt. Moving to any new technology, on wheels or online, should be an improvement over what's in place -- not an alternative to ancient practices.

"Oh, and what's all this crap about having to clean the data?" Yeo added. "That's like saying I'll only bother cleaning the house that I live in when I move. Yes, sure you don't want to move crap in a migration. But you probably should have been doing some housekeeping whilst you lived in the place. Blaming the house when you got it dirty doesn't really wash!"

Posted by Ron Seybold at 09:58 PM in Migration | Permalink | Comments (0)

May 27, 2014

Does cleaning out HP desks lift its futures?

Migration sites in the 3000 community have a stake in the fortunes of Hewlett-Packard. We're not just talking about the companies that already have made their transition away from MPE and the 3000. The customers who know they're not going to end this decade with a 3000 are watching the vendor's transformation this year, and over the next, too.

It's a period when a company that got bloated to more than 340,000 employees will see its workforce cut to below 300,000 when all of the desks are cleaned out. The HP CEO noted that the vendor has been through massive change in the period while HP was cleaning out its HP 3000 desks. During the last decade, Meg Whitman pointed out last week, Compaq, EDS, Autonomy, Mercury Interactive, Palm -- all became Hewlett-Packard properties. Whitman isn't divesting these companies, but the company will be shucking off 50 percent more jobs than first planned.

Some rewards arrived in the confidence of the shareholders since the announcement of 16,000 extra layoffs. HP stock is now trading at a 52-week high. It's actually priced at about the same value as the days after Mark Hurd was served walking papers in 2010. Whitman's had to do yeoman work in cost-cutting to keep the balance sheet from bleeding, because there's been no measurable sales growth since all 3000 operations ceased. It's a coincidence, yes, but that's also a marker the 3000 customer can recall easily.

When you're cutting out 50,000 jobs -- the grand total HP will lay off by the end of fiscal 2015 in October of next year -- there's no assured way of retaining key talent. Whitman said during the analyst conference call that everybody in HP has the same experience during these cuts. "Everyone understands the turnaround we're in," she said, "and everyone understands the market realities. I don't think anyone likes this."

These are professionals working for one of the largest computer companies in the world. They know how to keep their heads down in the trenches. But if you're in a position to make a change in your career, a shift away from a company like HP that's producing black ink on its ledger through cuts, you want to engage in work you like -- by moving toward security. In the near term, HP shareholders are betting that security will be attained by the prospect of a $128 billion company becoming nimble, as Whitman vowed last week.

In truth, becoming nimble isn't going to be as important to an HP enterprise customer as becoming innovative. Analysts are identifying cloud computing as the next frontier, one that's already got profitable outposts and the kind of big-name users HP's always counted in its corral. During an interview with NPR on the day after the job cuts rolled out, Michael Regan of Bloomberg News pointed out that most of HP's businesses have either slipped, like printers and PCs, or are under fire.

Servers are under a really big threat from cloud computing. HP formerly, you know, their business was to sell you the server so that you can store all your data yourself and have customers access the data right off of your server from the Internet.

The big shift over the last few years has been to put it on a cloud, where basically companies are renting space on a server, and consumers a lot of times aren't even buying any Web applications. They're renting them over the cloud, too. All three of [HP's] main business lines are really under a lot of competition from tablets and cloud computing.

This isn't good news for any customer whose IT career has been built around server management and application development and maintenance. Something will be replacing those in-house servers at any company that will permit change to overturn its technology strategy.

Cloud computing is a likely bet to replace traditional server architectures at companies using the HP gear. But it's a gamble right now to believe that HP's strength in traditional computing will translate to any dominance in cloud alternatives. IBM and Amazon and Google are farther in front on these offerings. That's especially true for the small to midsize company where an HP 3000 is likely to remain working this year.

During the NPR interview, Regan took note of the good work that's come from Whitman's command of the listing HP ship. But the stock price recovery is actually behind both the Standard & Poor's average and the average for technology firms during Whitman's tenure. She's floating desks out the door, but that probably won't be enough to float the growth trend line upward. When extra cuts are needed to keep all those shareholders happy, one drooping branch could be the non-industry-standard server business.

Any deeper investment in any HP strategy that relies on catching up with non-standard technology should float away from procurement desks for now.

Posted by Ron Seybold at 11:01 PM in Migration, News Outta HP | Permalink | Comments (0)

May 22, 2014

HP's migration servers stand ground in Q2

The decline of HP's 3000 replacement products has halted

CEO Meg Whitman's 10th quarterly report today promised "HP's turnaround remains on track." So long as that turnaround simply must maintain sales levels, she's talking truth to investors. During a one-hour conference call, the vendor reported that its company-wide earnings before taxes had actually climbed by $240 million versus last year's second quarter. The Q2 2014 numbers also show that the quarter-to-quarter bleeding of the Business Critical Systems products has stopped.

But despite that numerical proof, Whitman and HP have already categorized BCS, home of the Linux and HP-UX replacement systems for 3000, as a shrinking business. The $230 million in Q2 sales from BCS represent "an expected decline." And with that, the CEO added that Hewlett-Packard believes its strategy for enterprise servers "has put this business on the right path."

The increased overall earnings for the quarter can be traced to a robust period for HP printers and PCs. Enterprise businesses -- the support and systems groups that engage with current or former 3000 users -- saw profits drop by more than 10 percent. HP BCS sales also fell, by 14 percent versus last year's Q2. But for the first time in years, the numbers hadn't dropped below the previous quarter's report.

The decline of enterprise server profits and sales isn't a new aspect of the HP picture. But the vendor also announced a new round of an extra 10,000-15,000 job eliminations. "We have to make HP a more nimble company," Whitman said. CFO Cathie Lesjak added that competing requires "lean organizations with a focus on strong performance management." The company started cutting jobs in 2012, and what it calls restructuring will eliminate up to 50,000 jobs before it's over in 2015.

Enterprise business remains at the heart of Hewlett-Packard's plans. It's true enough that the vendor noted the Enterprise Systems Group "revenue was lower than expected" even before the announcement of $27.3 billion overall Q2 revenues. The ESG disappointments appeared to be used to explain stalled HP sales growth.

But those stalled results are remarkable when considered against what Whitman inherited more than two years ago. Within a year, HP bottomed out its stock price at under $12 a share. It was fighting with an acquired Autonomy about how much the purchased company was worth, and was shucking off a purchase of Palm that would have put the vendor into the mobile systems derby.

If nothing else, Whitman's tenure as CEO -- now already half as long as Mark Hurd's -- contains none of the hubris and allegations of the Hurd mentality. After 32 months on the job, Whitman has faced what analysts are starting to call the glass cliff -- a desperate job leading a company working its way back from the brink, offered to a woman.

As the conference call opened on May 22, HP's stock was trading at close to three times its value during that darkest month of November, 2012. At $31 a share valuation, HPQ is still paying a dividend to shareholders. Meanwhile, the company said it has "a bias toward share repurchases" planned for the quarters to come.

There's still plenty of profit at HP. But the profits for the Enterprise Group, which includes blades and everything that runs an alternative to MPE, have been on a steady decline. A year ago before taxes they were $1.07 billion, last quarter they were $1 billion, and this quarter they're $961 million. Sales are tracking on the same trajectory.

Whitman noted the tough current market for selling HP's business servers. She also expressed faith in HP's system offerings. It's just that the vendor will have to offer them with fewer employees.

"I really like our product lineup. But we need to run this company more efficiently," she said. "We're going to have to be quicker and faster to compete in this new world order."

When an analyst asked Whitman about morale in the face of job cuts, she said people at HP understand the economic climate.

"No company likes to decrease the workforce," she said. "Our employees live with it every single day. Everyone understands the turnaround we're in, everyone understands the market realities. I don't think anyone likes this." HP believes the extra job cuts will free up an additional $1 billion a year, "and some of that will be reinvested back into the business."

There's also money being spent in R&D. At first during the Q&A session, CFO Lesjak said that "the increase of R&D year over year is very broad-based" across many product lines. Whitman immediately added that there have been increases for R&D in HP's server lines. The servers which HP is able to sell are "mission-critical x86" systems. That represents another report that the Integrity-based lineup continues to decline. BCS overall represents just 3 percent of all Enterprise Systems sales in this quarter.

HP's internal enterprise systems -- which were once managed by HP 3000s -- are in the process of a new round of replacements. SAP replaced internal systems at HP last decade. Whitman said the churn that started in 2001 with the Compaq purchase has put the vendor through significant changes, ones that HP must manage better.

"This company has been through a lot," Whitman said during analyst questioning. "The acquisition of Compaq. The acquisition of EDS. Eleven to 20 software acquisitions. It's a lot of change. We're putting in new ERP programs and technology to automate processes that frankly, have not been done in awhile."

Posted by Ron Seybold at 07:16 PM in Migration, News Outta HP | Permalink | Comments (0)

May 21, 2014

Ops check: does a replacement application do the same caliber of power fail recovery?

Migrating away from an HP 3000 application means leaving behind some things you can replace. One example is robust scheduling and job management. You can get that under Windows, if your target application will run on that Microsoft OS. It's extra, but worth it, especially if the app you need to replace generates a great many jobs. We've heard of one that used 14,000.

A migrating site will also want to be sure about error recovery in case of a system failure. What's a given in the 3000 world sets the bottom-rung bar to check on a new platform. This might not be an issue that app users care about -- until a brown-out takes down a server that doesn't have robust recovery. One HP 3000 system manager summed up the operations he needs to replace on HP's 3000 application server.

We're looking at recovery aspects if power is lost, or those that kick in whenever MPE crashes. On the 3000's critical applications, we can use DBCONTROL or FCONTROL to complete the I/O.  Another option would be to store down the datasets before the batch process takes place.

A couple of decades ago, this was a feature where the 3000's IMAGE database stood out in a startling, visual way. A database shootout in New Jersey pitted IMAGE and MPE against Unix and Oracle, or second-level entries such as Sybase or Informix. A tug on the power plug of the 3000 while it was processing data left the server in a no-data-loss state, when it could be rebooted. Not so much, way back then, for what we'd call today's replacement system databases.

Eloquence, the IMAGE workalike database, emulates this rock-solid recovery for any Windows or Linux applications that use that Marxmeier product. Whatever the replacement application will be for a mission-critical 3000 system, it needs to rely on the same caliber of crash or powerfail recovery. This isn't an obvious question to ask during the feature comparison phase of migration planning. But such recovery is not automatic on every platform that will take over for MPE.

Sometimes there are powerfail tools available for replacement application hosts, system-wide tools to aid in database recovery -- but ones that managers don't employ because of costs to performance. For example, a barrier is a system feature common in the Linux world. A barrier protects the state of the filesystem journal. Here's a bit of discussion from the Stack Exchange forum, where plenty of Linux admins seek solutions.

It is possible that the write to the journal on the disk is delayed, because it's more efficient from the head position currently to write in a different order to the one the operating system requested as the actual order -- meaning blocks can be committed before the journal is.

The way to resolve this is to make the operating system explicitly wait for the journal to have been committed before committing any more writes. This is known as a barrier. Most filesystems do not use this by default and would explicitly need enabling with a mount option.

mount -o barrier=1 /dev/sda /mntpnt

The big downside to barriers is they have a tendency to slow IO down, sometimes dramatically (around 30 percent) which is why they aren't enabled by default.
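
The mount shown above lasts only until the next reboot. To make the option permanent, it belongs in the filesystem table. A minimal sketch, with a hypothetical device and mount point, and a caveat: some distributions and newer kernels already turn barriers on by default for ext4, so check before doubling up.

# /etc/fstab entry with write barriers explicitly enabled
/dev/sda1  /data  ext4  defaults,barrier=1  0 2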

In the 3000 world, logging has been used as a similar recovery feature, focused on recovering IMAGE data. A long-running debate included concerns about whether logging penalized application performance. We've run a logging article written by Robelle's Bob Green that's worth a look.

Peering under the covers of any replacement application, to see the means to recover its data, is a best practice. Even if a manager doesn't have deep knowledge of the target environment, this peering is the kind of thing the typical experienced 3000 manager will embrace without question. Then they'll ask the powerfail recovery question.
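
The question can even be asked with a crude homage to that New Jersey plug-pull. Here's a minimal sketch, assuming a Linux host and GNU dd; the paths are hypothetical. It writes numbered records, forcing each onto disk, until somebody cuts the power.

#!/bin/sh
# write numbered records until the plug is pulled; conv=fsync pushes
# each record onto the disk before the loop continues
i=0
while : ; do
  i=$((i + 1))
  printf '%d\n' "$i" | dd of=/data/crashtest.log oflag=append conv=notrunc,fsync 2>/dev/null
done

After the reboot, every surviving line should equal its own line number; a one-liner such as awk '$1 != NR { print "lost record at line " NR; exit 1 }' /data/crashtest.log flags any acknowledged record that went missing. A database deserves a transactional version of the same trick, but even this file-level check shows whether a platform honors a sync request through a failure.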

Posted by Ron Seybold at 07:53 PM in Migration | Permalink | Comments (0)

May 16, 2014

Unicom returns PowerHouse expert to fold

Bob Deskin fielded questions about the PowerHouse products for more than a decade on the PowerHouse-L mailing list. When an answer from the vendor -- for many of those years, Cognos -- was required, Deskin did the answering. He was not able to speak for IBM in a formal capacity about the software. But he defined the scope of product performance, as well as soothed the concerns of a customer base when it felt abandoned.

After retiring from IBM's PowerHouse ADT unit last year, Deskin's back in the field where he's best known. The new owners of the PowerHouse tools, Unicom Global, added him to the team in a consultant's capacity.

As part of UNICOM's commitment to the PowerHouse suite of products, I have been brought on board as a consultant to work with the UNICOM PowerHouse team to enhance the support and product direction efforts.

For anyone not familiar with my background, I started in this business in the early '70s as a programmer and systems analyst. I joined Cognos (then Quasar) in 1981 after evaluating QUIZ and beta testing QUICK for a large multinational. Over the years, I’ve been in customer support, technical liaison, quality control, education, documentation, and various advisory roles. For the past 12 years, until my retirement from IBM in 2013, I was the Product Manager for PowerHouse, PowerHouse Web, and Axiant.

New owners of classic MPE tools, like PowerHouse, are not always so savvy about keeping the tribal knowledge in place. In Deskin's case, he's been retained to know about the OpenVMS product owners' issues, as well as those from the 3000 community. There's another HP platform that's still available for PowerHouse customers, too.

The story behind the story of new PowerHouse ownership is a plan for enhancing the products, even as a site's sideways migration to a supported HP platform might be underway. In order to help retain a customer in the proprietary PowerHouse community, the new owners know they need to improve the products' capabilities.

In one example of such a sideways migration -- where PowerHouse remains the constant platform element while everything else changes -- Paul Stennett of house builder Wainhomes in the UK reported that an MPE-to-UX move began with a change in database. The target was not Oracle, either.

We actually migrated to Eloquence rather than Oracle, which meant the data conversion was pretty simple -- Eloquence emulates IMAGE on the HP 3000. The only issue was KSAM files which we couldn't migrate. However, Eloquence has a better solution, and allows third party indexes, and therefore generic key retrieval of data. For instance, surname = "Smi@" etc... Testing was around 3 months.

Our HP 3000 applications go back over 20 years and have been continually developed over time. I have had experience with other [replacement] packages for the housebuilding industry, in particular COINS. However, with the mission to keep the re-training and disruption to the business to a minimum, the migration option was the best route for us.

I completely agree that you can gain major benefits from replacing a system [completely instead of migrating it.] I guess it depends on what type of business you are in. If you are an online retailer, for example, then technology can save costs and improve efficiency. As they say, it's horses for courses.

Posted by Ron Seybold at 03:39 PM in Migration | Permalink | Comments (0)

May 09, 2014

HP bets "Hey! You! Get onto your cloud!"

Hewlett-Packard announced that it will spend $1 billion over the next two years to help its customers build private cloud computing. Private clouds will need security, and they'll begin to behave more like the HP 3000 world everybody knows: management of internal resources. The difference will reside in a standard open source stack, OpenStack. It's not aimed at midsize or smaller firms. But aiding OpenStack might help open some minds about why clouds can be simple to build, as well as feature-rich.

This is an idea that still needs to lift off. Among the 3000 managers we interview, there are few who've been in computing since the 1980s who are inclined to think of clouds much differently than time-sharing, or apps over the Internet. Clouds are still things in Rolling Stones or Judy Collins choruses.

The 3000 community that's moving still isn't embracing any ideal of running clouds in a serious way. One vendor who's teeing up cloud computing as the next big hit is Kenandy. That's the company built around the IT experience and expertise of the creators of MANMAN. They've called their software social ERP, in part because it embraces the information exchange that happens on that social network level.

But from the viewpoint of Terry Floyd, founder of the manufacturing services firm The Support Group, Kenandy's still waiting for somebody from the 3000 world to hit that teed-up ball. Kenandy was on hand at the Computer History Museum for the last HP3000 Reunion. That gathering of companies now looks like the wrong size of club for hitting Kenandy's teed-up cloud ERP ball.

"Since we saw them at the Computer History Museum meeting, Kenandy seems to have has re-focused on large Fortune 1000 companies," Floyd said. There are scores of HP 3000 sites running MANMAN. But very few are measuring up as F1000 enterprises. Kenandy looks like it believes the typical 3000 site is not big enough to benefit from riding a cloud. There are many migrated companies who'd fit into that Fortune 1000 field. But then, they've already chosen their replacements.

The Kenandy solution relies on the force.com private cloud, operated by Salesforce.com. Smaller companies, the size of 3000 customers, use Salesforce. The vendor's got a force.com cloud for apps beyond CRM. But the magnitude of the commitment to Kenandy seems larger than the size of the remaining 3000 sites which manufacture using Infor's MANMAN app.

"Most MANMAN sites don't meet their size requirements," Floyd said. "I have a site that wants to consider Kenandy next year, but so far Kenandy is not very interested. We'll see if they are serious when the project kicks off next year, because we think Kenandy is a good fit for them."

The longer that small companies wait out such cloud developments as HP's $500 million per year, the better the value becomes for getting onto their cloud, migrating datacenter ops outside company walls. HP is investing to convince companies to build their own private clouds, instead of renting software from firms like Kenandy and Salesforce. Floyd and his company have said there's good value in switching to cloud-based ERP for some customers. Customization of the app becomes the most expensive issue.

This is the central decision in migrating to cloud-based ERP from a 3000. It's more important than how much the hardware to support the cloud will cost. HP's teaming up with Foxconn -- insert snarky joke here -- to drive down the expense of putting up cloud-optimized servers. But that venture is aimed at telecommunications companies and Internet service providers. When Comcast and Verizon, or Orange in Europe, are your targets, you know there's a size requirement.

You might think of the requirements for this sort of cloud -- something a customer would need to devote intense administrative resources to -- as that sign at the front of the best amusement park rides. "You must be Fortune 1000 tall to ride this ride," it might say. Maybe, over the period of HP's new cloud push, the number on the sign will get smaller.

Posted by Ron Seybold at 10:04 AM in Migration, News Outta HP | Permalink | Comments (0)

May 06, 2014

PowerHouse users study migration flights

A sometimes surprising group of companies continue to use software from the PowerHouse fourth generation language lineup on their HP 3000s. At Boeing, for example -- a manufacturer whose Boeing 737 assembly line pushes out one aircraft's airframe every day -- the products are essential to one mission-critical application. Upgrade fees for PowerHouse became a crucial element in deciding whether to homestead on the CHARON emulator last year.

PowerHouse products have a stickiness to them that can surprise, here in 2014, because of the age of the underlying concept. But they're ingrained in IT operations to a degree that can make them linchpins. In a LinkedIn Group devoted to managing PowerHouse products, the topic of making a new era for 4GL has been discussed for the past week. Paul Stennett, a group systems manager with UK-based housebuilder Wainhomes, said that his company's transition to an HP-UX version of PowerHouse has worked more seamlessly -- so far -- than the prospect of replacing the PowerHouse MPE application with a package. 

"The main driver was not to disrupt the business, which at the end of the day pays for IT," Stennett said. "It did take around 18 months to complete, but was implemented over a weekend. So the users logged off on Friday on the old system, and logged onto the new system on Monday. From an application point of view all the screens, reports and processes were the same."

This is the lift-and-shift migration strategy, taken to a new level because the proprietary language driving these applications has not changed. Business processes -- which will get reviewed in any thorough migration to see if they're still needed -- have the highest level of pain to change. Sometimes companies conclude that the enhancements derived from a replacement package are more than offset by required changes to business processes.

Enter the version of PowerHouse that runs on HP's supported Unix environment. It was a realistic choice for Stennett's company because the 4GL has a new owner this year in Unicom.

"With the acquisition of PowerHouse by UNICOM, and their commitment to developing the product and therefore continued support," Stennett posted on the LinkedIn group, "is it better to migrate PowerHouse onto a supported platform (from HP 3000 to HP-UX) rather than go for a complete re-write in Java, with all of its risks. To the user was seamless, other than they have to use a different command to logon. The impact to the businesses day to day running was zero."

The discussion began with requests for information on porting PowerHouse apps to Java. The 4GL was created with a different goal in mind than Java's ideal of "write once, run anywhere." Productivity was the lure for users who moved to 4GLs such as PowerHouse, Speedware, and variants such as Protos and Transact. All but Protos now have support for other platforms.

And HP's venerated Transact -- which once ran the US Navy's Mark 85 torpedo facility at Keyport, Wash. -- can be replaced by ScreenJet's TransAction and then implemented on MPE. ScreenJet, which partnered with Transact's creator David Dummer to build this replacement, added that an MPE/iX TransAction implementation would work as a testing step toward an ultimate migration to other environments.

Bob Deskin, a former PowerHouse support manager who retired from IBM last year, sketched out why the fourth generation language is preserving so many in-house applications -- sometimes on platforms where the vendor has moved on, or set an exit date as with HP's OpenVMS.

Application systems, like many things, have inertia. They tend to obey Newton's first law. A body at rest tends to remain at rest unless acted upon by an outside force. The need for change was that outside force. When an application requires major change, the decision must be made to do extensive modifications to the existing system, to write a new system, or to buy a package. During the '90s, the answer was often to buy a package. 

But packages are expensive so companies are looking at leveraging what they have. If they feel that the current 4GL application can't give them what they need, but the internal logic is still viable, they look for migration or conversion tools. Rather than completely re-write, it may be easier to convert and add on now that Java and C++ programmers are readily available.

Deskin added, as part of his opinion of what happened to 4GLs, that they were never ubiquitous -- not even in an environment like the HP 3000's, where development in mainstream languages might have taken 10 times longer during the 1970s.

There weren't enough programmers to meet the demand. Along came 4GLs and their supposed promise of development without programmers. We know that didn't work out. But the idea of generating systems in 10 percent of the time appealed to many. If you needed 10 percent of the time, maybe you only needed 10 percent of the programmers. 

The 4GL heyday was the '80s. With computers being relatively inexpensive and demand for systems growing, something had to fill the void. Some programmers caught the 4GL bug, but most didn't. There was still more demand than supply, so studying mainstream languages almost guaranteed a job. 

Now even mainstream languages like COBOL and FORTRAN are out of vogue. COBOL was even declared extinct by one misinformed business podcast on NPR. The alternatives are, as one LinkedIn group member pointed out, often Microsoft's .NET or Oracle's Java. (Java wasn't considered a vendor's product until Oracle acquired it as part of its Sun pickup. These days Java is rarely discussed without mention of its owner, perhaps because the Oracle database is so ubiquitous in typical migration target environments.)

Migration away from a 4GL like PowerHouse -- to a complete revision with new front end, back-end databases, reporting and middleware -- can be costly, by one LinkedIn member's account. Krikor Gellekian, whose name surfaces frequently in the PowerHouse community, added that a company's competitive edge is the reward for the lengthy wade through the surf of 4GL departures.

"It is not simple, it takes time and is expensive, and the client should know that in advance," Gellekian wrote. "However, I always try to persuade my clients that IT modernization is not a single project; it is a process. And adopting it means staying competitive in their business."

Deskin approached the idea that 4GLs might be a concept as extinct as that podcast's summary of COBOL. 

Does this mean that the idea of a 4GL is dead? Absolutely not. The concept of specifying what you want to do rather than how to do it is still a modern concept. In effect, object-oriented languages [like Java] are attempting to do the same thing -- except they are trying to be all things to all people and work at a very low level. However, it takes more than a language these days to be successful. It also requires a modern interface. Here's hoping.

Posted by Ron Seybold at 11:32 AM in Migration, User Reports | Permalink | Comments (0)

May 02, 2014

Timing makes a difference to MPE futures

Coming to market with virtualized 3000s has been a lengthy road for Stromasys. How long is a matter of perspective. The view of an emulated 3000's lifespan can run from using it for just a few years to the foreseeable future. I heard about both ends of the emulator's continuum over the last few weeks.

In the Kern County Schools in Bakersfield, Calif., a 3000 manager said the timetable for his vendor's app migration is going to sideline any steps into using CHARON. Robert Canales, Business Information Systems Analyst in the Division of Administration and Finance, was an eager prospect for the software last May, when the company's Training Day unfolded out in the Bay Area. But the pace of migration demonstrated by his MPE software vendor, who's moving customers to Linux, showed his team that 3000 computing was not going to outlast the vendor's expected migration timetable.

Our main software vendor has since migrated several of their California K-12 education customers off of the 3000. We believe that our organization will be able to successfully migrate over to their Linux-based platform within the next 18-24 months. So from that perspective, we simply couldn't justify the financial investment, or the time for our very limited number of personnel, to focus on utilizing the CHARON solution for backup, testing or historical purposes.

The analysis at the district draws the conclusion that two more school years using available HP 3000 iron -- at most, while awaiting and then undertaking a migration -- will be a better use of manpower and budget than preserving MPE software. This is understandable when a commercial application drives IT. You follow your vendor's plan, or plan to replace something. Replacement could be the physical hardware alone, using an emulator because the vendor's leaving your MPE app behind. Or everything: your OS environment as well as applications. Getting two years of emulator use, or maybe a bit more, isn't enough to fit the Kern County Schools resources and budget.

On the other side of that timetable, we can point out a comment from the recent CAMUS user group conference call. It suggests people will want to do more than mimic their 3000 power. They'll want to trade up for a longer-term installation.

An MB Foster analyst noted that as hardware moves upward, from one level of emulation to a more powerful option, the changes might trigger application upgrading. That's a long schedule of use, if you consider that horsepower increases usually happened on 3- or 5-year timetables back when MPE ran only on 3000s. That mirrors a schedule that emulator vendors have reported as commonplace: several decades of lifespan.

Arnie Kwong clarified what he said on that call: that moving upward in the CHARON license lineup might be reason for a vendor -- like some in the 3000 world -- to ask for upgrading fees.

My understanding on CHARON is 1) If you change processor class (for example, from an 'A' license to an 'N' license) then you are likely to get 'upticks' from your third party vendors.  

2) If you change to 'more processors' (for example, from one 'A' license to more than one 'A' license so that you can run separate reporting machines or year-end processing or the like) then you have more licenses as you are running more processors.

This isn't a change for anything that has been in place -- it's just a clarification of ours, that we haven't heard of anyone who isn't doing this the same way as its always been done. Stromasys is vending the 'hardware' and the software suppliers are providing the 'code' as things have always been.

We don't know how likely such upticks will be in the community. 3000 shops use an array of third party vendors. Some vendors do charge for processor uplifts. Others do not, and the number of vendors who will do this has not been confirmed by the installed CHARON base. We heard a report that a PowerHouse user was facing a six-figure fee to emulate their 3000. We heard that report before PowerHouse ownership changed at the end of 2013.

But if you think about that kind of scenario for a bit, you come up with a company that's extending its MPE power while it emulates. That's an investment to cover more than a few years. Emulating customers, just like the vendors who are offering this virtualization, are often into their applications for a very long ride. Before Stromasys emerged as the survivor in the emulation derby, there was Strobe Data. Willard West at that vendor talked about a timetable of multiple decades for its HP 1000 and Digital emulation customer base.

"Our major competition has been the used hardware market," West said a decade ago. "We’ve out-survived that." At the time that we talked, Strobe was emulating Data General servers that were obsoleted 15 years earlier.

Emulation vendors know that time can be on their side if an application is customized and critical to a company. When time is on your side, the costs to revitalize an in-house application can be applied over enough years. Emulation mimics more than hardware platforms. It preserves IT business rules for returns on investment which have often been on MPE's side. MPE applications have outlasted their hardware and triggered upgrades. The clock on the ROI determines IT investments, just like it always has.

Posted by Ron Seybold at 08:14 PM in Homesteading, Migration | Permalink | Comments (0)

April 30, 2014

Kansas court rings down gavel on its 3000

The District Court in the capital of Kansas is switching off its HP 3000 this week, a process that's going to pull the district clerk's office completely out of service over the first two days of May. The Topeka court's IT department said the alternative to replacing the 3000 software would be going back to paper and pen. The project will knock all court computing offline -- both old and new systems -- for one work week.

"Anyone who needs to file or pick up documents should do so between 8 AM and noon on Thursday and Friday," the court advised Topeka-area citizens on its website. The Topeka courts have been using HP 3000s since the 1980s. Four years ago the court commissioners voted to spend $207,800 for FullCourt software to replace the 3000 application. The court has been paying for the software -- which will be loaded with data May 5-9 -- over three years at no interest. All court data is being extracted and replaced during the workweek of May, when only jury trials, emergency hearings and essential dockets will be heard.

The court is predicting a go-live date of May 12. The HP 3000 will be shut off Friday, May 2, at 5 PM, according to a schedule "that may fluctuate."

The HP 3000 has "outlived its life expectancy, making it essential that we either move on to another system or we go back to paper and pen," according to a statement on the court's website. Converting data is the crucial part of the migration.

No other district court in the state of Kansas has attempted such a challenge.  This data conversion is one of the most important attributes of this project and is carefully being implemented by continuously and repeatedly checking thousands of data elements to ensure that all data converted is “clean” data which is essential to all users. When we finally “go live,” we would sincerely appreciate your careful review of data as you use the system.

Justice Systems of Albuquerque, a 32-year-old company, sells FullCourt. The latest marketing materials for the software company's Professional Services include a testimonial from Chief Information Technical Officer Kelly O'Brien of the Kansas Judicial Branch. The court's announcements did not break out the cost of software versus the cost of professional migration services.

Chief Judge Evelyn Wilson said in a statement, “We know this system affects the entire community. There are bound to be some bumps in the road. While the court has tried to take into consideration the different issues that may arise, there is no way we can address all of them. Initially, we anticipate that productivity may be slower as people get accustomed to the new system. We’ll do our best to accommodate you, and we ask you to do the same."

FullCourt is an enterprise grade application that's broad enough in its scope that the Kansas court had to partition the project. According to the Topeka Capital-Journal, "to manage the conversion to FullCourt, the court broke down the project into several components."

The replacement software includes features such as e-filing of documents. Wyoming state courts have also implemented FullCourt, although an HP 3000 wasn't shut down there.

Posted by Ron Seybold at 01:09 PM in Migration | Permalink | Comments (0)

April 24, 2014

RUG talk notes emulator licensing, recovery

Second of two parts

When CAMUS held its recent user group conference call, MB Foster's Arnie Kwong had advice to offer the MANMAN and HP 3000 users about the CHARON emulator for PA-RISC systems like the 3000. A more complex environment than HP's decade-old 3000 hardware is in place to enable things like powerfail recovery while protecting data. And readying licenses for a move to the Stromasys CHARON 3000 emulator means you've got to talk to somebody, he said.

"Everybody is pretty helpful in trying to keep customers in a licensing move," Kwong said. "If anyone tells you that you don't even have to ask, and that you're just running a workalike, that would be a mistake. You have to have an open and fair conversation. Not doing so, and then having a software problem, could be a fairly awkward support conversation. You can't make the assumption you'll be able to make this move without any cost." 

If you create secondary processing capacity through CHARON, you'll have to execute new licenses for those licenses. But most of the third party vendors are going to be pretty reasonable and rational. We've all known each other for decades. People who do lots of IT procurement understand straightforward rules for handling that. 

Kwong said that CHARON prospects should make a catalog of their MPE software applications and utilities, and then talk to vendors about tech compatibility, too.

In manufacturing IT in particular, costs have been declining recently. "Short of somebody paying $10-15 million to re-engineer around SAP, or Infor's other products, most of the incremental spending in the MANMAN and 3000 environments has been to extend life. People do a lot of stuff now on Excel spreadsheets and SQL Server databases around the ERP system. We look to see if the 3000 is the essential piece, and often it is. We look at what other things are affected if we change that 3000 piece."

Kwong said that MB Foster has not done MANMAN-specific testing against its in-lab CHARON installations yet.

Data integrity questions came up from Mike Hornsby, who wanted to know about using transactional testing to evaluate possible data loss. Of the HP 3000's powerfail environment, Kwong said, "it's been one of the key strengths of the 3000 environment in particular." The tests at MB Foster haven't revealed any data loss. Kwong didn't dismiss the possibility, however.

"This is theory, but I'll say this: One of the things you have at risk during the crash recovery process is either in the CHARON emulator, or the underlying infrastructure in the cloud environment that you're running it in." In this meaning of the word cloud, Kwong was referring to the VMware hosting that's common to the 3000 CHARON experience.

"In those instances you could have failures that were never in their wildest imaginations considered by the folks who built this software-hardware combination. I have not seen anything personally in our testing where things have been horrendously corrupted, rolled over and died. But inherently in the environments they're running, there are assumptions of database logfiles, and particularly in certain key files and so forth, where your warmstart processing can be at risk." 

When such failures occur -- and they can happen in HP's provided hardware -- "You have the same predictability in an emulated environment as you do in the 3000 hardware environment. I don't think I'd lose a lot of sleep over it." However, networking and storage architecture issues are different for the emulated MPE hardware than for HP's native hardware, he added.

But application expenses take the forefront over hardware and platform issues at the sites where MB Foster has discussed transitions of any kind. "When you take the context where the 3000 is running from a business standpoint, yes, you have licensing issues for maintenance and so forth," Kwong said. "But as a total percentage of the cost to the enterprise, the application's value and the application's cost to change anything, usually begins to predominate. 

"It's not the fact that you have no-cost terminals and low-cost hardware anymore, it's what that application's power brings you. We've seen that newer managers who come in from outside at these sites with stable HP applications have vastly different expectations for what the application's going to deliver — also, different demands for the applications portfolio — than people who've been there for decades running the same architecture. The platform discussions usually aren't major economic drivers. 

"Running a 3000 application in another environment, such as Windows or Linux, is never zero, although it's cheaper to do that in a Stromasys environment. We need to carefully consider the hardware scalability performance availability, and certain kinds of communication and networking interfaces that aren't qualified for use in the Stromasys environment yet."

"We look at how to approach the problem of migration and its processes. In talking to our customers and concerns they have at small one-person shops with boxes running for 20 years, a move will take a year or two years to do. People that we talk to say they're gotten by for a long time without having to pay the kind of money needed to migrate to SAP or Oracle, or FMS or JD Edwards. Those alternatives are on the list of things they look at.

"Few people are talking about development stages for the kinds of complex environments the folks on this call represent. The days of large scale development have pretty much gone by the board. Everybody's talking about what kind of capacity they can buy, and what kind of features can they buy, rather than concentrate on what kinds of things they could move to the new environment.

"For them, the Stromasys approach says they'll leave their software base the same and go to new hardware, essentially. There are a lot of business assumptions and a lot of applications assumptions that might change because you're running in that new hardware environment. Things that were always based on the 7x24 capability, running without a lot of staff expense — all of those things are now open to question and rethink. We encourage people to take a step back and look at their business planning assumptions and business models, because that's the foundation for why they bought the 3000 in the first place".

Kwong said he believes most of the users on the call could agree HP didn't do badly by them in the initial offering of high-value, investment-protected systems. Now that the system is into its second decade beyond HP's exit announcement, protecting that value deserves some fresh assessment.

Posted by Ron Seybold at 11:38 AM in Homesteading, Migration | Permalink | Comments (0)

April 21, 2014

A week-plus of bleeds, but MPE's hearty

There are not many aspects of MPE that seem to best the offerings from open source environments. For anyone who's been tracking the OpenSSL hacker-door Heartbleed, though, the news is good on 3000 vulnerability. The system is better off than more modern platforms, in part because it's more mature. If you're moving away from mature and into migrating to open source computing, then listen up.

Open source savant Brian Edminster of Applied Technologies told us why MPE is in better shape.

I know that it's been covered other places, but don't know if it's been explicitly stated anywhere in MPE-Land: The Heartbleed issue is due to the 'heartbeat' feature, which was added to OpenSSL after any known builds for MPE/iX.

That's a short way of saying: So far, all the versions of OpenSSL for MPE/iX are too old to be affected by the Heartbleed vulnerability. Seems that sometimes, it can be good to not be on the bleeding edge.
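For managers tending newer hosts alongside a 3000, a quick sanity check is possible from any Python-equipped box. Here's a minimal sketch, not an audit tool; it assumes the service in question uses the same OpenSSL that Python's ssl module was linked against, and note that distro vendors sometimes backport the fix without bumping the version letter.

    import ssl

    # 1.0.1 through 1.0.1f are the Heartbleed-vulnerable releases; 1.0.1g is
    # fixed. OPENSSL_VERSION_INFO is (major, minor, fix, patch, status), with
    # the patch letter encoded as a number ('f' == 6).
    VULNERABLE = {(1, 0, 1, patch) for patch in range(0, 7)}

    major, minor, fix, patch = ssl.OPENSSL_VERSION_INFO[:4]
    print(ssl.OPENSSL_VERSION)
    if (major, minor, fix, patch) in VULNERABLE:
        print("In the vulnerable 1.0.1-1.0.1f range -- check for vendor backports.")
    else:
        print("Outside the vulnerable range.")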

However, the 3000 IT manager -- a person who usually has a couple of decades of computing experience -- may be in charge of the more-vulnerable web servers. Linux is used a lot for this kind of thing. Jeff Kell, whose on-the-Web servers deliver news of 3000s via the 3000-L mailing list, outlined repairs needed and advice from his 30-plus years of networking -- in MPE and all other environments.

About 10 days after the news rocked the Web, Kell -- one of the sharpest tools in the drawer of networking -- posted this April 17 summary on the challenges and which ports to watch.

Unless you've had your head in the sand, you've heard about Heartbleed. Every freaking security vendor is milking it for all it's worth. It is pretty nasty, but it's essentially "read-only" without some careful follow-up. 

Most have focused on SSL/HTTPS over 443, but other services are exposed (SMTP services on 25, 465, 587; LDAP on 636; others). You can scan, and it might turn up the obvious ones, but local services may have been compiled against "static" SSL libraries, and be vulnerable as well.

We've cleaned up most of ours (we think, still scanning); but that just covers the server side.

There are also client-side compromises possible.

And this stuff isn't theoretical, it's been proven third-party... 

https://www.cloudflarechallenge.com/heartbleed

Lots of folks say replace your certificates, change your passwords, etc.  I'd wait until the services you're changing are verified secure.

Most of the IDS/IPS detections of the exploits are broken in various ways. STARTTLS works by negotiating a connection, establishing keys, and bouncing to an encrypted transport. IDS/IPS can't pick up a Heartbleed probe once it's encrypted; they're only after the easy pre-encryption handshake.

It's a mess for sure. But it's not safe to declare anything secure just yet.

Stay tuned, and avoid the advertising noise.
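Kell's port list translates directly into a first-pass survey. What follows is a quick-and-dirty sketch (the hostname is a placeholder -- point it at your own server) that only checks which services answer a TLS handshake at all. It does not probe the heartbeat extension itself, and STARTTLS ports such as 25 and 587 won't answer a raw TLS hello, so a clean result is no guarantee.

    import socket
    import ssl

    # Implicit-TLS ports to survey; STARTTLS services (25, 587) negotiate
    # encryption at the protocol level and are not covered by this pass.
    PORTS = {443: "HTTPS", 465: "SMTPS", 636: "LDAPS", 993: "IMAPS", 995: "POP3S"}

    def tls_answers(host, port, timeout=3.0):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # we only care whether TLS answers
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True
        except (OSError, ssl.SSLError):
            return False

    host = "www.example.com"  # placeholder
    for port, name in sorted(PORTS.items()):
        verdict = "answers TLS" if tls_answers(host, port) else "no TLS answer"
        print(f"{host}:{port} ({name}): {verdict}")

Any port that answers is a candidate for a real Heartbleed test with a patched scanner, or the free checking tools linked below.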

Posted by Ron Seybold at 06:45 AM in Migration, Newsmakers, User Reports, Web Resources | Permalink | Comments (0)

April 18, 2014

Denying Interruptions of Service

For the last 18 hours, the 3000 Newswire's regular blog host TypePad has been having outages. (Now that you're reading this, TypePad is back on its feet.) More than once, the web resource for the Newswire has reported it's been under a Denial of Service attack. I've been weathering the interruption of our business services up there, mostly by posting a story on my sister-site, Story for Business.

We also notified the community via Twitter about the outage and alternative site. It was sort of a DR plan in action. The story reminds me of the interruption saga that an MPE customer faces this year. Especially those using the system for manufacturing.

MANMAN users as well as 3000 owners gathered over the phone on Wednesday for what the CAMUS user group calls a RUG meeting. It's really more of an AUG: an Applications User Group. During the call, it was mentioned there are probably more than 100 different manufacturing packages available for business computers like the HP 3000. Few of them, however, have a design as ironclad against interruption as the venerable MANMAN software. Not much service could be denied to MANMAN users because of a Web attack, the kind that's bumped off our TypePad host over the last day. MANMAN only employs the power of the Web if a developer adds that interface.

This is security through obscurity, a backhanded compliment that a legacy computer gets. Why be so condescending? It might be because MPE is overshadowed by computer systems that are so much newer, more nimble, and open to a much larger world.

They have their disadvantages, though. The widely-known designs of Linux and Windows attract these attempts to deny their services. Taking something like a website host offline has a cost to its residents, such as those of us who reside on TypePad. Our sponsors had their messages denied an audience. In the case of a 3000, when it gets denied it's much more likely to be a failure of hardware, or a fire or flood. Those crises have more rapid repairs. But that's only true if a 3000 owner plans for the crisis. Disaster Recovery is not a skill to learn in-situ, as it were. Practicing the deployment is about as popular as filing taxes. And just as necessary.

Another kind of disruption can be one that a customer invites. There are those 100 alternatives to MANMAN out there in the market, software an MPE site might choose to use. Manufacturing software is bedeviled with complexity and nuance, a customized story a company tells itself and its partners about making an object.

There’s a very good chance that the company using MPE now, in the obscurity of 2014, has put a lot of nuance into its storytelling about inventory, receivables, bill of materials and more. Translating that storytelling into new software, one of those 100, is serious work. Like any other ardent challenge, this translation — okay, you might call it a migration — has a chance to fail. That’s a planned failure, though, one which usually won’t cost a company its audience like a website service denial.

The term for making a sweeping translation happen lightning-quick is The Magic Weekend. 48 hours of planned offline transformation, and then you’re back in front of the audience. No journey to the next chapter of the MPE user’s story — whether it’s a jump to an emulator that mimics Hewlett-Packard computers, or the leap to a whole new environment — can be accomplished in a Magic Weekend. Business computers don’t respond to magic incantations.

The latest conference call among MANMAN users invoked that warning about magic. Turning the page on the story where Hewlett-Packard's hardware was the stage for the software of MANMAN and MPE — that's an episode with a lot longer running time than any weekend. Even if all you're doing is changing the stage, you will want to test everything. You don't want to be in the middle of serving hundreds and hundreds of audience members at a time, only to have the lights grow too dim to see the action on the stage.

Posted by Ron Seybold at 04:45 PM in Homesteading, Migration | Permalink | Comments (0)

April 11, 2014

Again, the 3000's owners own a longer view

Heartbleed needs a repair immediately. Windows XP will need some attention over the next three years, as the client environment most favored by migrating 3000 sites starts to age and get more expensive. XP is already "off support," for whatever that means. But there's a window of perhaps three years where change is not as critical as a repair to Heartbleed's OpenSSL hacker window.

Then there's MPE. The OS already has gone through more than a decade of no new sales. And this environment that's still propping up some business functions has now had more than five years of no meaningful HP lab support. In spite of those conditions, the 3000's OS is still in use, and by one manager's accounting, even picking up a user in his organization.

"Ending?" Tim O'Neill asks with a rhetorical tone. "Well, maybe MPE/iX will not be around 20 years from now, but today one of our people  contacted me and said they need to use the application that runs on our HP 3000. Isn't that great? Usage is increasing!"

Pondering if MPE/iX will be around in 20 years, or even in 13 when the end-of-2027 date bug surfaces, just shows the longer view the 3000 owner still owns. Longer than anything the industry's vendors have left for newer, or more promising, products. My favorite avuncular expert Vladimir Volokh called in to leave a message about his long view of how to keep MPE working. Hint: this septuagenarian plans to be part of the solution.

Vladimir is bemused at the short-term plans that he runs across among his clientele. No worries from them about MPE's useful lifespan. "I'll be retired by then," say these managers who've done the good work of IT support since the 1980s. This retirement-as-futures plan is more common than people would like to admit.

Volokh took note of our Fixing 2028 update awhile back. "It's interesting that you say we've still got more than 13 years left. Almost every user who I've told has said, 'Oh, by then, I'll retire.' My answer is, 'Not me. I will be just 90 years old. You call me, and we'll work out something.' "

I invite you to listen to his voice, delivering his intention to keep helping and pushing MPE into the future -- a longer one than people might imagine for something like XP.

Why do some 3000 experts say a longer view seems like a good chance? Yes, one obvious reason is that they don't want to say goodbye to the meaningful nature of their expertise, or the community they know. I feel that same way, even though I only tell the stories of this community.

But there's another reason for the long view. MPE has already served in the world for 40 years. HP thought such longevity so unlikely that it didn't even program for a Y2K event. Then the vendor assumed more than 80 percent of sites would be off the platform within four years of HP's "we're quitting" notice. Then it figured an extra two years would do the job.

Wrong on all three counts. Change must prove its value, and right soon, if you intend to begin changing. There's another story to tell about that reality, one from the emulator's market, which I'll tell very soon. In the meantime, change your passwords if:

1. A website you use is vulnerable to Heartbleed, or has been (check here with a free tool; list below),

and

2. It has now been repaired.

Here's a list of websites which were vulnerable, from GitHub. Yahoo is among them, which means that AT&T broadband customers have some password-changing to do. That's very-short-view change.

Posted by Ron Seybold at 01:46 PM in Homesteading, Migration | Permalink | Comments (0)

April 10, 2014

Heartbleed reminds us all of MPE/iX's age

The most wide-open hole in website security, Heartbleed, might have bypassed the web security tools of the HP 3000. Hewlett-Packard released WebWise/iX in the early 2000s. The software included SSL security that was up to date, back in its day. But Gavin Scott of the MPE and Linux K-12 app vendor QSS reminds us that the "security through antiquity" protection of MPE/iX is a blessing that's not in disguise.

WebWise was just too late to a web game already dominated by Windows at the time -- and even more so, by Linux. However, the software that's in near-total obscurity doesn't use the breached OpenSSL 1.0.1 or 1.0.2-beta versions. Nevertheless, older software running a 3000 -- or even an emulated 3000 using CHARON -- presents its own challenges, once you start following the emergency repairs of Heartbleed, Scott says.

It does point out the risks of using a system like MPE/iX, whose software is mostly frozen in time and not receiving security fixes, as a front-line Internet (or even internal) server. Much better to front-end your 3000 information with a more current tier of web servers and the like. And that's actually what most people do anyway, I think.
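Scott's front-end advice is usually carried out with a current Apache or nginx reverse proxy doing the TLS work on a patched OpenSSL. Purely as a toy illustration of the tiering idea -- the backend hostname and ports are hypothetical, and a real deployment would terminate TLS at this layer rather than serve plain HTTP -- here's the shape of it:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # Hypothetical legacy backend; it never faces the Internet directly.
    LEGACY_HOST = "http://legacy-3000.example.internal:8080"

    class FrontEnd(BaseHTTPRequestHandler):
        def do_GET(self):
            # The current-tier box fields the outside connection; only it
            # talks to the legacy host, over a private network.
            with urlopen(LEGACY_HOST + self.path) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "text/plain"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), FrontEnd).serve_forever()

The point of the design: the frozen-in-time system gets a chaperone that can be patched on Internet time.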

Indeed, hardly any 3000s are used for external web services. And with the ready availability of low-cost Linux hosts, any intranets at 3000 sites are likely to be handled by that open-sourced OS. The list of compromised distros is long, according to James Byrne of Harte & Lyne, who announced the news of Heartbleed first to the 3000 newsgroup.

The distributions now in use which are at risk, until each web administrator can apply the security patch, include

Debian Wheezy
Ubuntu 12.04.4 LTS
CentOS 6.5
Fedora 18
OpenBSD 5.3
FreeBSD 10.0
NetBSD 5.0.2
OpenSUSE 12.2

The PA-RISC architecture of the HP 3000, emulated on CHARON HPA/3000, could also provide a 3000 manager with protection even if somehow an MPE/iX web server had been customized to use OpenSSL 1.0.1, Scott says.

I'm pretty certain that the vulnerable versions of OpenSSL have never been available on MPE/iX. However, it is possible that the much older OpenSSL versions which were ported for MPE/iX may have other SSL vulnerabilities. I haven't looked into it. Secure Apache or another web server dependent on OpenSSL would be the only likely place such a vulnerability could be exposed.

There's also a chance that MPE/iX, even with a vulnerable web server, might have different behavior -- as its PA-RISC architecture has the stack growing in the opposite direction from x86. As such, PA-RISC may do more effective hardware bounds checking in some cases. This checking could mitigate the issues or require MPE/iX-specific knowledge and effort on the part of an attacker in order to exploit vulnerabilities. All the out-of-the-box exploit tools may actually be very dependent on the architecture of the underlying target system.

Security through such obscurity has been a classic defense for the 3000 against the outside world of the web. But as Scott notes, it's a reminder of how old the 3000's web and network tools are -- simply because there's been little to nothing in the way of an update for things like WebWise Apache Server.

But there's still plenty to worry about, even if a migrated site has moved all of its operations away from the 3000. At the website The Register, a report from a white-hat hacker throws the scope of Heartbleed much wider than just web servers. It's hair-raising, because just about any client-side software -- yeah, that browser on any phone, or on any PC or Mac -- can have sensitive data swiped, too.

In a presentation given yesterday, Jake Williams – aka MalwareJake – noted that vulnerable OpenSSL implementations on the client side can be attacked using malicious servers to extract passwords and cryptographic keys.

Williams said the data-leaking bug “is much scarier” than the gotofail in Apple's crypto software, and his opinion is that it will have been known to black hats before its public discovery and disclosure.

Posted by Ron Seybold at 11:18 AM in Migration, Newsmakers | Permalink | Comments (0)

April 09, 2014

How SSL's bug is causing security to bleed

Computing's Secure Sockets Layer (SSL) forms part of the bedrock of information security. Companies have built products around SSL, and vendors have wired its protocols into operating systems and applied its encryption to data transport services. Banks, credit card providers, even governments rely on its security. In the oldest days of browser use, SSL displayed that little lock in the bottom corner that assured you a site was secure -- so type away on those passwords, IDs, and sensitive data.

In a matter of days, all of the security legacy from the past two years has virtually evaporated. OpenSSL, the open source implementation that carries most of today's SSL traffic, has developed a large wound -- big enough to let anyone read secured data, if they can incorporate a hack of the Heartbeat portion of the standard. A Finnish security firm has dubbed the exposed hack Heartbleed.

OpenSSL has made a slow and as-yet incomplete journey to the HP 3000's MPE/iX. Only an ardent handful of users have made efforts to bring the full package to the 3000's environment. In most cases, when OpenSSL has been needed for a solution involving a 3000, Linux servers supply the required security. Oops. Now Linux implementations of OpenSSL have been exposed. Linux is driving about half of the world's websites, by some tallies, since the Linux version of Apache is often in control.

One of the 3000 community's better-known voices about mixing Linux with MPE posted a note in the 3000 newsgroup over the past 48 hours to alert Linux-using managers. James Byrne of Harte & Lyne Ltd. explained the scope of a security breach that will require a massive tourniquet. To preface his report: Transport Layer Security (TLS) and its predecessor SSL encrypt the data of network connections in the TCP/IP stack. They have even done this for MPE/iX, but in older, safe versions. Byrne summed up the current threat.

There is an exploit in the wild that permits anyone with TLS network access to any system running the affected version of OpenSSL to systematically read every byte in memory. Among other nastiness, this means that the private keys used for Public Key Infrastructure on those systems are exposed and compromised, as they must be loaded into memory in order to perform their function.

It's something of a groundbreaker, this hack. These exploits are not logged, so there will be no evidence of compromises. It’s possible to trick almost any system running any version of OpenSSL released over the past two years into revealing chunks of data sitting in its system memory.

The official security report on the bug, from OpenSSL.org, does its best to make it seem like there's a ready solution to the problem. No need to panic, right?

A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.

Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected, including 1.0.1f and 1.0.2-beta1.

Thanks for Neel Mehta of Google Security for discovering this bug and to Adam Langley and Bodo Moeller for preparing the fix.

Affected users should upgrade to OpenSSL 1.0.1g. Users unable to immediately upgrade can alternatively recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.

1.0.2 will be fixed in 1.0.2-beta2
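That "missing bounds check" is easy to picture. Below is a toy simulation in Python, emphatically not OpenSSL's actual C code: the flawed handler honors whatever length the requester claims, while the repaired one discards any heartbeat whose claimed length exceeds the payload actually received, which is what RFC 6520 prescribes.

    # Process "memory": the heartbeat payload lands next to other secrets.
    MEMORY = bytearray(b"....secret-key-material....user:admin pw:hunter2....")

    def heartbeat_flawed(payload: bytes, claimed_len: int) -> bytes:
        MEMORY[:len(payload)] = payload
        # Trusts the requester's length field: an oversized claim echoes
        # back whatever happened to sit beside the payload.
        return bytes(MEMORY[:claimed_len])

    def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
        if claimed_len > len(payload):   # the bounds check that was missing
            return b""                   # RFC 6520: silently discard
        return payload[:claimed_len]

    print(heartbeat_flawed(b"ping", 44))  # b'ping' plus leaked neighbors
    print(heartbeat_fixed(b"ping", 44))   # b'' -- request discarded

Repeat the oversized request often enough, against a real server, and those leaked neighbors add up to keys and passwords.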

For the technically inclined, there's a great video online that explains all aspects of the hack. Webserver owners and hosts have their work to do in order to make their sites secure. That leaves out virtually every HP 3000, the server that was renamed e3000 in its final HP generation to emphasize its integration with the Internet. Hewlett-Packard never got around to implementing OpenSSL security in its web services for MPE/iX. 3000 systems are blameless, but that doesn't matter as much as insisting your secure website providers apply that 1.0.1g upgrade.

The spookiest part of this story is that without the log evidence, nobody knows if Heartbleed has been used over the past two years. Byrne's message is directed at IT managers who have Linux-driven websites in their datacenters. Linux has gathered a lot of co-existence with MPE/iX over the last five years and more. This isn't like a report of a gang shooting that's happened in another part of town. Consider it more of a warning about the water supply.

In a bit of gallows humor, it looks as if the incomplete implementation of OpenSSL, frozen in an earlier edition of the software, puts it back in the same category as un-patched OpenSSL web servers: not quite ready for prime time.

Posted by Ron Seybold at 09:50 PM in Homesteading, Migration, Newsmakers, User Reports | Permalink | Comments (0)

April 08, 2014

Here it is: another beginning in an ending

Today's the day that Microsoft gives up its Windows XP business, but just like the HP 3000 exit at Hewlett-Packard, the vendor is conflicted. No more patches for security holes, say the Redmond wizards. But you can still get support, now for a fee, if you're a certain kind of Windows XP user.

It all recalls the situation of January 2009, when the support caliber for MPE/iX was supposed to become marginal. That might have been true for the typical kind of customer who, like the average business XP user, won't be paying anything to Microsoft for Service Packs that used to be free. But in 2009 the other, bigger sort of user was still paying HP to take 3000 support calls, fix problems, and even engineer patches if needed.

A lot of those bigger companies would've done better buying support from smaller sources. Yesterday we took note of a problem with MPE/iX and its PAUSE function in jobstreams, uncovered by Tracy Johnson at Measurement Specialties. In less than a day, a patch that seemed to be as missing as that free XP support of April 8 became available -- from an independent support vendor. What's likely to happen for XP users is the same kind of after-market service the 3000 homesteader has enjoyed.

Johnson even pointed us to a view of the XP situation and how closely it seems to mirror the MPE "end of life," as Hewlett-Packard liked to call the end of 2010. "Just substitute HP for Microsoft," Johnson said about a comparison with makers of copiers and makers of operating systems.

Should Microsoft Be Required To Extend Support For Windows XP? The question is being batted around on the Slashdot website today. One commenter said that if the software industry had to stick to the rules for the rest of the office equippers, things would be different. Remember, just substitute HP (and MPE) for Microsoft and XP.

If Windows XP were a photocopier, Microsoft would have a duty to deal with competitors who sought to provide aftermarket support. A new article in the Michigan Law Review argues that Microsoft should be held to the same duty, and should be legally obligated to help competitors who wish to continue to provide security updates for the aging operating system, even if that means allowing them to access and use Windows XP's sourcecode.

HP did, given enough time, help in a modest way to preserve the maintainability of MPE/iX. The vendor sold source code licenses for $10,000 each to support companies. In at least one case, the offer of help was proactive. Steve Suraci of Pivital Solutions said he was called by Alvina Nishimoto of HP and asked, "You want to purchase one of these, don't you?" The answer was yes. Nobody knew what good a source code license might do in the after-market. But HP was not likely to make the offer twice, and the companies who got one took on the expense as an investment in support in the future.

But there was a time in the 3000's run-up to that end-of-HP Support when the community wanted to take MPE/iX into open source status. That's why the advocacy group was named OpenMPE. Another XP commenter on Slashdot echoed the situation the 3000 faced during the first years of its afterlife countdown.

(Once again, just substitute HP and MPE for Microsoft and XP. In plenty of places, they'll be used together for years to come.)

XP isn't all that old, as evidenced by the number of users who don't want to get off of it. It makes sense that Microsoft wants to get rid of it -- there's no price for a support contract that would make it mutually beneficial to keep tech support trained on it and developers dedicated to working on it. But at the same time, Microsoft is not the kind of company that is likely to release it to the public domain either. The last thing they would want is an open source community picking it up, keeping it current with security patches and making it work on new hardware. That's the antithesis of the forced upgrade model.

Note: MPE/iX has been made to work with new hardware via the CHARON emulator. Patches are being written, too, even if they are of the binary variety. XP will hope to be so lucky, and it's likely to be. If not, there's the migration to Windows 7 to endure. But to avoid that expense for now, patches are likely to be required. The 3000 community can build many of them. That's what happens when a technology establishes reliability and matures.

Posted by Ron Seybold at 06:12 PM in Migration, News Outta HP | Permalink | Comments (1)

April 04, 2014

Save the date: Apr 16 for webinar, RUG meet

April 16 is going to be a busy day for MB Foster's CEO Birket Foster.

Long known for his company's Wednesday Webinars, Foster will be adding a 90-minute prelude on the same day as his own webinar about Data Migration, Risk Mitigation and Planning. That Wednesday of April 16 kicks off with the semi-annual CAMUS conference-call user group meeting. Foster is the guest speaker, presenting the latest information he's gathered about Stromasys and its CHARON HP 3000 emulator.

The user group meet begins at 10:30 AM Central Time, and Foster is scheduled for a talk -- as well as Q&A from listeners about the topic -- until noon that day. Anyone can attend the CAMUS meeting, even if they're not members of the user group. Send an email to CAMUS leader Terri Lanza at tlanza@camus.org to register, but be sure to do it by April 15. The conference call's phone number will be emailed to registrants. You can phone Lanza with questions about the meeting at 630-212-4314.

Starting at noon, there's an open discussion for attendees about any subject for any MANMAN platform (that would be VMS, as well as MPE). The talk in this soup tends to run to very specific questions about the management and use of MANMAN. Foster is more likely to field questions more general to MPE. The CHARON emulator made its reputation among the MANMAN users in the VMS community, among other spots in the Digital world. You don't have to scratch very deep to find satisfied CHARON users there.

Then beginning at 1 PM Central, Foster leads the Data Migration, Risk Mitigation and Planning webinar, complete with slides and ample Q&A opportunity.

Registration for the webinar is through the MB Foster website. Like all of the Wednesday Webinars, it runs between 1-2 PM. The outline for the briefing, as summed up by the company:

Data migration is the process of moving an organization’s data from one application to another application—preferably without disrupting the business, users or active applications.

Data migration can be a routine part of IT operations in today's business environment, providing service to the whole company -- giving users the data they need when they need it, especially for reporting, BI (Business Intelligence) or analytics (including Excel spreadsheets), and occasionally for a migration to a new application. How can organizations minimize the impacts of data migration: downtime, data loss, and cost?

In this webinar we outline the best way to develop a data conversion plan that incorporates risk mitigation, and outlines business, operational and technical challenges, methodology and best practices.

The company has been in the data migration business since the 1980s. Data Express was its initial product for extracting and controlling data. It revamped the products after Y2K to create the Universal Data Access (UDA) product line. MBF-UDACentral supports the leading open source databases in PostgreSQL and MySQL, plus Eloquence, Oracle, SQL Server, DB2, and TurboIMAGE, as well as less-common databases such as Progress, Ingres, Sybase and Cache. The software can migrate data between any of these databases.

Posted by Ron Seybold at 07:24 PM in Homesteading, Migration, Web Resources | Permalink | Comments (0)

March 26, 2014

Twice as many anti-virals: not double safety

Editor's note: While 3000 managers look over the need to update Windows XP systems in their company, anti-virus protection is a part of the cost to consider. In fact, extra anti-virus help might pose a possible stop-gap solution to the end of Microsoft's XP support in less than two weeks. A lack of new security patches is part of the new XP experience. Migrating away from MPE-based hosting involves a lot more reliance on Windows, after all. Here's our security expert Steve Hardwick's lesson on why more than one A/V utility at a time can be twice as bad as a single good one.

By Steve Hardwick, CISSP
Oxygen Finance

If one is good, then two is better. Except with anti-virus software.

When it comes to A/V software there are some common misconceptions about capabilities. Recently some vendors, such as Adobe, have started bundling anti-virus components as free downloads with their updates. Some managers believe that if you have one anti-virus utility, a second can only make things safer. Once we look at how anti-virus software operates, you'll see why this is not the case. In fact, loading a second A/V tool can actually do more damage than good.

The function of an anti-virus utility is to detect and isolate files or programs that contain viruses. There are two fundamental ways in which the A/V utility does this. The anti-virus program has a data file that contains signatures for known viruses. First, any files that are saved on the hard drive are scanned against those signatures to see if they contain malicious code. This is very similar to searching for fingerprints. Once the A/V utility finds a match, the file is identified as potentially dangerous and quarantined to prevent any infection. Second, the anti-virus utility will intercept requests to access a file and scan it before it is run. This requires that the anti-virus program can inspect the file prior to it being launched.
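As a bare-bones sketch of that fingerprint search, signature scanning boils down to pattern matching. The signature list here is made up, apart from a fragment of the harmless EICAR test string:

    import pathlib

    # Hypothetical signature database: name -> byte pattern. The EICAR
    # fragment is the one real-world pattern that's safe for a demo.
    SIGNATURES = {
        "EICAR-Test-File": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
        "Demo.FakeVirus": b"\xde\xad\xbe\xef\x13\x37",
    }

    def scan_file(path: pathlib.Path):
        try:
            data = path.read_bytes()
        except OSError:
            return []
        return [name for name, pattern in SIGNATURES.items() if pattern in data]

    for path in pathlib.Path(".").rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                # A real A/V would quarantine here, then defend that action.
                print(f"QUARANTINE {path}: matched {', '.join(hits)}")

Notice that this scanner would flag its own source file -- and the signature database of any second scanner it meets. That is exactly the standoff described next.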

Anti-virus designers are aware that their utility is one of the primary targets of a hacker. After all, if the hacker can bypass the A/V system then it is open to attack, commonly referred to as owned or pwned. So a core component of the A/V system is to constantly monitor its own performance to make sure it has not been compromised. If the A/V system detects that it is not functioning correctly, it will react as if there is a hacking attack and try to combat it. 

So here's what happens if two anti-virus programs are loaded on the same machine. Initially, there are issues as the second system is installed. When the second utility is loaded it contains its own database of known virus signatures. The first anti-virus will see that signature file as something highly dangerous. After all, it will look like it contains a whole mass of virus files. It will immediately stop it from being used and quarantine it. Now the fun starts -- fun that can drive a system into a ditch.

The second anti-virus program will react to the quarantine of its signature file. The second A/V does not know if the issue is another A/V, or a hacker trying to thwart the operation of the system. So it will try to stop the quarantine action of the first A/V. The two systems will battle until one of them gives up and the other wins, or the operating system steps in and stops both programs. Neither outcome is what you're after.

If the two systems do manage to load successfully -- in many cases anti-virus programs are now built to recognize other A/V systems -- then a second battle occurs. When a file is opened, both A/V systems will try to inspect it before it is passed to the operating system for processing. As one A/V tries to inspect the file, the second one will try to stop the action. The two A/V systems will battle it out to take control and inspect the file ahead of each other.

Even if multiple systems do acknowledge each other and decide to work together, there are still some issues left. When a file is accessed, both systems will perform an inspection, and this increases the amount of time the virus scan will take. What's more, the anti-virus programs continually update their signature files. Once a new signature file is loaded, the A/V program will kick off a scan to see if the new file can detect any threats the old one did not catch. In most cases, new signature files arrive daily to the A/V system. That means both systems will perform file scans, sometimes simultaneously. This can bring a system to its knees -- because file scanning can be CPU intensive.

So two is worse than one, and you want to remove one of them. Removing A/V programs can be very tricky. This is because one goal of the hacker is to disable or circumvent the anti-virus system, so the A/V system is designed to prevent these attempts. If A/V programs were easy to uninstall, all the hacker would have to do is launch the uninstall program -- and in many cases, the A/V manufacturer does provide an uninstall program. Unfortunately, in many cases that uninstall may not get rid of all of the elements of the A/V. Several of the A/V manufacturers provide a utility that will clean out any remnants, after the A/V system has been initially uninstalled.

So are there any advantages to having a second A/V system running? There is always a race between A/V companies to get out the latest signatures. Adding more A/V providers may increase your chances of getting wider coverage, but only very marginally. The cost of the decreased performance versus this marginal increase in detection is typically not worth it. Over time, A/V vendors tend to even out in their ability to provide up-to-date signature files.

In summary, the following practices make up a good approach to dealing with the prospects of multiple A/V systems.

1) Read installation screens before adding a new application or upgrade to your system. Think carefully before adding an A/V feature that your current solution provides. Even if a new feature is being provided, it may be worth checking with your current provider to see if they have that function, and adding it from them instead.

2) If you do get a second A/V system in there and you want to remove it, consult the vendor's technical web site regarding removal steps. Most A/V vendors have a step-by-step removal process. Sometimes they will recommend a clean-up tool after the initial uninstall.

3) If you do want to check your A/V system, choose an on-line version that will provide a separate scan without loading a utility. There are many to choose from -- search on "on-line antivirus check" in your favorite engine and pick one that is not from your primary A/V vendor. Be careful: something online may try to quarantine your current A/V system. But this will give you a safe way to check if your current A/V is catching everything.

4) Don't rely on A/V alone. Viruses now come in myriad forms. No longer are they simple attacks on operating system weaknesses. Newer ones exploit the fallibility of browser code and are not dependent on the operating system at all. A good place to start looking at how you can improve your security is the CERT tips page at https://www.us-cert.gov/ncas/tips. By following safe computing practices, one A/V should be sufficient.

5) Beware of impostors. There are several viruses out there that mimic an A/V system. You may get a warning saying that your system is not working and to click on a link to download a better A/V system. Before clicking on the link, check the source of the utility. If you don't know how to do that, don't click on the link. You can always go back to Step 3 and check your A/V yourself.

Posted by Ron Seybold at 01:38 PM in Migration, Newsmakers | Permalink | Comments (0)

March 24, 2014

40 years from a kitchen-size 3000 to 3.4GHz

Forty years ago this spring, the HP 3000 was just gaining some traction in one of its core markets: manufacturing. This was a period when the computer was big enough to take over kitchen space in a software founder's home, according to an HP software VP of the time. That server didn't run reliably, and so got plenty of attention from the software labs of that day's Hewlett-Packard. And if you were fortunate, a system the size of two tall-boy file cabinets could be yours for $99,500 in a starter configuration, with 96KB of core memory.

MPE was so new that Hewlett-Packard would sell the software unbundled for $10,000. The whole collection of server and software would burn off 12,000 BTU per hour. HP included "cooling dissipation" specs for the CX models -- they topped off at a $250,000 unit -- so you could ramp up your air conditioning as needed in your datacenter. (Thanks to the HP Computer Museum for the details).

Those specs and that system surfaced while I wrote the Manufacturing ERP Options from Windows article last week. Just this week I rolled the clock forward to find the smallest HP 3000 while checking on specifications. This 2014-era 3000 system runs off an HP DL380 server fired by a 3.44 GHz chip. It's plenty fast enough to handle the combo of Linux, VMware and the Stromasys CHARON 3000 emulator. And it measures 19 inches by 24 by 3.5.

We've heard over the past year, from Stromasys tech experts, that CPUs of more than 3 GHz are the best fit for VMware and CHARON. It's difficult to imagine that the same operating system which once demanded a 12,000 BTU server has survived to run on that 2U-sized DL380. The newest Generation 8 box retails for about one-tenth of the cost of that '74 HP 3000 System CX server unit. But the CX was all that ASK Computer Systems had to work with, 40 years ago. And HP needed to work with ASK just to bring MPE into reliable service. "It didn't work worth shit, it's true," said Marty Browne of ASK. "But we got free HP computer time."

The leap in technology evokes the distinction between a Windows ERP that will replace ASK's MANMAN, and other choices that will postpone migration. Especially if a company has a small server budget, enough time to transfer data via FTP or tape drive -- and no desire to revise their manufacturing system. What started in a kitchen has made its transition to something small enough to look like a large briefcase, a thousand times more powerful. Users made that happen, according to Browne and retired HP Executive VP Chuck House.

The last time I saw these two in a room together, the No. 2 employee at ASK and HP's chief of MPE software management had a touching exchange over the roots of MANMAN -- an application that's survived over four decades. (No. 1 at ASK would be the Kurtzigs, Andrew and Sandy. It's always been a family affair; their son Andy leads Pearl.com, a for-pay Q&A expert site.) 

At the HP3000 Software Symposium at the Computer History Museum, Browne said that if the 3000 had failed to take root, ASK would have been hung out to dry.

Marty Browne: It used to be so expensive to buy computer time to do development work. And it was so much better a deal for me to do this 3000 development. I was able to put several years of engineering work into my product before I ever sold it. I could not have afforded that since I was bootstrapping my business.

Chuck House: Let me add that was true for Sandy too. She got a free HP 3000 for her kitchen. 

Browne: It was not in the kitchen. We had the first HP 3000 on the computer floor at HP. Did you say kitchen?

House: Correct.

Browne: Yes, we got an HP 3000. We had to work at night, by the way.

House: But it was free time.

Browne: It was free time. It didn’t work worth shit. It’s true. But we got free HP time.

House: No, we used you to debug.

Browne: Pardon me?

House: You were our debuggers.

Browne: Yes, right. HP provided an open house in a lot of ways, I mean that’s part of the HP culture. They were good partners. HP is an excellent partner.

Moderator Burt Grad: So if the 3000s had not been able to sell, you would have been hung out? 

Browne: Yes.

Why is this history lesson important today? You might say that whatever MANMAN's bones were built from is sturdy stuff. Customization, as we noted in that ERP article, makes MANMAN sticky. Robert Mills commented to clarify that after I posted the article.

MANMAN could be customized and added to by the customer because they were given full documentation on the system. ASK would, for a reasonable cost, make modifications to standard programs and supply you with the source code of the modified programs. Even MM/3000 had a Customizer that allowed you to make database and screen changes. Can you do this with MS Dynamics and IFS? Will Microsoft and IFS allow this, and give you the information required?

The answer to the question might be just a flat-out no, of course not. Just as HP stopped selling MPE unbundled, Microsoft and IFS don't customize their applications. But partners -- some perhaps the equivalent of Marty Browne, albeit of different skill -- would like to do that customization. It's just that customization in the modern era, which would run on the same DL380, would come after the host environment transfer, plus the work of configuring and testing the apps and installing a new OS. Then there's the same transfer of data, no small task, which is about the only step these options have in common.

If a migration away from the HP 3000 for ERP is essential, that change could cost as much as that 1974 CX server did. This is one reason why still-homesteading companies will work hard to prove they need that budget. A $2,000 DL380 and disks plus CHARON might be more cost-effective and less disruptive. How much future that provides is something your community is still evaluating. 

Posted by Ron Seybold at 03:56 PM in History, Homesteading, Migration | Permalink | Comments (0)

March 20, 2014

Manufacturing ERP Options from Windows

Even among the companies that host homesteader solutions for manufacturers, there's a sense that the long-term plan will involve Windows rather than MPE. The length of that term varies, of course, depending on the outlook for the current software in place. Customization keeps MPE systems in charge at companies very small and some large ones (albeit in small spots at those giants, like Boeing).

Moving a 3000 installation away from MANMAN -- first created in the 1970s and, after five ownership changes, still serving manufacturers -- is a skill at The Support Group's Entsgo group. In that TSG practice, the IFS suite is available and can be installed to replace MANMAN, software which began development at a kitchen table in the middle 1970s. (That's if HP's former executive VP of software Chuck House is to be believed. He said that HP sent a 3000 to ASK CEO Sandy Kurtzig's kitchen when MANMAN was still being debugged, as was MPE itself.)

IFS -- which you can read up on at the Entsgo webpages of the Support Group site -- is just one of several replacement applications for manufacturers. Like IFS, Microsoft Dynamics GP has a wide range of modules to cover all the needs of a company using a 3000. Like any replace-to-migrate strategy, there's a lot of customized business logic to carry forward. But that's what a service company like TSG does, in part to keep down the costs of migrating.

TSG's CEO Terry Floyd said the Microsoft solution is battle-tested. Dynamics also happens to be a solution that our homesteading service hoster of a couple of days ago offers as a Windows migration target. Floyd says:

Several companies have converted from MANMAN to MS Dynamics, including one company in SoCal; that was 10 years ago. It's a fairly mature product by now, and had some great features when I checked it out way back when.

Windows used to be anathema to the 3000 IT director, at least when it was considered as an enterprise-grade solution. Those days are long gone -- just as vanished as the sketchy beginnings of MPE itself, from its earliest days, and then again when it became a 32-bit OS in the late 1980s.

So it makes sense that someone who knows the genuine article in ERP, MANMAN, could have a positive review of a Windows replacement -- whether it's Microsoft's Dynamics, or IFS. Floyd said:

There are dozens of viable ERP alternatives now (some industry specific, but many general purpose for all types of manufacturers.) There used to be hundreds. MS Dynamics is not as good as IFS, but choosing Microsoft now is considered as safe as choosing IBM was in the early 1980's. And at least you know they won't get bought by [former MANMAN owner] Computer Associates :)   

Microsoft bought several ERP packages from Europe (one big one from Denmark, as I recall) and merged them together about 2002. They didn't write [that app suite] but they certainly have a viable product and a sizable user base, after this many years into it.

Posted by Ron Seybold at 12:23 PM in Migration | Permalink | Comments (1)

March 19, 2014

A year-plus later, Ecometry awaits a map

About one year ago, users of the Ecometry multi-channel software were wondering what the future might be for their software. According to analysts, the software (now joined up with the Escalate Retail system, to add extra Point of Sale power) is being used by 60 percent of retailers doing $200 million or less in annual sales. That's a lot of companies, and some will sound very familiar to the 3000 community: catalog and website providers like M&M Mars, or retailers with strong online store presences like Hot Topic. These were all part of the Ecometry community that's been folded into a much larger entity.

That would be JDA, a company large enough to join forces with Red Prairie in early 2013. But not large enough to deliver a futures map for the Ecometry customer. These customers have been loath to extend their Ecometry/Escalate installations until they get a read on the tomorrow they can expect from JDA.

The JDA Focus conference comes up in about a month, and right now there's only one certain piece of news. JDA will meet for the last time, via conference call, with the Ecometry/Escalate user group before Focus opens up. That's not extending contact with customers. There's a total of six meetings, including one meet-and-greet and two updates on enhancement requests.

MB Foster's been tracking the Ecometry situation for years by now. As a result of being a partner with Red Prairie, the company has been a JDA partner since the merger of 15 months ago. In all that time, says CEO Birket Foster, no evidence of product planning has emerged from the larger company. JDA is very big, he notes: more than 130 software suites big. Ecometry is just one. JDA is so large it now offers a JDA Eight, which is 30 applications in one bundle.

Foster's view, backed up by work over the years in migrating Ecometry's MPE sites to Ecometry Open, is that anyone who's making a migration to the Open version is in a better place to react to whatever plan JDA might introduce. "I think it's possible there's nobody left in JDA who can even spell MPE, let alone know what it means to Ecometry sites," he said.

Foster would like to assemble a consortium of companies, partnering together, to assist in these migrations to Ecometry Open. In part, that's because the estimate for a migration which the customers get from JDA today runs 6-12 weeks. This is accurate only to the point that it describes installing the Open software on Windows or Unix servers (for SQL Server or Oracle databases) and Windows servers (for the app.) That's the part of the project where JDA (nee Red Prairie, nee Escalate) helps with the software transfer. But it's only a part of the 3000 customer's migration.

"In a lot of people's minds, from the day they decide to go to Ecometry Open, it's just a month or three," Foster said. "That [JDA] estimate is right. They come in, start it up, make sure the menu screens work right, and they're done. We do the integration work for everything else outside of the main Ecometry stuff."

In the JDA practice, the customer is responsible for loading its own data. "If you have any issues with your data, you might have to get more services from [JDA]," Foster said. "Or in this case, from MB Foster" perhaps along with its partners.

But without a clear roadmap from JDA, these customers cannot make plans to migrate with confidence. About a dozen of them are looking into this prospect, but have to work with only those half-dozen Ecometry sessions scheduled on the Focus agenda for the April 28 meeting. "They're scrambling for content, again," Foster said. "We've been addressing Ecometry customers since 2002, standing with [former CEO] John Marrah [of Ecometry] to talk about migration."

Foster's organization practices assessment as an essential prelude to change. Once an assessment has been completed -- in an overall look -- the cost of the migration can be estimated for the Ecometry site within a plus-or-minus 30 percent range. "After a detail assessment, you should be down to plus-or-minus 10 percent," he said. Other practices include routing parts of a migration that can be postponed with no ill effects to a "parking lot," plus good project management and change control.

A migration away from an essential 3000 application like Ecometry should include a solid estimate of the costs. "You can get through a project like this on time and on budget," Foster said. "We don't believe in the philosophy that you should bid low and change-control people to death. We like to be up front and say 'this is what it's going to take.' Some people may not like the number they see, but that's actually what it's going to take."

Pragmatism about processes is really a significant part of the way an Ecometry customer operates, anyway. Foster shared a story about retailers who still ship out catalogs. Really, here well into the 21st Century? "Of course," Foster said the retailer told him. "With a catalog I've got a half-hour of your time on the couch on a Saturday, without the distraction of the Web. Focused on my products for you." One catalog retailer, a needlepoint supply resource, says the majority of its customers are beyond age 65. "That company actually gets a lot of orders from the paper order form in the catalog," Foster said.  

Posted by Ron Seybold at 06:22 PM in Migration | Permalink | Comments (0)

March 18, 2014

Customizing apps keeps A500 serving sites

HP's A-Class 3000s aren't that powerful, and they're not as readily linked to extra storage. That's what the N-Class systems are designed to do. But at one service provider's shop, the A500 is plenty powerful enough to keep a client's company running on schedule, and within budget. The staying power comes from customization, that sticky factor which is helping some 3000s remain in service.

The A500 replaced a Series 987 about a year ago. That report is one point of proof that 9x7 systems are still being replaced. It's been almost two decades since the 9x7s were first sold, and more than 15 years since the last one was built. The service company, which wants to remain unnamed, had good experience with system durability from the 3000 line.

We host a group of companies that have been using our system for over 20 years. So, we’re planning on being around for a while. One of these customers may migrate to a Windows-based system over the next few years, but I anticipate that this will be a slow process, since we have customized their system for them over the years.

The client company's top brass wants to migrate, in order to get all of its IT onto a single computing environment. That'd be Windows. But without that corporate mandate to make the IT identical in every datacenter, the company would be happy staying with the 3000, rather than looking at eventual migration "in several years' time." It will not be the speed of the server that shuts down that company's use of an A500. It will be the distinction that MPE/iX represents.

There are many servers at a similar price tag, or even cheaper, which can outperform an A500. HP never compared the A-Class or N-Class systems to anything but other HP 3000s. By the numbers, HP's data sheet on the A-Class lineup lists the top-end of the A500s -- a two-CPU model with 200 MHz chips -- at five times the performance of those entry-level $2,000 A400s being offered on eBay (with no takers yet). The A500-200-200 tops out at 8GB of memory. But the chip inside that server is just a PA-8700, a version of PA-RISC that's two generations older than the ultimate PA chipset. HP stopped making PA-RISC chips altogether in 2009.

HP sold that 2-way A500 at a list price of just under $42,000 at the server's 2002 rollout. In contrast, those bottom-end A400s had a list price of about $16,000 each. Neither price point included drives or tape devices. Our columnist at the time, John Burke, reported on performance upgrades in the newer A-Class systems by saying:

There is considerable controversy in the field about the A-Class servers in particular, with many people claiming these low-end boxes have been so severely crippled (when compared to their non-crippled HP-UX brothers) as to make them useless for any but the smallest shops. Even if you accept HP's performance rating (and many people question its accuracy), the A400-100-110 is barely faster than the 10-year-old 928 that had become the de-facto low-end system.

I see these new A-Class systems as a tacit agreement by HP that it goofed with the initial systems.

The power of the iron is just a portion of the performance calculation, of course. The software's integration with the application, and access to the database and movement of files into and out of memory -- that's all been contributing to the 3000's reputation. "I’ve been working on the HP since 1984 and it’s such a workhorse!" said the service provider's senior analyst. "I've seen other companies that have gone from the 3000 to Windows-based systems, and I hear about performance issues."

Not all migrations to Windows-based ERP, for example, give up performance ground when leaving the 3000 field. We've heard good reports on Microsoft Dynamics GP, a mature set of applications that's been in the market for more than a decade. Another is IFS, which pioneered component-based ERP software with IFS Applications, now in its seventh generation.

One area where the newer products -- which are still making advances in capability, with new releases -- have to give ground to 3000 ERP is in customization. Whatever the ERP foundation might be at that service provider's client, the applications have grown to become a better fit to the business practices at that client company. ERP is a set of computing that thrives on customization. This might be the sector of the economy which will be among the last to turn away from the 3000 and MPE.

Posted by Ron Seybold at 10:52 AM in Homesteading, Migration, User Reports | Permalink | Comments (0)

March 17, 2014

Breaching the Future by Rolling Back

Corporate IT has some choices to make, and very soon. A piece of software that's essential to the world's business is heading for drastic changes, the kind that alter information resource values everywhere. Anyone with a computer older than three years has a good chance of being affected. What's about to happen will echo in the 3000 owner's memories.

Windows XP is about to ease out of Microsoft's support strategy. You can hardly visit a business that doesn't rely on this software -- about 30 percent of the world's Windows is still XP -- but no amount of warning from its vendor seems to be prying it off of tens of millions of desktops. On this score, it seems that the XP-using companies are as dug-in as many of the 3000 customers were in 2002. Or even 2004.

A friend of mine, long-steeped in IT, said he was advising somebody in his company about the state of these changes. "I'm getting a new PC," he told my pal. "But it's got Windows 8 on it. What should I do?" Of course, the fellow is asking this because Windows 8 behaves so differently from XP that it might as well be a foreign environment. Programs will run, many of them, but finding and starting them will be a snipe hunt for some users forced into Windows 8. The XP installations are so ubiquitous that IT managers are still trying to hunt them down.

The market sees this, knows all, and has found a solution. It won't keep Windows 8 from being shipped on new PCs. But the solutions will return the look and feel of the old software to the new Microsoft operating environment. One free solution is Classic Shell, which will take a user right back to the XP interface. Another simply returns the hijacked Start Button to a rightful place on new Windows 8 screens.

You can't make these kinds of changes in a vacuum, or even overnight. Microsoft has been warning and advising and rolling its deadlines backwards for several years now, but April 8 seems to be the real turning point. Except that it isn't, not completely. Like the 2006-2010 years for MPE and the 3000, the vendor is just changing the value of installed IT assets. It will be making them more expensive, and as time rolls on, less easy to maintain.

The expectation is that the security patches that Microsoft has been giving away for XP will no longer be free. There's no announcement, officially, about the "now you will pay for the patches" policy. Not like the one notice that HP delivered, rather quietly, back in 2012 for its enterprise servers. Security used to be an included value for HP's servers, but today any patch requires a support contract. 

Windows XP won't be any different by the time the summer arrives, but its security processes will have changed. Microsoft is figuring out how to be in two places at once: leading the parade away from XP and keeping customers from going rogue because XP is going to become less secure. The message is mixed, at the moment. A new deadline of 2015 has been announced for changes to the Microsoft Security Engine, MSE.

Cue the echoes of 2005, when HP decided that its five-year walk of the plank for MPE needed another two years' worth of plank. Here's Microsoft saying

Microsoft will continue to provide updates to their antimalware signatures and Microsoft Security Engine for Windows XP users through July 14, 2015. 

The extension, for enterprise users, applies to System Center Endpoint Protection, Forefront Client Security, Forefront Endpoint Protection and Windows Intune running on Windows XP. For consumers, this applies to Microsoft Security Essentials.

Security is essential, indeed. But the virus you might get exposed to in the summer of next year can be avoided with a migration. Perhaps over the next 16 months, a good share of those XP percentage points will have moved off the OS. If so, those shops will still be hoping they don't have to retrain their workforces. That's been a cost of migration that's difficult to measure, but very real for HP 3000 owners.

Classic Shell, or the $5 per copy Start 8, work to restore the interface to a familiar look and feel. One reviewer on ZDNet said the Classic Shell restores "the interface patterns that worked and that Microsoft took away for reasons unknown. In other words, Classic Start Menu is just like the Start Menu you know and love, only more customizable."

The last major migration the HP 3000 went through was from MPE V to MPE/XL, when the hardware took a leap into PA-RISC chipsets and 32-bit computing. Around that time, Taurus Software's Dave Elward created Chameleon, aimed at letting managers employ both the Classic and MPE/XL command interfaces. Because HP had done the heavy lifting of creating a Classic Mode for older software to run inside of MPE/XL, the interface became the subject of great interest.

But Chameleon had a very different mission from software like Classic Shell. The MPE software was a means to let customers emulate the then-new PA-RISC HP 3000 operating system, MPE/XL, on Classic MPE V systems. It was a way to move ahead into the future with a gentle, cautious step. The small steps Microsoft is resorting to -- a string of extensions -- introduce caution of a different style.

Like HP and the 3000, Microsoft keeps talking about what the end of XP will look like to a customer. There's one similarity. Microsoft, like HP, wants to continue to control the ownership and activation of XP even after the support period ends. 

"Windows XP can still be installed and activated after end of support on April 8," according to a story on the ZDNet website. The article quotes a Microsoft spokesperson as explaining, "Computers running Windows XP will still work, they just won’t receive any new security updates. Support of Windows XP ends on April 8, 2014, regardless of when you install the OS." And the popular XP Mode will still allow users with old XP apps to run them on Windows 7 Professional, Enterprise and Ultimate.

And just like people started to squirrel away the documentation and patches for the 3000 -- the latter software resulting in a cease-and-desist agreement last year -- XP users are tucking away the perfectly legal "professionals and developers" installer for XP's Service Pack 3, which is a self-contained downloadable executable.

"I've backed that up in the same place I've backed up all my other patch files and installers," said David Gerwitz of CBS Interactive, "and now, if I someday need it, I have it." These kinds of things start to go missing, or just nearly impossible to find, once a vendor decides its users need to move on.

Posted by Ron Seybold at 08:03 PM in History, Migration | Permalink | Comments (0)

March 13, 2014

Can COBOL flexibility, durability migrate?

In our report yesterday on the readiness for CHARON emulation at Cerro Wire, we learned that the keystone application at that 3000 shop began as the DeCarlo, Paternite, & Associates IBS/3000 suite. That software is built upon COBOL. But at Cerro Wire, the app's had lots of updating and customization and expansion. It's one example of how the 3000 COBOL environment keeps on branching out, in the hands of a veteran developer.

That advantage, as any migrating shop has learned, is offset by the difficulty of finding COBOL expertise ready to work on new platforms. Or of finding a COBOL that does as many things as the 3000's did, or does them in the same way.

OpenCOBOL and Micro Focus remain two of the favorite targets for 3000 COBOL migrations. The more robust a developer got with COBOL II on MPE, however, the more challenge they'll find in replicating all of that customization.

As an example, consider the use of COBOL II macros, or the advantage of COBOL preprocessors. The IBS software "used so many macros and copylibs that the standard COBOL compiler couldn't handle them," Terry Simpkins of Measurement Specialties reported a while back. So the IBS creators wrote a preprocessor for the COBOL compiler on the 3000. Migrating a solution like that one requires careful steps by IT managers. It helps that there are some advocates for migrating COBOL, and at least one good crossover compiler that understands the 3000's COBOL nuances.

Alan Yeo reminds us that one solution to the need for a macro preprocessor is AcuCOBOL. "It has it built in," he says. "Just set the HP COBOL II compatibility switch, and hey presto, it will handle the macros."

Yeo goes on to add that "Most of the people with good COBOL migration toolsets have created COBOL preprocessors to do just this when migrating HP COBOL to a variety of different COBOL compilers. You might just have to cross their palms with some silver, but you'll save yourself a fortune in effort." Transformix is among those vendors. AMXW supports the conversion of HP COBOL to Micro Focus as well as AcuCOBOL.

Those macros were a staple for 3000 applications built in the 1980s and 90s, and then maintained into the current century, some to this very day. One of the top minds in HP's language labs where COBOL II was born thinks of macros as challenges to migrations, though. Walter Murray has said

I tried to discourage the use of macros in HP COBOL II. They are not standard COBOL, and do not work the same, or don't work at all, on other compilers.  But nobody ever expects that they will be porting their COBOL. One can do some very powerful things with macros. I have no argument there.

COBOL II/iX processes macros in a separate pass using what was called MLPP, the Multi-Language PreProcessor.  As the name implies, it was envisioned as a preprocessor that could be used with any number of HP language products.  But I don't think it was used anywhere except COBOL II, and maybe the Assembler for the PA-RISC platform.

Jeff Kell, whose 3000 shutdown at the University of Tennessee at Chattanooga we've chronicled recently, said macros were a staple for his shop.

In moving to COBOL II we lived on macros. Using predictable data elements for IMAGE items, sets, keys, status words and so forth, we reduced IMAGE calls to simple macros with a minimum of parameters.

We also had a custom preprocessor. We had several large, modular programs with sequential source files containing various subprograms.  The preprocessor would track the filename, section, paragraph, last image call, and generate standard error handling that would output a nice "tombstone" identifying the point in the program where the error occurred.  It also handled terminal messages, warnings, and errors (you could put the text you wanted displayed into COBOL comments below the "macro" and it filled in code to generate a message catalog and calls to display the text).
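Kell's IMAGE wrappers are easy to picture. In HP COBOL II, a macro is declared with the $DEFINE preprocessor command, its formal parameters are referenced as !1, !2 and so on, and the definition ends at a # character. Here's a minimal sketch of the kind of macro he describes -- the data names are hypothetical, and the delimiter conventions are recalled from the COBOL II manuals, so verify them there before borrowing this:

    $DEFINE %CHAINGET=
         CALL "DBGET" USING !1, !2, MODE5, DB-STATUS-AREA,
                            !3, !4, DB-DUMMY-ARG
         IF DB-COND-WORD NOT = 0
              PERFORM 9999-IMAGE-ERROR#

MODE5 (IMAGE's chained-read mode), the status area, the dummy search argument, and the error paragraph are all declared once in WORKING-STORAGE, using the predictable names Kell mentions. A chained read of a detail dataset then collapses to a single line of the PROCEDURE DIVISION:

    %CHAINGET(ORDER-BASE#, ORDER-LINES#, LINE-ITEMS#, LINE-REC#)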

It's accepted as common wisdom that COBOL is yesterday's technology, even while it still runs today's mission critical business. How essential is that level of business? The US clearinghouse for all automated transfers between banks is built upon COBOL. But if your outlook for the future is, as one 3000 vet said, a staff pool of "no new blood for COBOL, IMAGE, or VPlus," then moving COBOL becomes a solid first step in a migration. Just ensure there's enough capability on the target COBOL to embrace what that 3000 application -- like the one at Cerro Wire -- has been doing for years.

Posted by Ron Seybold at 07:38 PM in Migration, User Reports | Permalink | Comments (0)

March 06, 2014

Reducing the Costs in a Major MS Migration

Look all around your world, anywhere, and you'll see XP. Windows XP, of course, an operating system that Microsoft is serious about obsoleting in a month. That doesn't seem to deter the world from continuing to use it, though. XP is like MPE. Where it's installed, it's working. And getting it out of service, replacing it with the next generation, has serious costs. It will remind a system manager of replacing a 3000, in the aggregate. Not as much per PC. But together, a significant migration cost.

The real challenge lies in needed upgrades to all the other software installed on the Windows PCs.

There's a way to keep down the costs related to this switch. MB Foster reminded us that they've got a means to improve the connection to the 3000 from updated Windows PCs.

Microsoft will end support for Windows XP on April 8, 2014. MB Foster has noticed companies moving to Windows 7/8 with an eye toward leveraging 64-bit architectures, reducing risks and standardizing on a currently supported operating system.

As an authorized reseller of Attachmate's Reflection terminal emulation software, we advise you that now is the time to seize the opportunity and minimize risks -- and get the most out of your IT investments.

The key to keeping down these costs is something called a Volume Purchase Agreement. It's an ownership license that HP 3000 shops may not have employed up to now, but its terms have improved. MB Foster's been selling and supporting Reflection ever since the product was called PC2622, and ran from the DOS prompt. Over those three decades, the company estimates it's been responsible for a million or more desktops during the PC boom, when 3000 owners were heavy into another kind of migration: replacement of HP2392 hardwired terminals. "Today, we are responsible for the management and maintenance of approximately 50,000 desktops," Foster's Accounts Manager Chris Whitehead said.

Upgrading Reflection is a natural step in the migration away from Windows XP. "We recommend upgrading terminal emulation software to Windows 7/8 compatible versions," said Whitehead. "As your partner we can make it easy and convenient to administrate licenses, reduce year over year costs, secure a lower price per unit and for your company to gain some amazing efficiencies."

For a site that has individual licenses of Reflection or a competing product, there's an opportunity to move into a Volume Purchase Agreement (VPA). The minimum entry is now only 10 units, Whitehead explains.

Years ago, when the product was sold by WRQ, the minimum for a Reflection limited site license was 25 units. Then it went to a point system. Now a minimum of 10 units is all that's required for a Volume Purchase Agreement. The VPA provides a mechanism for maintaining licenses on an annual basis -- meaning free upgrades and support. It also provides price protection, typically giving a client a lower price per unit compared to a single-unit purchase. And the VPA allows you to transition from one flavor of Reflection to another -- going from Reflection for HP to Reflection for Unix, for example -- at a lower cost.

If a site is already a Volume Purchase Agreement (VPA) customer but the agreement hasn't been maintained, Whitehead suggests reactivating it. During the reactivation process you can

• Consolidate and upgrade licenses into one or more standardized solutions

• Surrender / retire licenses no longer needed or required

• Trade in competing products

• Only maintain the licenses needed

Details are available from Whitehead at cwhitehead@mbfoster.com.

Posted by Ron Seybold at 08:35 PM in Homesteading, Migration | Permalink | Comments (0)

February 27, 2014

Unix-Integrity business keeps falling at HP

Numbers reported by Hewlett-Packard for its just-ended quarter show the company's making something of a rebound in some areas. One analyst said to CEO Meg Whitman that she'd been at the helm of the company for three-and-a-half years, and she had to correct him during the financial briefing last week.

"Actually, I've been here two-and-a-half years," Whitman said. "Sometimes it feels like three-and-a-half, but I've been here two-and-a-half years."

It's been a long 30 months with many changes for the vendor, which still offers migration solutions to 3000 customers making a transition. But one thing that hasn't changed a bit is the trajectory of the company's Unix server business. Just as they have in each of the previous six quarters, sales and profits from Business Critical Systems fell. Once again, the BCS combination of Integrity and HP-UX reported a sales decline upwards of 15 percent from the prior fiscal year's quarter. This time revenue came in 25 percent lower than Q1 of 2013. That makes 2014 the fourth straight year in which BCS numbers have been toted up as lower.

"We continued to see revenue declines in business-critical systems," Whitman said. Only the Enterprise Group servers based on industry standards -- HP calls them ISS, running Windows or Linux -- have been able to stay out of the Unix vortex.

"We do think revenue growth is possible through the remainder of the year on the enterprise [systems] group," Whitman said. "We saw good traction in ISS. We still have a BCS drag on the portfolio, and that's going to continue for the foreseeable future."

In a small victory among the runaway slide of HP-UX and Integrity sales, Whitman predicted that HP will pick up two points of market share in the business critical system marketplace.

"Listen, we are turning the enterprise group around," Whitman said. "You can see it in the success in ISS revenues, as well as networking and storage. We've still got more work to do on the margins. When you consider the significant headwind of the declining BCS business, the technology services operating profit performance was strong. Business critical systems continues to be impacted by a declining Unix market. BCS revenue declined 25 percent year-over-year, to $228 million."

As a marker of how small a slice that's become at HP, consider that the profits alone from HP's lending operations were more than $100 million. And ISS revenues, at $3.2 billion, are roughly 14 times the size of Integrity's.

Total HP revenues for the quarter were $28.2 billion, down 0.7 percent year-over-year and up 0.3 percent in constant currency. HP's been stuck on $28 billion quarters since 2013. Whitman said the company has been in pivot mode "to the new style of intellectual property, around investment in innovation."

I think we've been hard at work on doing a lot of things that are going to position us as this industry continues to go through some very challenging changes. The pace of change and the magnitude of change here is as great as I've seen in my career. I think we're reasonably well positioned to take advantage of those changes.

Changes in business are dictating new outlooks for older businesses at HP. It's always been that way at the vendor which cut off its 3000 futures during a post-merger closeout of product lines.

"We have businesses that are declining businesses," Whitman told the analysts, who were sometimes complimentary of where she's leading the company over the two-plus years. "We understand where the declining businesses are, we understand what we need to do with them. We've got businesses that are holding in terms of revenue, and then we've got growth businesses."

What's growing at HP will be getting whatever investment and energy the company can manage. "We have pivoted investment," Whitman said. "We've pivoted people. We've pivoted go-to-market to those growth areas in the company."

Posted by Ron Seybold at 08:05 PM in Migration, News Outta HP | Permalink | Comments (0)

February 20, 2014

Migration best practices: Budget and plan as if taking a business vacation

VacationIs a migration as much fun as a vacation? That seems like an easy question for the HP 3000 homesteader who's still got a transition in their future. Only a small percentage of the managers of these servers plan to homestead forever. For the rest of the installed base, this transition is a matter of when, rather than if.

With its feet in both camps of homesteading and migration, MB Foster held a webinar yesterday that delivered best practices for the CIO, IT director, or even the systems and programming manager who faces the someday of moving away. When an organization with the tenure of the University of Tennessee at Chattanooga shuts down its servers -- after 37 years of service -- it might be evidence that migration is an eventuality for most sites, and at least a possibility for the rest.

For those that still have that mighty project on their futures calendars, the advice from Foster mirrors things like home remodeling and vacation planning. 

"This is a business decision, not a technical decision," CEO Birket Foster has always said, in delivering these practices over more than a decade. "A migration’s just like a vacation –- the more you plan, the less it costs, and the better the results." Perhaps the comparison might align with the concept of taking a business vacation. That's the sort where you tack on a few extra days to a business trip, and carry along the same set of bags while you go further.

MB Foster's eight-step process takes HP 3000 customers through migration with in-depth planning and expertise. A key piece is understanding the business and technical baselines, as well as an assessment of the business and technical goals of the migrating company. The results of the assessment form a plan presented in a one-day Executive-Level Workshop which highlights the major issues and recommendations of migration.

The work takes place side-by-side with a customer's IT staff, producing a complete evaluation of a company's data environment. The company's experts put data in flight with Data Blueprints, Software Selection, and Staffing Plans.

"What we've found in best practices is that you should do a skills matrix, for both the baseline and target," Foster advises. "When you go to put in a new application, you'll have to install and configure it, have a changed end-user workflow, and perhaps changes in the skills required to do this. You'll have changes in IT operations, and there may be application program interfaces that need configuration, so they talk to the normal systems."

You assess, plan and implement, Foster says. The assessment breaks across two scopes of responsibility. "From our best practices we know you have to establish two baselines," Foster says. "One's on the technical side, and the other is on the business side. When people establish a technical baseline, they sometimes fail to go through and check with the business side folks. You must ask them if they had a magic wand, what they want the application to do differently than the way it behaves today."

You measure your total cost of ownership over five years to get a true budget for a migration. The objective is to mitigate risks, but do not assume you'll migrate your existing applications: 80 percent of the customers following best practices buy a replacement application off the shelf, rather than pour money into re-engineering existing code. Software selection uses the same assessment framework as the rest of the migration process. Look at three to seven packages. Some surround code -- the sorts of programs that aid an MPE app -- might be taken along or re-integrated.

Cleaning data is key to a successful implementation of any migration. That can mean partnering with data experts, especially ones who know your current as well as your future database environments. Your data migration plan should settle how much data to keep, whether the 3000's data will be part of your main application, and how much cleaning is required to go live.

The best practices mirror the weeks and months that you'd spend before taking a trip. With travel, this can be a way of experiencing the journey before ever leaving home. With migrations, it's a way of visualizing and budgeting for the waypoints that will get a shop onto a new application while not interrupting business flow.

"Like when you say you'll go on a trip, at first it's just a concept," Foster said. "You figure out if you're driving, or going by plane. You figure where you're staying, who am I meeting. Those all have more detail, but at the high level it's assess, plan and implement."

Posted by Ron Seybold at 06:59 PM in Migration | Permalink | Comments (0)

February 19, 2014

Don't forget: Migration Best Practices today

MB Foster's kicking off its season of Webinars -- the 13th year of showing off details of best practices for 3000 operations, strategy and transitions -- using slide summaries, a presentation, and interactive Q&A and chat features. The event is this afternoon at 2 PM Eastern.

Today's meeting, which requires a commitment of under an hour, is all about app migrations, modernizations, and the budgeting that's worked for their clients over the last decade. You can sign up for the free experience that provides an online chat room, slides with the salient points, Q&A exchange via standard phone or IP voice, as well as Foster's expertise. The company specializes in application migrations -- the first step in the ultimate transition in a 3000-based datacenter.

Posted by Ron Seybold at 06:25 AM in Migration | Permalink | Comments (1)

February 13, 2014

App modernization gets budget-sleek look

Transitions are still in the future for HP 3000 shops in your community. It might not have made sense to switch platforms in 2003 (to nearly everybody) or in 2008 (when HP's labs closed, but the 3000 remained online) or even in 2011 (when HP ended all of its support, and indie support firms stepped up).

But by 2014, some shops will be considering how to budget for the biggest transformation project they've ever encountered. Pulling out a CRM, ERP or even a manufacturing system, honed over decades, to shift to commodity hardware is a major undertaking. But it's been going on for so long that there are best practices out there, and one vendor is going to share the best of the best next week.

For some US companies, Monday is a holiday, so it'd be easy to let Wednesday sneak by without remembering it's Webinar Wednesday at MB Foster. The first show of the new year is all about app migrations, modernizations, and the budgeting that's worked for their clients over the last decade. It's a 2PM Eastern start for the interactive presentation February 19. You can sign up for the free experience that provides an online chat room, slides with the salient points, Q&A exchange via standard phone or IP voice, as well as Foster's expertise. The company says that it specializes in application migrations -- the first step in the ultimate transition in a 3000-based datacenter.

The vendor is one of the four original HP Platinum Migration partners, and since 2003 MB Foster's been acquiring experience in transitioning apps written in HP's COBOL and in Powerhouse, apps employing Suprtool, and apps whose interface is driven by VPlus. "We've been working with entire ecosystems and integrating them with the database of your choice," CEO Birket Foster says, "along with integrating the complete application environment covering external application interfaces, database interfaces, JCL, scheduler and other pieces that complete the environment."

Our clients, who have had us assist in their migrations, appreciate the experience and guidance the MB Foster team brings to the table. In every case there have been advantages to making the transition to Windows, Linux, or Unix in terms of:

1) Reduced risk from aging hardware

2) Reduced cost of supporting the server environment and

3) Easier access to, training of, and hiring of application programmers and operations personnel who know the new application environment well enough to support it properly.

Foster promises that the 45 minutes around midday in the middle of next week -- a shorter workweek for some -- will deliver a "thought-provoking synopsis for your senior management team to invest in the right IT solutions for your business."

Posted by Ron Seybold at 07:54 PM in Migration | Permalink | Comments (0)

February 12, 2014

How Shaved Sheep Help Macs Link to 3000s

The HP 3000 never represented a significant share of the number of business servers installed around the world. When the system's census peaked at about 50,000, that was less than a tenth of the count of Digital servers, or of IBM System/36s and /38s. Not to mention all of the Unix servers, or the Windows machines that began to run businesses in the 1990s.

If we're honest, the 3000 could be considered to have had the footprint in the IT world that the Macintosh has in the PC community. Actually, far less, considering that about 1 in 20 laptops and desktops runs Apple's OS today. Nevertheless, the HP 3000 community never considered Macs a serious business client to communicate with the 3000. The desktops were full of Windows machines, and MS-DOS before that. Walker, Richer & Quinn, Tymlabs, and Minisoft took the customers into client-server waters. All three had Mac versions of their terminal emulators. But only one, from Minisoft, has survived to remain on sale today.

That would be Minisoft 92 for the Mac, and Doug Greenup at Minisoft will be glad to tell a 3000 shop that needs Mac-to-3000 connectivity how well it hits the mark, right up to support for the newest 10.9 version of OS X. "Minisoft has a Macintosh version that supports the Mavericks OS," Greenup said. "Yes, we went to the effort to support the latest and greatest Apple OS."

But there were also fans of WRQ's Reflection for Mac while it was being sold, and for good reason. The developer of the software came to WRQ from Tymlabs, a company that was among the earliest to adopt Apple machines for running its business, all while understanding the 3000 was the main server. The first time I met anyone from Tymlabs -- much better known as the vendor of the BackPack backup program -- Marion Winik was sitting in front of an Apple Lisa, the precursor to the Mac. The advertising was being designed by the woman who's now a celebrated essayist and memoir writer.

What's all that got to do with a sheep, then? That WRQ 3000 terminal emulator for the Mac ran well, executing the classic Reflection scripting, but Apple's jump to OS X left the product behind. So if you want to run a copy of Reflection for Mac, you need to emulate a vintage Mac. That doesn't require much Apple hardware. Mostly, you need SheepShaver, software whose name was coined to mimic the word shape-shifter -- because SheepShaver mimics many operating environments. The emulation is of the old Mac OS, though. It's quite the trick to make a current-day Intel machine behave like a computer built around Apple's old PowerPC chips. About the same caliber of trick as making programs written in the 1980s for MPE V run on Intel-based systems today. The future of carry-forward computing is virtualization, rooted in software. But it's the loyalty and ardor that fuel the value of such classics as the 3000, or 1990-2006 Macs.

Barry Lake of Allegro took note of SheepShaver as a solution to how to get Reflection for Mac to talk to an HP 3000. The question came from another 3000 vet, Mark Ranft.

I've been looking for a copy of Reflection for Mac.  It is no longer available from WRQ/Attachmate. I've looked for old copies on eBay without any luck.  Does anyone know where a copy may be available, and will it still run on OSX Mavericks (10.9)?

Lake replied

It was possible to run the "Classic" versions of Reflection under OS X up through Tiger (10.4). Sadly, Apple dropped Classic support in Leopard (10.5). The only way to run Classic apps now is in some sort of virtual environment. I've been doing this for many years, and quite happily so, using SheepShaver.

But you have to find a copy of the old Mac OS ROM somewhere, and have media (optical or digital) containing a Classic version of Mac OS.

As with so many things that were once sold and supported, the OS ROM can be had on the Web by following that link above. That Mac OS ROM "was sort of a 'mini operating system' that was embedded in all the old Macs, one which acted as an interface between the hardware and the OS," Lake explains. "It allowed a standard OS to be shipped which could run on various different physical machines.

Modern operating systems simply ship with hundreds of drivers -- most of which are never used -- so that the OS (be it Windows or Linux or even Mac OS X) is able to run on whatever hardware it happens to find itself on. But this, of course, has resulted in enormous bloat, so the operating systems now require gigabytes of storage even for a basic installation.

The beauty of the old Mac OS ROM is that the ROM was customized for each machine model, so that endless drivers didn't have to be included in the OS, and therefore the OS could be kept small and lean.

Lake said that although using SheepShaver to run the favorite 3000 terminal emulator "took a modest effort to set up, it has been working beautifully for me for years. And yes, it works on the Intel Macs (the PowerPC instruction set is emulated, of course)."

So here's an open source PowerPC Apple Macintosh emulator. Using SheepShaver (along with the appropriate ROM image) it is possible to emulate a PowerPC Macintosh computer capable of running Mac OS 7.5.2 through 9.0.4. Builds of SheepShaver are available for Mac OS X, Windows, and Linux.
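For anyone tempted to retrace Lake's setup, SheepShaver is configured through a plain-text preferences file (it shares the format with its sibling emulator, BasiliskII). Here's a minimal sketch with placeholder paths; the key names are recalled from the project's documentation, so verify them against a current build:

    rom /emul/Mac_OS_ROM
    disk /emul/macos9.dsk
    ramsize 67108864
    screen win/1024/768

The rom line points at the ROM image Lake describes, disk names a bootable image holding the Classic OS plus Reflection, ramsize sets the guest's memory in bytes (64 MB here), and screen requests a windowed 1024x768 display.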

Posted by Ron Seybold at 08:56 PM in History, Migration, Web Resources | Permalink | Comments (0)

February 05, 2014

3000 emulators moving ahead on Windows

Changes to the most dominant computer environment on the planet, Windows, as well as a reach backward to the days of a surging client-server strategy, have sparked research and solutions for next-generation HP 3000 emulation.

We're not talking about emulating the 3000 hardware. Stromasys CHARON HPA/3000 is the tool for that. The subject here is getting a traditional HP 3000 application screen to display on what we once called desktop PCs. Now they're mostly laptops, but at their essence they are smart clients, linked to servers. WRQ did the biggest trade in this kind of tool, selling hundreds of thousands of copies of Reflection over the years.

MB Foster is reminding 3000 customers there's a migration coming for those desktop environments running Windows. The firm has been a supplier of the Reflection line of emulators and connectivity software since the 1980s. In a few months, Microsoft will be pulling its XP version of the desktop OS out of security patching status. XP won't stop working, not any more than MPE/iX did when HP stopped patching it. But running a company with XP-based PCs, attached to 3000s, is asking for a lot of blind luck when it comes to patching for trouble. Much more luck will be needed for the PCs, a situation which is leading Foster to remind users about upgrading Reflection for the future.

Attachmate acquired WRQ years ago, but the Reflection brand lives on in the combined corporation. On April 8, when the XP patches end, things get more risky for the company that hasn't migrated to Windows 7 or 8. MB Foster wants to help with this aspect of that migration, too.

If you are using Attachmate’s Reflection product line, it will continue to be supported based on the Product’s Support Lifecycle. However, Attachmate’s ability to resolve issues related to Windows XP will be limited after April 8, 2014.

MB Foster has already noticed companies moving to Windows 7/8 with an eye toward leveraging 64-bit architectures, reducing risks, and standardizing on a supported operating system. To minimize risks even more, Attachmate and MB Foster recommend upgrading retired or discontinued versions of Attachmate software.

Reflection was offered in many sorts of volume bundles during its lifespan at WRQ. It was the most widely-installed software program, in numbers, that ever served HP 3000 owners. The discounts for volume licensing can be reactivated, as can support for older versions of Reflection.

For customers and clients who need support for older versions, Attachmate is offering an Extended Support Plan. For those who don’t have a maintained Volume Purchase Agreement, MB Foster is taking this opportunity to encourage customers to pre-plan, pre-budget, and to take advantage of product fixes, enhancements and security updates for Reflection.

The Microsoft migration away from XP -- like HP's from MPE/iX -- has "its undertones and challenges." Finding an XP client system in production is easy these days, but one that's also serving an HP 3000 may be more elusive. MB Foster is quoting reactivations of the Volume Purchase Agreement -- the least costly way to get a lot of Windows PCs updated. Customers who have a VPA number should have it at the ready, the vendor says.

Posted by Ron Seybold at 08:06 PM in Migration | Permalink | Comments (0)

February 04, 2014

Making Domain Magic, at an Efficient Cost

Five years ago, HP cancelled work on the DNS domain name services for MPE/iX. Not a lot of people were relying on the 3000 to be handling their Internet hosting, but the HP decision to leave people on their own for domain management sealed the deal. If ever there was something to be migrated, it was DNS.

But configuring DNS software on a host is just one part of the Internet tasks that a 3000-savvy manager has had to pick up. One of the most veteran of MPE software creators, Steve Cooper of Allegro, had to work out a fresh strategy to get DNS service for his company's domains, he reports.

We have been using Zerigo as our DNS hosting service for a number of years now, quite happily.  For the 31 domains that we care for, they have been charging us $39 per year, and our current year has been pre-paid through 2014-08-07.

 We received an e-mail explaining exciting news about how their service will soon be better-than-ever.  And, how there will be a slight increase in costs, as a result.  Instead of $39 per year, they will now charge $63 per month. A mere 1900% increase!  And, they won't honor our existing contract either.  They will take the pro-rated value of our contract on January 31, and apply that towards their new rates.  (I don't even think that's legal.)

 In any case, we are clearly in the market for a new DNS Hosting provider. Although I am not a fan of GoDaddy, their website, or their commercials, they appear to offer a premium DNS Hosting service, with DNSSEC, unlimited domains, etc. for just $2.99 per month.  Sounds too good to be true.

Cooper was searching for experience with that particular GoDaddy service. GoDaddy has been a default up to now, but acquiring a domain seems to need more tech savvy from support. The 3000 community was glad to help this other kind of migration, one to an infrastructure that MPE never demanded. The solution turned out to be one from the Southern Hemisphere, from a company whose hub is in a country which HP 3000 experts Jeanette and Ken Nutsford call home.

Cooper said that some 3000 vets suggested "rolling my own" -- self-hosting his external DNS. Here are a few paragraphs addressing those two topics:

We have a dual-zoned DNS server inside our firewall, but we do not have it opened to the outside world.  Instead, only our DNS hosting service has access to it.  The DNS hosting service sees itself as a Slave server and our internal server as the Master server.  However, our registrars point to that external DNS hosting service, not our internal server, so the world only interrogates our DNS hosting service when they need to resolve an address in one of our 31 domains.

 Why don't we open it up to the world?  Well, we get between 200,000 and 3,000,000 DNS lookups per month.  I don't want that traffic on our internal network.  There are also DDoS attacks and other exploits that I want no part of.  And, since some of our servers are now in the Cloud, such as our mail, webserver, and iAdmin server, I don't want to appear to disappear, if our internet connection is down.  Best to offload all of that, to a company prepared to handle that.

When I need to make a change, I do it on our internal DNS server, and within a few seconds, those changes have propagated to our DNS hosting service, without the need for any special action.  The best of both worlds.

 Now, on to the issue from earlier in the month.  Our DNS hosting service, Zerigo,  announced that they were raising rates by 1900%.  And, our first attempt at a replacement was GoDaddy.  Although the information pages at GoDaddy sounded promising, they made us pay before we could do any testing. After three days of trying to get it to work, and several lengthy calls to GoDaddy support, they finally agreed that their service is broken, and they can't do what they advertised, and refunded our money.

The biggest problem at GoDaddy is that I (as the customer) was only allowed to talk to Customer Service.  They in turn, could talk to the lab people who could understand my questions and problems.  But the lab folks were not allowed to talk to me, only the Customer Service people.  This is not a way to do support, as those of us in the support business know full well.

 After more research, I hit upon what appears to be a gem of a company: Zonomi. They are a New Zealand-based company with DNS servers in New York, Texas, New Zealand, and the UK.  And, they let you set up everything and run with it for a month before you have to pay them anything. We were completely switched over with about an hour of effort.

 Now, the best news: they are even cheaper than our old DNS hosting service used to be.  If you have a single, simple domain, then they will host you for free, forever.  If you have a more complex setup, as we do, the cost is roughly US $1 per year, which beats the $63 per month Zerigo wanted to charge. The first ten domains cost $10 per year, then you add units of five more domains for $5 per year.

 The only risk I can see is if they go out of business.  In that case, I could just open our firewall and point our domains to our internal server, until I could find a replacement.  So, that seems reasonable.

 That problem is solved.  On to the next fire.
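The hidden-master arrangement Cooper describes is straightforward to express in BIND. Here's a minimal named.conf sketch under assumed names and addresses -- the zone, file paths, and IPs are placeholders, and a real hosting service would publish its own transfer and notify addresses:

    // On the internal master, kept behind the firewall:
    zone "example.com" {
        type master;
        file "db.example.com";
        allow-transfer { 203.0.113.53; };  // only the hosting service may pull the zone
        also-notify { 203.0.113.53; };     // push a NOTIFY so edits propagate in seconds
    };

    // On the hosting service's public servers:
    zone "example.com" {
        type slave;
        masters { 198.51.100.10; };        // the internal server, reachable only by the host
        file "sec/db.example.com";
    };

Because the registrar's NS records name only the hosting service, the world's lookups -- and any DDoS aimed at the zone -- land on the provider's servers rather than the internal network.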

Posted by Ron Seybold at 06:17 PM in Migration, User Reports, Web Resources | Permalink | Comments (0)

January 29, 2014

University learns to live off of the MPE grid

One of the most forward-looking pioneers of the HP 3000 community shut off its servers last month, ending a 37-year run of service. The University of Tennessee at Chattanooga IT staff, including its networking maven Jeff Kell, has switched over fully to Linux-based computing and an off-the-shelf application.

UTC, as Kell and his crew call the school, has beefed up its server count by a factor of more than 10:1 as a byproduct of its transition. This kind of sea change is not unusual for a migration to Unix and Oracle solutions. HP 3000s tend to be single-server installations, or multiples only in very large configurations. But to get to a count of 43 servers, IT architects have to rethink the idea of a server (sometimes it's just a blade in an enclosure) and often limit each server to exclusive tasks.

After decades of custom-crafted applications, UTC is running fully "on Banner, which has been SunGard in the past," Kell said. "I believe it's now called Ellucian. They keep getting bought out." But despite the changes, the new applications are getting the same jobs done that the HP 3000s performed since the 1970s.

It's Linux / Oracle replacing it.  The configuration was originally Dell servers (a lot of them), but most of it is virtualized on ESXi/vCenter, fed by a large EMC SAN. They got some server hardware refreshed recently, and got Cisco UCS blade servers.  I'm sure they're well into seven figures on the replacement hardware and software alone. I've lost count of how many people they have on staff for the care and feeding of it all. It's way more than our old 3000 crew, which was basically six people.

The heyday of the HP 3000 lasted until about 2009 or so, when UTC got all of the Banner applications up and running, Kell reports. Banner -- well, Ellucian -- has many modules. Like a lot of migrating sites that have chosen replacement software off the shelf, the transition was a stepwise affair.

The 3000 was still pretty heavily used until 3-4 years ago, when they got all of Banner up and running.  The 3000 continued to do some batch transfers, and our Identity Management.  The 3000 was the "authoritative" source of demographics and user accounts, but they are now using Novell's Identity management -- which bridges Banner/Oracle with Active Directory, faculty/staff in Exchange, and the student accounts in Google Mail.

We had dedicated 3000s in the past for Academics. They later jumped on an IBM system, then Solaris for a time, and now Linux. We also had one for our Library catalog and circulation (running VTLS software), but they later jumped on the Oracle bandwagon. More recently, the whole module for the library has been outsourced to a cloud service. 

Even in the days when the 3000 ruled at UTC, there were steps in a transition. "I think we peaked at seven 3000s briefly while we were in transition -- the days we were moving from our old Classic HP 3000 hardware to the then-new Series 950 RISC systems," Kell said. "After the delays in the 3000 RISC system deliveries and the promises HP had made, they loaned us a Series 52 and a 58, so we could keep pace with production while waiting on PA-RISC."

Posted by Ron Seybold at 07:49 PM in Migration, User Reports | Permalink | Comments (0)