August 29, 2012
What's Overlooked or Lost: Test Disciplines
Whether a 3000 customer is hanging in there for good business reasons, or heading off to another platform, they all need testing skills. Retirements and workforce reductions contribute to the loss of those disciplines. One advantage of making a migration is a refreshed demand for testing. After all, changing environments means measuring the effects of those changes.
A recent online presentation about migration practices revealed how badly testing tends to be underestimated. Figure on three times the testing resources you'd initially budget, experts say. About a third of the cost and time of any significant change to applications or an environment should go into testing. The lucky part of that costly equation is that on enterprise systems, at least, you can work to replicate bugs.
Touchpad interface developers are not so lucky. Give a user an infinite number of ways to touch a screen, swipe it, pinch it or tap it. Then when an app crashes, try to replicate the exact combination of user interface actions. Testing, says Allegro's co-founder Steve Cooper, is much more complex in that world of BYOD apps.
Complex testing is an artifact of the rising art of systems. Where the HP 3000 could guarantee that programs written in 1978 would still run in 2008, a 2010 iPad cannot run software built just two years later. Testing is costly, and remaining in place on a platform like the 3000, for business reasons, reduces the need to do it. When you do go, an HP 3000 expert might be out of their depth juggling development tools like Java, Ruby or even iOS -- but some of those veterans know testing disciplines better than any recent graduate, offshore, or near-shore programmer. You have a test which you execute and watch fail before something is fixed. "And then it won't, after it's fixed, and you add that to the regression test suite," Cooper says. "That means you can tell if you ever accidentally introduce that bug back again."
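The cycle Cooper describes -- watch a test fail, fix the bug, watch it pass, then keep the test forever -- can be sketched in a few lines of Python. The pricing routine and its past bug here are hypothetical, invented purely to illustrate the discipline:

```python
import unittest

# Hypothetical example: suppose order_total() once ignored the
# discount argument entirely (the original bug). The test below
# failed, the bug was fixed, and the test now stays in the suite
# so the bug cannot quietly be reintroduced.

def order_total(quantity, unit_price, discount=0.0):
    """Return an order total with the discount applied (fixed behavior)."""
    subtotal = quantity * unit_price
    return round(subtotal * (1.0 - discount), 2)

class RegressionTests(unittest.TestCase):
    def test_discount_is_applied(self):
        # Before the fix this returned 100.00; after the fix, 75.00.
        self.assertEqual(order_total(10, 10.00, discount=0.25), 75.00)

    def test_zero_quantity(self):
        self.assertEqual(order_total(0, 19.99, discount=0.10), 0.0)

# Run the suite with:  python -m unittest this_file.py
```

Run against the buggy version, the suite fails; after the fix it passes, and any future change that reintroduces the bug fails immediately -- exactly the signal Cooper describes.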
There's a lot to know about testing, and some of the methods don't involve higher technology tools at all. One major 3000 migration at a Fortune 500-sized insurance firm succeeded because the users' possible interactions were recorded, one screen at a time, in Microsoft Word documents. The magic there was the ability to analyze human behaviors against the potential of each program. That can happen more quickly with someone trained as a programmer-analyst, or a P/A as they were called in the earliest days of the HP 3000.
Analysis skills are not in vogue like creativity skills or the magic of mobile among the tech workforce. Everybody wants to be a creator, and there seems to be no limit to the size of such teams. The number of people involved in creating a program tells a lot about the ability to test it. Cooper estimated that perhaps three people were working on HP's IMAGE team, from what he recalls hearing in the '70s. All of MPE was at first maintained by five people. No more than three developers maintained the 3000's systems language, SPL.
Fewer voices made more robust systems in those legendary times. The accepted wisdom was that three people arguing in a room over how to create a robust program was about the right number. There is certainly a lower number of IT pros who can perform the higher-order thinking of test disciplines. That number will not necessarily improve just because the next systems and programs are more popular. Auditors aren't popular either, but the real money in enterprise computing flows through them and their tests.