
April 16, 2018

How many 3000s are out there?

It's a reasonable question, one whose answer gets pursued by homesteaders and migrators alike. How many of those computers are still running out there? That's the question asked by vendors who aren't familiar with the 3000. From another voice, the query sounds like "How many of us are left by now?"

We heard the question from a migration services company and thought we would ask around a bit. The range of estimates is wide, and unless you're reading from a client list, the calculation of how many systems is a guess based on whatever activity you've seen. Sales of used systems to companies would be one way of measuring such activity. Support contracts would offer another data point. Customers currently paying for support of apps might be a third.

From Steve Suraci at Pivital Solutions, the estimate is 500 active servers in production use, and at least that many more serving some sort of historical purpose. In between those two groups might lie hot spares or Disaster Recovery servers. If a system is mission-critical enough to have a hot spare, it's probably going to be one of the last to be mothballed whenever MPE goes dark altogether.

Some of the mystery comes from the fact that 3000s are running all across the world. We've reached some North American community providers, but Europe, the Middle East, and Asia are beyond our reach. The numbers in this story reflect North American activity.

Starting with that low end of 1,000-plus systems, Steve Cooper of Allegro estimates 300 to 1,000 active servers. He adds that his number includes both real and emulated systems, acknowledging the role that the Stromasys Charon HPA emulator is playing. Another 3000 veteran at Allegro, Donna Hofmeister, estimates up to 2,000 active systems, "but that seems a bit optimistic to me," Cooper adds.

Another data point on this chart came in from a software vendor with widespread coverage. Database utility supplier Adager's CEO Rene Woc is willing to take the estimate to 1,000-3,000 servers. He refers to the active system count as servers "under commission."

As with many things, the answer depends a lot on the definition of what’s “an HP 3000”: a system in production, a system as DR, a system in storage for spare parts, systems used for hosting multiple organizations, emulator systems, systems for archival purposes, etc.

We also asked Woc about his guess at the size of the 3000 community. This would be the number of humans who know of and care for 3000s and MPE/iX servers. "The same comment about the definition applies to the 'size of the community,'" he said, "namely, how many persons per site, and how many persons are still interested in participating in the HP3000-L and other mailing lists but do not use HP 3000s any more? That’s a tougher question to answer."

We can report there are almost 600 subscribers to the HP3000-L mailing list. Those names can include active suppliers of support, customers still driving machines in production, retired 3000 fans, consultants who could hop in on a 3000 migration or renovation if needed, and more.

The definitive numbers for machine count and community census have never been confirmed. Ideal Computer says it's supporting 89 HP 3000s, including one outside of the US. That's also an encouraging number for anyone who's got an eye on the upper end of these estimates. The idea that one company could be supporting close to one-third of the world's active 3000s? That would only be true if there were about 300 HP 3000s running today.

You can mark us down for 3,000 of the 3000s. Change happens more slowly than we predict or expect. It's been more than 14 years since HP last built one of these physical servers. Well-built environments like MPE/iX and HP's iron have lifespans that exceed expectations. And as for the lifespan of MPE/iX, the Stromasys emulator gives the populace of 3000 users a way to extend it beyond the hardware itself.

04:56 PM in Homesteading | Permalink | Comments (0)

April 13, 2018

Fine-Tune: Net config file care and feeding

I’m replacing my Model 10 array with a Model 20 on MPEXL_SYSTEM_VOLUME_SET, so it'll require a reinstall. What’s the best way to reinstate my network config files? Just restore NMCONFIG and NPCONFIG? I'm hoping I can use my old CSLT to re-add all my old non-Nike drives and mod the product IDs in Sysgen—or do I have to add them manually after using the factory SLT?

Gilles Schipper replies:

Do the following steps:
- Use your CSLT to install onto LDEV 1.
- Modify your I/O configuration to reflect the new or changed devices.
- Reboot.
- Use VOLUTIL to add the non-LDEV 1 volumes appropriately.
- Restore the directory (or directories) from backup.
- Perform a system reload from your full backup, using the KEEP, CREATE, OLDDATE, PARTDB, and SHOW=OFFLINE options in the RESTORE command.
- Reboot again.

No need for separate restores of specific files.
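As a rough sketch of that reload step, the options Gilles lists would be strung onto the RESTORE command something like this (the tape file equation and the @.@.@ fileset here are assumptions added for illustration, not part of his reply):

  :FILE T;DEV=TAPE
  :RESTORE *T; @.@.@; KEEP; CREATE; OLDDATE; PARTDB; SHOW=OFFLINE

SHOW=OFFLINE sends the file-by-file listing to a spooled device rather than the console, which keeps a long reload readable after the fact.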

Making backups while network services are running

Advice from James Hofmeister

The most common problem with performing backups in the past was that network configuration files were held open for READ/WRITE when the network was up. 3000 sites found they had no backup copy of the network configuration file NMCONFIG.pub.sys when it was time to install (reload) from backup tapes. I tested this on 7.0, building a CSLT and storing @.pub.sys, @.mpexl.sys, @.net.sys, and @.arpa.sys on the same tape, and verified that all of the network files, including the configuration files, were backed up.
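As a minimal sketch, a store of those filesets along the lines James describes might look like this at the CI (the tape file equation and the SHOW option are assumptions for illustration):

  :FILE T;DEV=TAPE
  :STORE @.PUB.SYS, @.MPEXL.SYS, @.NET.SYS, @.ARPA.SYS; *T; SHOW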

Another problem on older systems was that NETCP.net.sys turned up missing in action following an install (reload) — and after it was recovered and restored from another source, another system reboot was required to initiate NETCP. NETCP is now included on SLTs.

Will the network function normally while backups are in progress? The answer to this is Your Mileage Will Vary. The building of a CSLT and the STORE process consume significant CPU, memory and IO resources.

From a networking perspective, TCP/IP networks are not guaranteed to maintain network connections in the event of severe system performance degradation. An acceptable level of CPU and IO performance is required to support TCP's ability to acknowledge the packets it has received (if a packet is not acknowledged, it will be retransmitted as per the remote host's configuration).

Also, an acceptable level of system bus performance is required to support the network hardware's DMA to system memory -- if the bus is busy during a DMA attempt, the frame is dropped (a store from disk to tape or from disk to disk consumes significant system bus bandwidth).

07:38 PM in Hidden Value | Permalink | Comments (0)

April 11, 2018

Wayback Wed: HP group combines, survives

In the aftermath of the Interex user group bankruptcy, an HP enterprise user group survived. That group remains intact to this day. Its survival is due to an ability to combine forces with other groups, an effort that kicked off 10 years ago this week.

That was the week when Encompass, the user group that outlasted Interex, gave members a vote on merging with three other HP-related groups. At the time of the April vote, Encompass and these partners weren't even sure what the allied group would call itself. Endeavor was being floated as a possible new name.

The vote of the Encompass members approved the merger with the International Tandem User Group; the European HP Interex group, which was operated separately from the rest of Interex; and a Pacific Rim segment of the Encompass group. The European Interex reported that it had 35,000 members at the time of the merger.

Encompass became Connect, a name announced at HP's Discover conference later that same year. Connect still operates a user group with a large meeting (held at HP's annual event, for the in-person gatherings) as well as smaller Regional User Groups.

The group bills itself as Connect Worldwide, an independent Hewlett Packard Enterprise technology user community, and runs as a membership organization. Membership in any user group has evolved during the decade-plus since Interex expired. By now it's free to join the group that serves OpenVMS customers, companies that still employ HP's Unix systems and Integrity hardware, and sites using the HP NonStop servers (the former Tandem systems).

Those Tandem-NonStop users make up nearly all of the in-person meetings other than the HP Discover event. Discover is devoted to everything HP Enterprise sells and supports. One of the few links remaining to the 3000 at Connect is Steve Davidek, whose management and then migration off 3000s at the City of Sparks made him a good transition leader at Connect.

There are Technical Boot Camps for both NonStop and VMS customers that Connect helps to organize. A boot camp for HP-UX never became a reality. That's one of the choices a group of allied users must face: even some support for a resource like a boot camp (some members were eager) needs to be balanced against the majority membership's desires.

Some missions have survived from the Interex model that drove that group for more than 30 years. Advocacy still has a place in the Connect membership benefits, a project that's called Community Voice by now. The old days of an HP user group with a taste for confrontation ended once Interex refused to join HP's effort to consolidate user groups and things like advocacy. The Interex board voted to stay independent—without giving its members a formal vote like the open balloting Encompass conducted.

The benefits of Connect still lean heavily on networking, along with technical resources steeped in NonStop expertise. Advocacy flows through HP's Discover show, and there are 19 Special Interest Groups. SIGs like these were the hotbed of 3000 customer desires communicated to HP from Interex members.

From the Connect website, the list of membership benefits includes these resources.

  • 19 Connect Communities and SIGs (special interest groups)
  • Never miss a beat with Connect Now, a monthly eNewsletter
  • Problem solve and stay on top of your game with Connect's ITUG Library (a NonStop Download Library)
  • Have a say in HPE with the collective voice of Connect's advocacy programs
  • NonStop users receive a subscription and access to the archives of The Connection, a bi-monthly technical journal
  • All members receive a digital subscription to Connect Converge, our exclusive quarterly technical journal for HPE business technology users
  • Save hundreds of dollars per year with discounted registrations to premier events like HPE Discover, and Connect's signature tech-targeted Boot Camps
  • Save while you expand your knowledge base with 15% off HPE Education programs
  • Members and their dependents can apply for $2,500 Future Leaders in Technology scholarships

10:33 AM in History, Migration | Permalink | Comments (0)

April 09, 2018

Aspects to Ponder in Package Replacements

By Roy Brown

Each kind of migration has its advocates, and each has its pros and cons. Your constraints are going to be cost, time, and risk. Probably in that order. I can’t say much about the first two; that depends on your circumstances. Last week we talked about the differences between conversions and migrations, and their risks. Another option is moving to a package to execute a migration off MPE/iX. It might even be a familiar package — but on a less familiar platform.

Packages

If you have a package running on your HP 3000 which you are happy with, and the vendor provides that same package, or something very similar, on other platforms, then it’s likely just a case of choosing which platform to go with.

Your vendor-supported migration path should be pretty straightforward, and your hardest problem is going to be deciding what to do with the crust of subsystems and reporting programs that have built up around the package proper. If there are some you can’t do without, and the package on the new platform doesn’t provide those features anyway, this may be a good chance to get to grips with the tools and utilities on the new platform, and how things are done there.

But maybe you had a bespoke or home-grown application on the HP 3000, in an area now covered by one or more packages on other platforms, and it makes more sense to move onto a package now than to go bespoke again?

In that case, you have a three-way analysis to do: what does your existing system provide, what does the new package provide, and what are your users looking for?

I’ve heard the advice “don’t go for customization, go for plain vanilla” a lot. It certainly gives cost and risk reduction, though perhaps at the expense of business fit. I reckon that’s a shame; every company has something that is its USP – unique systems proposition – something in its IT that gives it its edge in its chosen business.

On the other hand, sometimes a company does things differently because it was easier, or “it was always done that way.” Those are things you shouldn’t lose sleep over giving up.

A couple of concrete examples: firstly, meaningful SKU numbers. Chances are your SKU numbers, or product codes, have a structure you understand. If so, your homegrown systems will likely have dozens of places where a program takes a different path based on what that SKU number is. A package is unlikely to support that; each property needs an explicit flag on the Part Master, and the package has to work off those.

This doesn’t mean that you can’t keep your meaningful SKUs, where everyone in the business knows what they mean. Just that the package won’t know from the SKU, and so you will need to set those flags for each one. And carefully too, or there will be some real confusion.

That’s a case where you go with what the package does. But in the system I’ve just migrated, we had a critical financial value, set against every order, in a complex and special way that was important to the business. The old system calculated it. The new system would accept a pre-calculated value as input, sure, but it wouldn’t do the calculation.

We could, I guess, have asked for it to be customized in. In practice, we built an Excel spreadsheet as a preprocessor to the package proper, and did the calculations there.

There’s still a little bit of me that says moving stuff off the HP 3000 onto a spreadsheet is going backwards. But then there’s a bit of me that says that of moving anything off the HP 3000.

Custom replacement

This is the Rolls-Royce (or should I say Cadillac?) option. But if your application is unique, so there’s no chance of a package solution, and if rewrite/code conversion doesn’t suit, possibly because of a pent-up backlog of business change requests, or the knowledge that your business area is changing radically, it can be the only sensible way to go.

And if so, you don’t need a few thoughts and observations about compromise; you need to know how to choose from the myriad of possibilities out there. I only wish I knew myself…

Roy Brown is CEO at Kelmscott Ltd., a developer and consultant with more than 35 years’ experience on MPE and the HP 3000.

06:05 PM in Migration | Permalink | Comments (0)

April 06, 2018

How to Measure Aspects of Migrations

Newswire Classic
By Roy Brown

So you are going to migrate. When migrating to a different system or platform, there’s usually something the vendor needs you to lose. But is it essential business functionality, or just an implementation quirk of your old system?

Which migration are you going to have? The luxury option of a custom replacement of your old system? To a package on a new platform, maybe a version of a package you had before, or one new to you? Perhaps the rewrite option, where a team of programmers, possibly offshore, re-implement your system in a whole new environment, while keeping the existing functionality. Or will it be a conversion, where your existing system is transferred to a new platform using automated tools?

Each has its advocates, and each has its pros and cons. Chances are, your constraints are going to be cost, time, and risk. Probably in that order. I can’t say much about the first two; that depends on your circumstances.

Code Conversion

But risk comes in two timescales: immediate risk – “Can we do this? Can we get onto the new platform?” – and the longer-term risk that you are maybe painting your company into a corner by accepting some compromises now that later will turn into shackles.

Those with very long memories may recall some of the early packages being offered for the HP 3000, the apps with KSAM file structures, not IMAGE ones. You just knew they had been ported from elsewhere, not written native on the HP 3000. And if you could find what you wanted on IMAGE, you were surely glad.

That’s the longer-term risk, then, for some conversions with low short-term risk; you’ll be on the new platform, certainly. But you may have something that plays like the modern-day equivalent of having KSAM, when the smart money is on IMAGE.

Look hard at where you are going to be after a tools-based conversion; will you be fully on the new platform with all-independent code, or will you be running in an environment provided by your conversion specialists? If the latter – and these can indeed lead to faster, cheaper, lower-risk conversions – treat your supplier as a package implementer that you are in with for the long haul, and judge them accordingly.

Likewise, what about ongoing, internal support? One of the reasons to move to new platforms and new paradigms is to tap into the new generation of people who know their way around them. But if it’s hard to see how you are going to get ongoing support for your HP 3000 apps, how much harder will it be to find people who can support a hybrid old/new system you might wind up with?

Finally, and on the same note, how maintainable is the migrated system going to be? You need to ensure that the generated code is going to be something that a human can easily work with – and likewise the environment the code runs in – or you are going to be effectively frozen in the converted state. Fine if it’s a subsidiary app, perhaps, and this is a stopgap until it dies or you replace it more fundamentally. But not if it’s a key business app that needs to grow with your business going forward.

All this is assuming the conversion is feasible. Hopefully, in the average application suite, you are not going to have done too many unusual things that the automated conversion can’t cope with; maybe some system-y job and user status calls that are done differently on the new platform, and other corner cases, but nothing fundamental.

But sometimes the gotchas can be quite widespread. We once had an application that, by inadvertently exploiting the specific way VPlus works, let you put in a partial key and a question mark, hit f2, and go to a lookup screen where you could review matching keys and select the exact one you wanted.

This was used across quite a range of screens, and relied on VPlus not trying to validate the partial key when you went that route. But move to a screen handler that works rather differently, and you may lose that option. Nothing insurmountable; we just needed to change things so the partial key was entered in the lookup itself, with a code to say what sort of key it was. But this small inconvenience for the user was surely also a bypass of nicer techniques we could have used if we’d had the new handler from scratch.

Rewrites

If a conversion isn’t the way you want to go, then likely you’ll be offered a rewrite. This can be more flexible than a code conversion (though you can bet the conversion team will be using some internal tools for the straightforward bits). And you should wind up with an app that is fully and independently implemented on the new platform, dispensing with any helper environments. Likely, though, it will not quite be what you’d have got starting from a zero base on the new platform – it will retain a few traces of its HP 3000 origins. But hey, is that such a bad thing?

Here again, you need to make sure that you can work with the resulting code going forward, to keep on track with your business needs. But the main issue is what shape the application you are migrating is in.

One way of looking at a rewrite is that it’s a full custom conversion—but the results of the business analysis, instead of being expressed as use cases in UML, or whatever, are being presented in “HP 3000 Modeling Language.”

So if you are really happy with how your HP 3000 app fits the business – and if your users are — this can be a better way to go than embarking on a needless rediscovery process. If it ain’t broke – at the business level – then fix it only at the code level.

Not to say you can’t save some money by looking hard at the 3000 app, and maybe trimming out some dead wood – old functionality that is no longer used, and that thus does not need conversion. But do try – at least in the first go-round – not to request new features. Else your rewrite starts creeping towards being a new custom implementation, thus losing you the best of both approaches.

But even with a rewrite, perhaps there will be a few things the rewrite team will balk at, or instances where you will find that the cost of keeping them in is likely to be disproportionately high. Maybe they will come up with an alternative approach, or maybe you will.

But if not, consider what you are losing. A nice-to-have, but which you use once in a blue moon? Or something integral to your business? On a rewrite, it’s less likely to be the latter than when considering packages, which I’ll come to later. And it’s likely to be something system-y again, albeit at the user level, where they interact with job streams, or print queues, or whatever.

Maybe something in the user interface? Though I’m always pleased to see how closely forms on the web follow the VPlus paradigm of fill in all the data, press Enter, get it validated, go forward if okay.

But you do need to be as flexible as possible about doing things a new way on a new platform. People often ask “How do I do x?” where x is an attempted solution they can’t get working. Invariably, the response is “Yes, but what’s the actual problem you are trying to solve?” And the answer turns out to be quite different from x.

Adopt that precept here; go right back to the problem you are trying to solve, and not how to get an envisaged solution working. You’ll get a much better answer.

Roy Brown is CEO at Kelmscott Ltd., and a developer and consultant with more than 35 years’ experience on MPE and the HP 3000.

06:08 PM in Migration | Permalink | Comments (0)

April 04, 2018

Emulation leader hires ex-HP legacy expert

Stromasys, makers of the HP 3000 virtualization and emulation product Charon-HPA/3000, announced the company has hired Susan Skonetski as its Director of Customer Development. Skonetski comes to the Charon product team from VMS Software, the firm that's been taking over responsibility for that Digital OS from HP. She's also a former executive at HP, where she was the go-to person for the VMS customer community.

Birket Foster of MB Foster has compared Skonetski to a George Stachnik or perhaps a Jeff Vance: a company exec who relies on an intimate knowledge of a customer base which uses legacy software and hardware. At HP she was manager of engineering programs for the OpenVMS software engineering group until 2009. She logged 25 years of advocacy service to VMS, working first at Digital, then Compaq, and finally HP. She became a leader independent of HP, and still strong in the VMS community, after HP laid her off in 2009. That was the year HP was also halting its HP 3000 lab development. She became VP at third-party support vendor Nemonix.

In 2010 Skonetski revived a VMS boot camp that had languished during the year she left HP. The event was held in Nashua, NH because until 2008 an HP facility in that city was one of the places where VMS matured. At that boot camp attendees also heard from a 3000 marketing linchpin, Coleen Mueller, addressing technical issues and innovations along with OpenVMS partner companies. We chronicled the event in a story about how HP's unique enterprises stay alive.

Skonetski said that understanding a legacy community flows from years of organizing events and strategies aimed at a unique customer base.

Through my experience, I’ve seen up close the critical role that these legacy systems play in daily business cycles. Helping to ensure the availability of these applications is imperative, with service and support options decreasing for SPARC, Alpha, VAX, and the HP 3000. Stromasys’ innovations, along with their strong team of software designers, solutions executives, and account management professionals, made joining the organization a natural fit. I’m proud to help bring to market both cutting-edge solutions and the user communities of these systems.

The Stromasys release on the hiring explains that Skonetski will be working to further integrate the legacy-preserving products into customer communities such as the HP 3000's.

She will be focused on the market success of the product portfolio, developing and executing a communication strategy to engage with the Stromasys customers and potential clients. In addition, she will utilize her deep knowledge of the market to drive a well-rounded ecosystem for users of legacy hardware and software. Through the knowledge gained, Skonetski will be aiding the organization's product strategy into a new phase of integration.

The company added that relying on her extensive software training, Skonetski "provides a passion for applying technology automation to enhance business processes, reduce costs, and provide tools to allow companies to capitalize on investments they use every day."

Charon-HPA/3000 software creates fully compatible virtualized HP 3000 systems, running on industry-standard Intel x64 (Core i7 and Xeon) servers. The technology takes advantage of technological advances made since HP 3000 systems were built, translating PA-RISC instructions on the fly into optimized sequences of Intel CPU instructions. Charon servers for the 3000 run unmodified copies of MPE/iX 7.5 along with the existing applications and related software hosted on HP's 3000 hardware.

 

09:32 AM in Homesteading, Newsmakers | Permalink | Comments (2)

April 02, 2018

Options for HP 3000 Transformation

By Bruce McRitchie
VerraDyne

Time — it marches on. We can measure it, bend it, and try to avoid it. But in the end the clock keeps ticking. This is true for owners of the venerable HP 3000. In its day one of the top minicomputers ever manufactured, it went head-on against IBM (mainframes, AS/400, and System 36), Wang, and Digital, and it won many of those battles. And many HP 3000s are still running and doing the job they were designed to do. They have been upgraded, repaired, and tinkered with to keep them viable. But when is it time for them to retire?

There are options. Many vendors have been working diligently to provide a transformation path from the HP 3000 to a modern platform and language. By making such a move, an organization reduces these risks:

  • Hardware failure.
  • Personnel failure - aging programmers.
  • Software failure.

So why aren't the remaining HP 3000 owners flocking to newer technology? Is it because they know the technology so well — and it works? Have they been through large, ugly development projects and never want to go through the pain again?

Whatever the reasoning, the arguments for staying with HP 3000 must be wearing thin. There are options, and to a greater or lesser extent, they all do the transformation job. Today's technology will allow companies to move their whole HP 3000 environment to a new modern environment, with or without changing language and operating system elements. Of course, with the different paths there are trade-offs that must be considered. This article briefly explores some of the options available to transform your HP 3000.

Emulation

At first glance this can appear to be the cheapest and easiest solution. A company picks the supplier of the emulation software, installs it, then puts their code on top and voila—their system is running as it always did. 

But is it? You may now have an emulator interpreting your HP 3000's instructions on top of the new operating system you'll use to run it. Is that interpretation always correct?

And what about the MPE commands themselves? Are they being interpreted correctly? Are you getting the results you expect? Although there are no new changes coming for MPE, you may have to learn Linux or another new OS for an emulator.

And what about the cost of emulation? Of course, there is the initial cost. But what of the ongoing costs? You now have to pay for the emulator and the native OS that drives it.

Here are some of the deficiencies we have noted over the years:

• Being an emulation of the existing system, it still requires continued use of old skill sets. Add to that the new skill set that must be learned for the emulation layer. Some emulation designs will also require new skill sets for the target open-systems platform and COBOL. You can wind up with a highly complex skill-set requirement for the IT staff, and potentially a new OS to contend with.

• Some emulators only support certain parts of the current system; they do not support all the components that are currently used.

• If the application has to sit on top of the emulation layer, then access to the native operating system is restricted.

• Interoperability with other applications or services on the platform can sometimes be severely restricted.

• Emulators mimic the data structures of the legacy system. This is great for running the current system as is, but does nothing to make the data accessible outside to other systems, inquiry or reporting tools.

• Running emulation renders the customer completely dependent upon the emulation vendor for ongoing support and upgrades, which in itself is a sizable risk factor.

In other words, emulation can bring its own new set of problems.

Redevelopment

The specter of re-development haunts most people (especially those who have been through large development projects and cannot forget the late nights, long days, and missed vacations). Defining the business requirements, architecting the new solution, finding business analysts, programmers and project managers. Persevering through a long and drawn-out process with hundreds of meetings, thousands of decisions — and finally — testing!

Hours and hours of trying to figure out if it can work well enough for the business needs. And then the bill comes. In some cases, millions of dollars and years of effort — and, believe it or not, after all the years of improving, many development projects still fail.

These considerations are even more important today, when business needs are quickly changing and the underlying technology is changing to keep up. The whole platform on which the system is developed will change during the project.

Companies have spent years injecting their business intelligence into their software. Re-developing is a sure way to erode that valuable asset.

Migration

Migration offers a lower-risk, lower-cost solution than re-development — so that you can end up with the functionality you need, in a language and operating system that is modern and supported by many resources.

Maybe you are running COBOL on your HP 3000 under MPE/iX but would like to be running VB on Windows. VerraDyne can do such a migration. And we can do it at a reasonable price in a reasonable time frame.

Simply put, migration is the process of moving the current applications onto a new open system platform so that they run in 'true native' mode on that new platform. However, after migration, the application and core business logic are still the mature proven systems that they were on the old legacy platform. The old proprietary legacy components are removed and are replaced by the modern components of the new open system platform.

These are the main points when considering automated conversion with no use of proprietary middleware:

• Migration removes the old legacy skill sets but it does not affect the core business logic, so the customer's systems and existing programming staff can quickly retake ownership of the converted code and continue maintaining it with very little retraining. The converted system can be deployed on the new open system with the same screens and processes that existed on the legacy platform, so user department retraining is virtually eliminated.

• Since the converted applications operate on the new open system exactly as they did on the HP 3000, determining successful migration is black and white. The converted system either runs the same or it doesn't — if it doesn't, it is fixed until it does. There is no gray area, there is no 'maybe' — obtaining successful project completion is a clear and concise process.

• By minimizing manual intervention, automated conversion provides an extremely low risk, short timeframe solution. The majority of the project cycle is devoted to the most important task of testing.

• The converted system runs as a 'true native' application on the targeted open system, so there are no roadblocks to gaining the full benefits of the new open systems platform: GUI screens, Web functionality, databases, and more.

• Conversion provides all the benefits a company seeks without any trade-offs. It absolutely minimizes intrusion into the company's operations. The entire project is performed at very low risk.

• After conversion, the customer has full ownership of its code. There is no dependence on the migration vendor after conversion.

VerraDyne has spent 25 years developing converters that move language and OS code from one platform to another. We have done it with hundreds of systems. At the end of a project you have a system that looks, feels and behaves like the old system, but is able to take advantage of new technologies. This is our claim to fame and our intellectual property, and we are very good at it. 

04:16 PM in Homesteading, Migration | Permalink | Comments (0)