June 30, 2015
Run-up to HP split-up sees enterprise splits
Later this week, Hewlett-Packard will announce the financial roadmap for the business that will become HP Enterprise, the company that will hold the futures of the vendor's HP 3000 replacement systems. More than the accounting is in flux, though. Today the vendor announced that the executive VP of its Enterprise Group will be gone before the split-up takes place.
Bill Veghte will split the HP scene, leaving "later this summer to pursue a new opportunity." Big vendors like HP rarely say where an exec like Veghte is heading next. It won't be in the same direction as the business that makes Integrity servers, the HP-UX operating environment, or the competitive mass storage product lines that some migrators have invested in.
He's been leading the efforts to separate the consumer printer and PC side of HP from its Enterprise sibling, a sort of cleaving of what's become a Siamese twin of a business at the vendor. The project has been underway since last fall, employing Veghte after his COO work. This is not the kind of announcement a company wants to release before a massive split is completed. HP's original estimate for revenues of HP Enterprise was $58.4 billion, larger than the PC-printer side.
There have been exits from a seat this high before at HP. Dave Donatelli left the company, and has now landed at arch-rival Oracle. On a less customer-facing, more tactical front, HP's got to clone 2,600 internal IT systems, extracting and separating the data inside. It's the opposite of a merger, with no safety net. The Wall Street Journal says the internal IT split could stall the split-up of the company if the project doesn't go well.

It's not all glum forecasting among the enterprise industry analysts, though. One report from banking firm Raymond James says the divided HP could purchase its longtime rival EMC. "We continue to believe that an acquisition of EMC by HP is more than a distinct possibility, as HP strives to increase shareholder value following its tax-free spinoff in November, while EMC deals with a high-profile shareholder activist," the report says.
Donatelli was an EMC kingpin before he arrived at HP, a move that EMC sued over during Donatelli's earliest HP months. The EMC scheme hasn't had traction anyplace but the Raymond James report. But HP and EMC have tried to merge before, with almost a year of discussions that petered out last fall, the same timeframe in which HP announced its split-up plans.
HP's stock dipped below $30 a share today, and it's down about 12 percent over the past month.
June 29, 2015
Retiring ERP Systems, or Not-Free Parking
About a month ago, a migration company offered a webinar on leaving behind one use of an HP 3000. But the focus at Merino Services was not on MPE, or on HP's 3000. The company wanted to help with an exit off MANMAN: specifically, a march from "MANMAN/ERP LN to Infor 10X."
While many manufacturing companies will recognize MANMAN ERP, it's the LN tag that's a little confusing. Terry Floyd, whose Support Group business has been assisting MANMAN users for more than 20 years, tried to pin it down. "ERP LN is Baan, I think – it’s very difficult to tell anymore. It’s not MANMAN, anyway." The target is Infor's 10X, more of a framework for the migration destinies of Infor's parked software. Such parking keeps up support, but nothing else changes.
Merino, which hasn't been on the 3000 community's radar up to now, might not be blamed for conflating a couple of ERP names, or just running them together in a subject line. The state of ERP applications is changing and declining so fast that an ERP Graveyard graphic lists the notables and the little-known next to their current undertakers. Infor, the curator of both Baan and MANMAN, has made a business of this in-active retirement for more than a decade. Younger, more adept alternatives to MANMAN have been offered for several decades.
About Infor, Floyd added, "They have bought a lot of near-bankrupt companies. As you know, a lot of people have been trying to migrate companies off of MANMAN for over 20 years." It's a testament to the sticky integration of ERP and the customization capability of MANMAN that this application leads the graveyard in the number of times it's been acquired.

Cloud-based ERP is on the rise today, and a half-dozen newer ERP suites like NetSuite, Workday, Plex, and Kenandy are taking the place of classic MPE-based manufacturing IT. A few years back it looked like Microsoft Dynamics was the stable bet to take on some of this mission, but now people in the ERP world are wondering if Dynamics will be acquired, too. Ned Lilly, CEO of the open source ERP software company xTuple, writes about ERP gravesites at his ERP Graveyard website. He pointed out a Diginomica column by Phil Wainewright.
This alternative strategy would allow Microsoft to focus on its core platform and product engineering strategy without the conflict of having a sales team intent on winning business away from its growing army of third-party partner vendors. Some or all of the ERP products would doubtless be part of that transaction, while Microsoft would likely prefer to retain the CRM product because of the tight integration that’s possible to its Office properties.
But the ultimate decision may depend on who the buyer will be.... I think it’s more likely that Microsoft would look to sell off some or all of its legacy ERP portfolio to a ‘friendly’ competitor — one that’s committed to the Microsoft stack.
Open source ERP — the Support Group provided guidance for OpenBravo — looks like it may be a migration choice that wouldn't land in a graveyard. Commercial open source demands that some company productize the open software, in much the same way that Novell or Red Hat turned Linux into an enterprise-worthy solution.
As for any Infor strategy that a MANMAN site would want to study, the busy world of its acquisitions is covered in an Infor Discovery Guide that runs more than 100 pages and covers 25 solutions, ranging from CloudSuite to a legacy stalwart like Lawson. Not mentioned anywhere in those pages is MANMAN itself, apparently because Infor considers there's not much to discover.
Software like MANMAN still runs the business of some manufacturers. But these customers pay support as if it's a parking fee, since their software is going nowhere. Well, not nowhere: the customers still have the capability to customize their applications themselves, carrying the companies into a future where parking places are discovered, and places to steer a migration are regularly mapped out, as shown below.
June 26, 2015
What Has Made MPE/iX 8.0 A No-Go
The life of homesteading 3000 managers is not as busy as those who are managing migrated or just-moved business environments. But one topic the homesteaders can busy themselves with is the If-Then structure of making an 8.0 version of their operating system more than a fond wish. Our reader and 3000 manager Tim O'Neill visited this what-if-then module, a proposition sparked by an April Fool's story we wrote this year. "I actually believed that article, until I recognized the spoofed name of Jeanette Nutsford," he said. We were having some Onion-like sport with the concept of an MPE/iX 8.0.
I had the thought that maybe somebody somewhere will apply all the MPE patches written since 7.5, add a couple more enhancements to subsystems (like maybe MPE users could see and use a Windows-managed printer), test it in-house, then test it on a few customer systems, then release it and announce MPE/iX 8.0. The database options could begin with TurboImage and Eloquence.
That's pretty much the start of a workflow for an 8.0. If you were to make a list of the things that have stood in the way of such a watershed moment for MPE, it might look like an if-then tree. A tree that might lead to a public MPE, as free as Linux or HP's Grommet, the company's user-experience development application. Grommet will become open source, licensed for open use in creating apps' user experience. Grommet was once just as HP-proprietary as MPE.
The tree's not impossible to climb. Some of the tallest branches would sway in the wind of software law. The rights regarding intellectual property have blocked this climb to an open-sourced MPE/iX. That's law that was tested outside of the HP and 3000 community. It came close to swaying in favor of customers who believe they're buying software, instead of just renting it.

No software creator would call the act of licensing its product a rental. But ownership rights to code like MPE or the CAD program AutoCAD have always reverted to their creators. These programs were developed inside software labs controlled by HP and Autodesk. Such creators' ownership was not in doubt until 2007, when the right to restrict any software's climb to freedom was tested.
Autodesk was sued that year by Timothy Vernor, who said he was entitled to sell used copies of AutoCAD he'd bought at an office liquidation sale from an Autodesk customer. The suit wasn't foolhardy. In 2008 a federal district judge in Washington state denied Autodesk's motion to dismiss. The next year, both sides filed motions for summary judgment, to settle whether a structure called the First-Sale Doctrine could apply to previously licensed software. And then that district court ruled in Vernor's favor. Transfer of the software to the purchaser materially resembled a sale, not just a licensing: the software had a one-time price and a right to perpetual possession. You could resell your software, and so it could have a value in the market beyond what its creator had received.
What's all this got to do with 8.0? The concept, and defending the ruling, represents the most critical if-then branch in the tree of used-software logic. By now, every copy of MPE/iX is used software. To make an 8.0 with any value to the companies and consultants who'd labor through new patch integration, two levels of testing, and ongoing support, it'd need to be worth some revenue to those who'd do the work. You'd need the law to make reselling a revamped MPE/iX as the 8.0 version legal.
Linux and the rest of the open source world enjoy this kind of ownership law. It helped that Linux never belonged to a company as a trade-secret product.
The Washington state court decided that Autodesk's transfer of its software let a customer resell it under the first-sale doctrine. So Autodesk could not pursue an action for copyright infringement against Vernor, who sought to resell used versions of its software on eBay. But as any software creator would, Autodesk appealed that first-sale decision to the US Court of Appeals for the Ninth Circuit, where the lower court's ruling was reversed. Vernor was denied the right to resell Autodesk software on eBay; the licensing restrictions were non-transferable, and in 2011 the US Supreme Court let the Ninth Circuit ruling stand.
This is the quest for the holy grail of MPE futures that OpenMPE pursued for more than eight years. At some point, the group believed, control of MPE/iX could be released to the companies using the software. It should be, they argued, since HP was halting its business in the HP 3000 and MPE. Vernor didn't even offer to modify the CAD software, to improve it like O'Neill suggested. The appearance of such an MPE 8.0 would deliver new functionality, fix bugs — and most importantly, lavish some interest on the OS.
HP used to talk about an 8.0, in passing at user conferences in product futures talks. There was nothing as specific as "MPE users will be able to see and use a Windows-managed printer" during these talks. Applying existing patches and recent ones, then releasing it as an 8.0, is a stretch. The x.0 releases of MPE/iX each brought on a major set of advances, not just printer control and patch integration. 7.0 delivered support for a new hardware bus, PCI, for example.
All those patches written since 7.5 PowerPatch 5? They'd be just the beta-test patches that never went into customer testing for General Release. HP holds the intellectual property rights to those patches. The company would have to cut that work loose into the customer base. (If HP cedes ownership rights, then integration begins.) If HP permits this 8.0 MPE/iX to be tested at customer sites, then something stable enough to be adopted could emerge.
If no such ownership change occurred, then even an 8.0 would still belong to HP — a company with no more interest in selling or supporting it.
The rights to MPE/iX have been stretched recently. While a two-user version of the Charon HPA emulator was available for download, the Stromasys software was distributed with MPE/iX as part of that freeware version.
Since there's no open-sourced MPE on offer, the testing and integration, then pairing the revived software with its database, as well as Eloquence (never built for MPE, but a candidate for integration) — it's all simply a fine ideal. Nobody was able to step forward and push this concept into a test of law. Wirt Atmar of AICS Research checked with lawyers about reclaiming MPE off of HP's discard pile. It was a card the community could not play. 8.0 is a dream, and while there's no reason yet to cast it away forever, plenty of 3000 owners have experienced their wake-up call.
June 25, 2015
Throwback: The Days of the $5,000 Terminal
By Dave Wiseman
Most of you will know me as the idiot who was dragging about the alligator at the Orlando 1988 Interex conference, or maybe as the guy behind Millware. But actually I am a long-time HP 3000 user – one of the first three in the south of England.
I was just 27 when I started with an HP 3000. I had been in IT since 1967. One day I was approached by Commercial Union Assurance (a Big Blue shop) to set up an internal Time Sharing system. My brief was to set up "a better service than our users have today," replacing a Geisco MK III and an IBM Call/360. In those days, the opportunity to set up a "green fields site" from scratch was irresistible to a young, ambitious IT professional.
I investigated 30 different computers on around 80 criteria and the HP 3000 scored best. In fact, IBM offered the System/38 or the Series/1, neither of which met our needs well. IBM scored better in one category only – they had better manuals. I called the HP salesman and asked him in. What HP never knew is that if the project went well, there was a possibility that they would get on the shortlist for our branch scheme – a machine in every UK branch office. That would be 45 machines, at a time when the entire UK installed base of HP 3000s was around 10.
IBM tried everything, including the new E Series which had not been publicly announced at the time. It was to be announced as the 4331 and you only — yes only — needed 3 or 4 systems programmers. I asked about delivery time compared to HP's 12-14 weeks for the 3000. I was told that IBM would put me in a lottery, and if our name came up, then we would get a machine.
So HP's salesman came in. I said I wanted to buy an HP 3000, to which he replied, "Well, I'm not sure about that, as we've never done your application before. Why don't you buy a terminal and an acoustic coupler first, and make sure that your application works."
"Okay" I said, "where do I buy a coupler from?"
"No idea," he replied, "but the 2645A terminal is $5,000."
So I bought that 2645A (from our monthly hardware budget of around $1.5 million) and started dialing into a 3000 at the Winnersh office. On occasion, when I needed answers, I would drive over there and work on their machines. One durability test was to unscrew the feet on the disc drive and push it until the drive bounced on its HP-IB cable. On more than one occasion the cable came out, and you could just plug it back in and carry on working. If you tried that with an IBM you could expect two days of work to get it restarted.
I went to the first European Users Group meeting at the London School of Economics in 1978 and listened intently to all of the presentations, especially when HP management took the stage. They got a hammering because the performance of KSAM was not as good as several people had expected. After having dealt with IBM, I came back with the view that if that is the worst thing that they had to complain about, I was having a piece of this action. At the back of the hall there were two piles of duplicated paper – one yellow and one white. These were advertising Martin Gorfinkel's products LARC and Scribe, which amounted to the first vendor show.
After those tests and the investigation, we bought a Series III with 2MB of memory, two 120MB 7925 drives, a 7970E tape drive, and a 2635A console. We purchased the 3000 during a unique three-month window when SPL, IMAGE and KSAM were included. Additional software included BASIC, a BASIC compiler, and APL. The machine arrived on time and was located in the network control area of the suburban London datacentre — the HP 3000 was not important enough to despoil the gleaming rows of Big Blue hardware.
We had users in six different buildings around the country. We had an eclectic mix of 2645A, 2641A, 2647A and later 2640 terminals. As we grew, we added 2621, 2622, 2623, 2624 and 2626 terminals. We also connected Radio Shack TRS-80 machines and IBM XT PCs. What we wouldn't have given for PC2622 emulation then. (That's WRQ Reflection, for you newbies.) We needed a number of printers to print out Life Assurance Quotations, and HP only sold a 30-character-per-second daisy wheel, which was three times the price of a third-party printer. HP's view was very simple – they would not provide hardware support for the CPU if we bought third-party printers. I called their bluff and bought the printers elsewhere.
At first, I connected all of the terminals at 2400 baud as the original systems (IBM Call/360 and Geisco) only had 1200 baud dial-up, so 2400 was very fast for our users. As usage grew, I could turn the speed up to 9600 to give the users an apparent performance boost at no cost.
Performance was always an issue. The IBM guys couldn't understand how we could run so many users on such a small box, but we were always looking for improved performance, as we had the largest HP 3000 around already. There were no tools available in those days, so we used tricks like putting a saucer of milk on each disc to see which one curdled first from the heat. (Not really, but we did spend a long time just standing there touching the drives lightly to see.) We did a full system unload and reload every three months, and unloaded and reloaded most databases at the same time.
I was laid off in a downsizing exercise in 1983 and went into software and system sales. The company intended that the HP 3000 would be replaced by the IBM. But at least five years later, they were still using the MPE machine.
June 24, 2015
OpenSSL: Still working, but falling behind
This month the OpenSSL project released a new version of the software, updated to protect sites from attacks like Heartbleed. The release coincides with some interest from the 3000 community about porting this 1.0.2 version to MPE/iX. These cryptographic protocols provide security for communications over networks.
Heartbleed never had an impact on the 3000, in part because OpenSSL was so rarely used there. Developer Gavin Scott said that last year's Heartbleed hack "does point out the risks of using a system like MPE/iX, whose software is mostly frozen in time and not receiving security fixes, as a front-line Internet (or even internal) server. Much better to front-end your 3000 information with a more current tier of web servers. That's actually what most people do anyway I think."
But native 3000 support for such a common networking tool remains on some wish lists. 3000s can use SSL to encrypt network connections above the Transport Layer, giving applications secure end-to-end transit. It's an open source standard tool, but deploying it on an HP 3000 can be less than transparent.
Consider the following question from Adrian Hudson in the UK.
Does anyone know anything about putting OpenSSL on an HP 3000? I've seen various websites referring to people who have successfully ported the software, but with the HP 3000s being used less and less, I'm finding lots of broken links and missing pages. My ultimate intention is to try and get Secure FTP (SFTP) running from Posix on the HP 3000.
HP placed the OpenSSL pieces in its WebWise MPE/iX software, and that software is part of the 7.5 Fundamental Operating System. Cathlene McRae, while still working at HP in 3000 support, confirmed that "WebWise is the product you are looking for. This has OpenSSL." She's shared a PowerPoint document of 85 slides written in 2002, one of the last years that WebWise (and its OpenSSL) was updated for the HP 3000. (You can download these slides as a PDF file.)

Keven Miller of 3K Ranger has detailed his notes from installing OpenSSL on a 3000.
"I'd be happy to talk with whomever has interest," he said. "I'd like to do the 'port' again with notes, so others can reproduce it, and place it on my website."
I'm looking on my HP 918 (MPE/iX 6.0 PowerPatch 2)
OpenSSL 0.9.6a 5 Apr 2001
I believe SFTP did build and run. That would be from OpenSSH. As I recall, the process is:
1. install zlib
2. install openssl
3. install openssh
usage: sftp [-vC1] [-b batchfile] [-o ssh_option] [-s subsystem | sftp_server]
[-B buffer_size] [-F ssh_config] [-P sftp_server path]
[-R num_requests] [-S program]
Connecting to hpux-1...
Couldn't connect to PRNGD socket "/tmp/egd-pool": Can't assign requested address
Entropy collection failed
ssh-rand-helper child produced insufficient data
As I recall, I need to stream a job for this EGDPOOL. I hope to get back to this and other porting things. But work gets in the way.
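Miller's three-step order can be sketched as a short shell recipe. The package names, versions, and install prefix below are assumptions for illustration; an actual MPE/iX Posix port would need platform-specific configure flags and patches on top of this.

```shell
# Illustrative build order for Miller's list: zlib first, then OpenSSL,
# then OpenSSH, since each package depends on the one before it.
# The build steps run only if the (assumed) source tarballs are present.
PREFIX=${PREFIX:-/usr/local}
order=""
for pkg in zlib-1.2.8 openssl-1.0.2 openssh-6.9p1; do
    order="$order $pkg"
    [ -f "$pkg.tar.gz" ] || continue    # skip the build when sources are absent
    tar xzf "$pkg.tar.gz"
    case $pkg in
        openssl*) cfg=./config ;;       # OpenSSL names its script ./config
        *)        cfg=./configure ;;
    esac
    ( cd "$pkg" && $cfg --prefix="$PREFIX" && make && make install )
done
echo "planned order:$order"
```

The loop is only a sketch of the dependency order; on a 3000 each step historically needed hand-editing of makefiles as well.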
June 23, 2015
Migration platform gets Microsoft's retooling
Moving HP 3000 systems to Windows Server can include the use of the .NET framework, and Microsoft is retooling that framework to stay coupled with Visual Studio as it rolls out VS 2015. The just-previewed development environment, a popular choice for migrating HP 3000 sites headed to Windows, means a new .NET release: version 4.6 of the .NET Framework comes as part of the new Visual Studio 2015.
Microsoft is making its chief enterprise environment more feature-rich, but the retooling comes at a price. They all do, these revisions. The newest Visual Studio is powered by the new Roslyn compiler, and there are new APIs. Existing .NET apps aren't going to know much about new API capabilities, and so like everything in IT, the .NET frameworks from 4.5.2 backward will begin to age. But ASP.NET gets an upgrade and the Entity Framework data model increases its support for Azure data services and for non-relational databases. Alas, no IMAGE/SQL support in there, but that's what middleware from providers like MB Foster will continue to provide.
Users like the San Bernardino County Schools have been moving apps to .NET from MPE/iX, a project that was first scheduled to be complete at the schools by 2015. Four years ago, when the school system first started talking about using .NET, 2015 might have been outside of Microsoft's plans to keep .NET a strategic IT choice. VS 2015, as well as the newest framework, puts that worry to rest.
For more than four years, the COBOL code at the San Bernardino schools has been migrated into Microsoft's C#, and the dev environment has been Visual Studio. .NET has been a Microsoft success, despite some bumps over the last 10 years.
As COBOL platforms go, Micro Focus has released Visual COBOL R3 to bring COBOL to a range of deployment platforms including .NET, the Java Virtual Machine and the Microsoft Windows Azure cloud platform.
Dave Evans, who's going to migrate out of the district's IT shop before MPE does completely, said that initially the migration called for a "clean sheet" approach, rethinking and designing the apps from scratch. "As the amount of time left to get this done is decreasing," he said four years ago this week, "we're starting to switch to making a pretty screen for the user from the Windows world. Pretty much, the back end of this stuff we'll take as written on the HP 3000, and rewrite it over to .NET."
June 22, 2015
Fixing Date Problems From The Future
HP 3000 managers have traveled long roads toward the future of their servers, but sometimes the server travels even farther. Into the future, it seems, to apply modification dates to files that couldn't possibly be modified months or years from now.
This can cause problems with system maintenance. Craig Lalley experienced some last week. After running the NMVALCK command, he discovered "I have thousands of files with future dates." He was pretty sure there's a way to adjust a date like FRI, SEP 10, 2027, 1:53 AM by using MPEX. (A good bet, since the VEsoft product manages the 3000's files better than MPE/iX itself.) But what about other repair options?
There are two, one in the community's freeware resources, and one in its Posix namespace. The freeware comes from Allegro Consultants. FIXFDATE (just do a "find" on the web page to locate the utility's entry) "will sweep through your files and change any creation, modification, access, allocation, or statechange date that is a 'future' date to be today."
Another resource comes from within the 3000's Fundamental Operating System. Touch, a common Posix utility, exists in the HP 3000's implementation.

Touch is chronicled on the Open Group Base Specifications website. While Posix is not as well-loved in the 3000 community as CI commands or freeware, its touch is bristling with options. The basics:
Touch shall change the last data modification timestamps, the last data access timestamps, or both.
The time used can be specified by the -t time option-argument, the corresponding time fields of the file referenced by the -r ref_file option-argument, or the -d date_time option-argument, as specified in the following sections. If none of these are specified, touch shall use the current time.
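A minimal sketch of that repair, run here with GNU flavors of find, touch, and stat; the MPE/iX Posix toolset is older, so the flags there may differ, and the demo directory is invented for illustration.

```shell
# Stamp one file in the future, then sweep anything future-dated back
# to "now"; this is the same repair FIXFDATE automates. GNU find's
# -newermt test and stat's -c flag are assumptions about the toolset.
demo=$(mktemp -d)
touch "$demo/ok.dat"                      # a normally dated file
touch -t 203001010000 "$demo/future.dat"  # stamped January 1, 2030
find "$demo" -type f -newermt now -exec touch {} +
```

After the sweep, `future.dat` carries the current timestamp while `ok.dat` is untouched in any meaningful way.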
Changing the dates of a full system's worth of files might someday be the mission for any 3000 owner who's trying to carry MPE beyond January 1, 2028. The world of 3000 Future Time hasn't been explored much, mostly because the community seems confident some solution will be available in 2027. It's still 12 years away, after all.
Allegro's Steve Cooper knows date representation issues have been addressed by the 3000 development community before -- in an era when many companies were still engineering for the system. He can sound sanguine about the issue because his partner Stan Sieler engineered date repairs for the 3000 during Y2K work at the end of the 1990s.
Allegro sold a Y2K utility as part of its 3000 development toolset. "If anyone cares by then, one will need to do remediation similar to what was done for Y2K," Cooper has said. "Each program will need to be inspected for vulnerabilities, then fixed to use an alternate method of date storage and manipulation."
And, yes, as you suspect, this could arise before then. If, for instance, you are manipulating contract expiration dates in your COBOL program, and are using a 2027-sensitive format, then you will not be able to correctly handle any date past the 2027 cut-off.
If you don't mind, though, I'm not going to lose any sleep over this issue for another several years. Remind me again toward the end of the decade, and I'll ask Stan to look into it. If I ask him now, I will lose a few months of productivity out of him, while he solves next decade's problems.
The issue is not limited to the HP 3000, even though the 2027 date is unique to MPE/iX. Unix has got the same kind of deadline approaching 11 years later. Cooper pointed to a Wikipedia article that explains the "Year 2038 problem" as an analogy to the 3000's.
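The 2027 cut-off falls out of the 16-bit date format used by MPE's CALENDAR intrinsic, which packs 7 bits of (year minus 1900) next to 9 bits of day-of-year. A quick shell sketch of the arithmetic (the packing only, not the intrinsic itself):

```shell
# Pack and unpack a date the way the 16-bit CALENDAR format does:
# the high 7 bits hold (year - 1900), the low 9 bits hold the day
# of the year. Seven bits top out at 127, so 1900 + 127 = 2027 is
# the last year the format can represent.
year=2027; doy=365
packed=$(( ((year - 1900) << 9) | doy ))
echo "packed=$packed -> year $((1900 + (packed >> 9))), day $((packed & 511))"
```

The day after December 31, 2027 would need a year field of 128, one more than those 7 bits can hold, which is the analog of Unix's 2038 rollover.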
June 19, 2015
Changes Spark Healthy Adaptations
The constant grind of change in the 3000 community -- migrations, the shifting sands of homesteading resources -- may have a positive effect on managers who deal with it. "There is always a future," our ally and contributor Brian Edminster wrote. "It's just not always the future we think we'll have. And that's not always bad, in that it can force us to adapt, to improvise and stretch a bit. These are all signs of a healthy being."
Expanding the use of the HP 3000 in some companies seems outlandish, but it might not be. Not everywhere. In one case we've heard of, new ownership of a division that uses a 3000 offered a chance to extend the use of the 3000, rather than just targeting the system as something to consolidate, maintain, or decommission. The company's mission includes the need to expand in the division's market that the 3000 system was designed for -- and the recognition that their other markets' IT solutions won't work as well as the 3000.
Making that choice involves embracing used servers, and eventually emulated hardware. That's an adaptation of hardware sourcing. Independent support has been available a long time to make the former work, and the virtualized 3000s have been for sale for more than three years by now.
Older and common tools can also get adapted, because with this kind of field experience, practical application trumps strategic platform goals. It can happen at the simplest of levels. You might not expect that Notepad++ could become a 3000-related tool. Edminster tells a story about seeing this happen, though. "Strangely enough," he reports, "Notepad++ is a popular editor in several of the shops I've done work in — primarily because of its versatility and the availability of extensions via plug-ins."
"Its biggest shortcoming is, you guessed it, the hassle of getting at files on the 3000 with it. It has no native connection facility, unlike ProgrammerStudio or Qedit for Windows. At least not without using Samba. And for sites with security concerns, such as those that have to comply with PCI standards, Samba is considered a no-no."
The Notepad++ integration with the 3000 would work with a high-cost item like NSF/iX from Quest/Dell. Of course, "the development servers should be the only ones that need to 'mount' the MPE/iX filesystem, so PC tools can access them," Edminster adds, "so perhaps Samba might work in some environments."
He added that the web_dav module of Apache on the 3000 could also provide a way to make Notepad++ a better player with the 3000. Edminster's expertise includes such open source software. Such web_dav use "could be an easy way to get at files on the 3000 from any Mac or PC application. Making that work would make integrating a 3000 into the world of PC workstation apps so much easier."
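As a rough illustration of that idea, a DAV setup in Apache amounts to a few httpd.conf directives. The directory path and lock-file location below are invented, and whether the DAV module was ever built into the 3000's Apache port is an open question.

```apache
# Hypothetical httpd.conf fragment exposing an MPE/iX directory over
# WebDAV; paths are illustrative, and the DAV module must be present.
DavLockDB /tmp/DavLock
Alias /source "/SYS/PUB"
<Directory "/SYS/PUB">
    Dav On
</Directory>
```

With a fragment like this in place, any WebDAV-capable editor on a Mac or PC could open and save files in that directory over HTTP.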
June 18, 2015
Throwback: A Zealous Emulator Wonder
Five years ago this week, Stromasys announced the launch of its project to emulate the HP 3000's hardware set. Emulation was a quest for many years before 2010, though. The OpenMPE advocacy group was founded on the pursuit of an emulator for 3000s that would not be built after 2003. By 2004, the community was hearing about the timeline for emulator development. It did not promise to be a short journey.
We revisit those days to remind our readers about a time when then-recent 3000 boxes were standing in the way of making a virtualized 3000. Our podcast for this week includes comments from one of the first emulator vendor candidates, as well as the ultimate developer of a product that marks five years on 3000 planning timelines.
Along the way, the trail to making HP's 3000 hardware virtually unneeded followed the hard road HP learned about migrations. More than half the systems that were turned off between 2003 and 2008 went to other vendors, according to one report from an emulator vendor. That period saw Hewlett-Packard lose many customers as they departed the 3000, according to Stromasys Chief Technology Officer Robert Boers.
What's remarkable about the emergence of Charon from Stromasys is the persistent dedication the vendor showed for the concept. It demands patience to be in the world of emulators. In 2004, nobody was even certain about the best release date for an emulator. HP-branded 3000s in that year were still commonplace, and all had falling price tags. By the time Charon made its debut, that hardware had become seven years older, and used systems were commonly more than a decade old. Time has not enhanced the vintage of these systems. An evergreen emulator, first announced five years ago this week, changed all of that.
June 17, 2015
Passwords, MPE, and Security Flaws
Editor's note: in the past 24 hours the world has faced another breach of the LastPass security database, putting hundreds of thousands of passwords at risk. LastPass assures all of its users their passwords are secure after the breach — but change your master password anyway, they add. This makes it a good time to revisit security practices as they relate to the HP 3000 (thanks to Vesoft's Eugene Volokh) as well as our resident security expert Steve Hardwick. Sound advice stays fresh.
More than 30 years ago, VEsoft's Eugene Volokh chronicled the fundamentals of security for 3000 owners trying to protect passwords and user IDs. Much of that advice hasn't changed at all, and the 3000's security by obscurity has helped it evade things like Denial of Service attacks, routinely reported and then plugged for today's Unix-based systems. Consider these 3000 fundamentals from Eugene's Burn Before Reading, hosted on the Adager website.
Logon security is probably the most important component of your security fence. This is because many of the subsequent security devices (e.g. file security) use information that is established at logon time, such as user ID and account name. Thus, we must not only forbid unauthorized users from logging on, but must also ensure that even an authorized user can only log on to his user ID.
If one and only one user is allowed to use a particular user ID, he may be asked to enter some personal information (his mother's maiden name?) when he is initially added to the system, and then be asked that question (or one of a number of such personal questions) every time he logs on. This general method of determining a user's authorizations by what he knows we will call "knowledge security."
Unfortunately, the knowledge security approach, although one of the best available, has one major flaw -- unlike fingerprints, information is easily transferred, be it revealed voluntarily or involuntarily; thus, someone who is not authorized to use a particular user id may nonetheless find out the user's password. You may say: "Well, we change the passwords every month, so that's not a problem." The very fact that you have to change the passwords every month means that they tend to get out through the grapevine! A good security system does not need to be redone every month, especially since that would mean that -- at least toward the end of the month -- the system is already rather shaky and subject to penetration.
There's a broader range of techniques to store passwords securely, especially important for the 3000 owner who's moving to more popular, less secured IT like cloud computing. We've asked a security pro who manages the pre-payment systems at Oxygen Financial to share these practices for that woolier world out there beyond MPE and the 3000.
By Steve Hardwick, CISSP
There has been a lot in the news recently about password theft and hacking into email accounts. Everything needs a password to access it. One of the side effects of the cloud is the need to separate information from the various users that access a centrally located service. In the case where I have data on my PC, I can create one single password that controls access to all of the apps that reside on the drive, plus all of the associated data. There is a one-to-one physical relationship between the owner and the physical machine that hosts the information. This allows a simpler mechanism to validate the user. In the cloud world it is not as easy. There is no longer a physical relationship with the user. In fact, a user may be accessing several different physical locations when running applications or accessing information. This has led to a dramatic increase in the number of passwords and authentication methods that are in use.
I just did a count of my usernames and passwords, and I have 37 different accounts (most with unique usernames and passwords). Plus there are several sites where I use the same username and password combinations. You may ask why some are unique and why some are shared. The answer is based on the risk of a username or password being compromised. If I consider an account to have a high value, with a high degree of loss or impact if hacked, then it gets a unique username and password.
Email accounts are a good example. I have a unique username and password for each of my five email accounts. However, I do have one email account that is reserved solely for providing a username for other types of access. When I go to a site that requires an email address to set up an account, that is the one I use. Plus, I am not always selecting a unique password. The assumption is that if that username and password is stolen, the other places it can be used are only other website access accounts of low value. I also have a second email account that I use to set up more sensitive access, Google Drive for example. This allows me to limit the damage if one of the accounts is compromised, so I don't end up with a daisy chain of hacked accounts.
So the next question is how do you go about generating a bunch of passwords? One easy way is to go into your favorite search engine and type in password generator. You will get a fairly good list of applications that you can use to generate medium to strong passwords. But what if you don't want to download an application -- what is another way?
When I used to teach security, this was one trick I would share with my students. Write a list of four or five short words that are easy to remember; since my first name is Steve, we can use that. Then think of four or five short numbers, 4-5 digits in length, such as 1999. Now pick a word and number combination and intersperse the digits among the letters: S1t9e9v9e would be the result of Steve and 1999. Longer words and longer numbers make stronger passwords; phone numbers and last names work well. With 5 words and 5 numbers you get 25 passwords. One nice benefit of this approach comes when you need to change your password: write the number backwards and merge the word and number back together.
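The interleaving trick is easy to mechanize. Here is a minimal Python sketch; the function name and the way leftover characters are handled are assumptions for illustration, not part of Hardwick's original advice:

```python
def interleave(word, number):
    """Intersperse a word's letters with a number's digits.

    Characters left over from the longer input are appended at the end,
    so "Steve" plus 1999 becomes "S1t9e9v9e".
    """
    digits = str(number)
    out = []
    for i in range(max(len(word), len(digits))):
        if i < len(word):
            out.append(word[i])
        if i < len(digits):
            out.append(digits[i])
    return "".join(out)

print(interleave("Steve", 1999))             # S1t9e9v9e
# The rotation step: reverse the digits and merge again.
print(interleave("Steve", str(1999)[::-1]))  # S9t9e9v1e
```

Five words crossed with five numbers yields the 25 combinations mentioned above, and reversing the digits gives each one a ready-made replacement.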
Once you have created good passwords, your next challenge is how to remember them all. Some of the passwords I use I tend to remember due to repetitive use. The password for logging into my system is one I tend to remember, even though it is 11 characters long. But many of my passwords I use infrequently -- my router's, for example -- and many sites offer a "remember me" function when I log on.
What happens when I want to recall one of these? Well, the first thing is not to write them down unless you absolutely have to. You would be amazed how many times I have seen someone's password taped to the underside of their laptop. A better option is to store them on your machine. How do you do that securely? Well, there are several ways.
One easy way is to use a password vault or password manager. This creates a single encrypted file that you can access with a single username and password. Username and password combinations can then be entered into the password vault application together with their corresponding accounts. The big advantage is that it is now easy to retrieve all of the access data with one username and password.
The one flaw: what happens if the drive crashes that contains the vault application and data? If you wanted to get started with a password vault application, InfoWorld offered a good article that compares some leading products.
Another option is to roll your own vault service. Create a text file and enter all of your account/username/password combinations. Once you are done, obtain some encryption technology. There are open source products -- TrueCrypt is the leader -- or you can use the encryption built into your OS. The advantage of using open source is that it runs on multiple operating systems. Encrypt the text file using your software. Take caution not to use the default file name the application gives you, as it will be based on your text file's name.
Once you have created your encrypted file from the text file, open the text file again. Select all the text in the file and delete it. Then copy a large block of text into the file -- more than you had with the passwords -- and save it. Then delete the file. This will make sure the text file cannot easily be recovered. If you know how to securely delete the file, do that instead. Now you can store the encrypted password file in a remote location: cloud storage, another computer, a USB drive, etc. You will then have a copy of your password file you can recover should you lose access to the one on your main machine.
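The overwrite-then-delete step can be sketched in Python. This is a best-effort illustration only: `scrub_plaintext` is a hypothetical helper name, and on journaling or SSD-backed filesystems only a dedicated secure-delete tool can guarantee the old blocks are truly gone:

```python
import os

def scrub_plaintext(path, multiplier=2):
    """Overwrite a plaintext file with filler bytes, then delete it.

    Mirrors the advice above: replace the contents with a larger block
    of unrelated data than the passwords occupied, flush it to disk,
    and only then remove the file.
    """
    size = os.path.getsize(path)
    filler = b"X" * max(size * multiplier, 1024)
    with open(path, "wb") as f:
        f.write(filler)
        f.flush()
        os.fsync(f.fileno())  # push the overwrite to disk before deleting
    os.remove(path)
```

Run this on the original text file only after the encrypted copy has been made and verified, then park the encrypted copy in cloud storage or on a USB drive.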
Now, if you wonder why you might want to avoid encryption, consider how it looks to an attacker. Most programs use specific file extensions for their encrypted files. When auditing, the first thing I would look for is files with encryption extensions. I would then look for any files that were similar in size or name to see if I could discover the source. This includes looking through the deleted file history.
The other option is steganography, or stego for short. The simple definition is the ability to bury information inside other data -- pictures, for example. Rather than give a detailed description of the technology here, take a look at the Wikipedia page. There is also a page with some stego tools on it. For a long time my work laptop had a screen saver that contained all my passwords. I am thinking of putting a picture up on Facebook next.
Here are a few simple rules on handling multiple passwords:
1. Try to use unique usernames and passwords for sensitive accounts. You can use the same username and password combination for low-sensitivity accounts.
2. Run through an exercise and ask yourself: what happens if this account is hacked? Then don't use the same username and password for everything.
3. Do not write down your passwords to store them.
4. Make sure you have a secure backup copy of your passwords; use encryption or steganography.
If you want to do some extra credit reading on passwords, there are two good references out there, and they are free. The National Institute of Standards and Technology has a library on security topics that is used by the federal government, including a good publication on passwords.
The SP 800-118 DRAFT Guide to Enterprise Password Management focuses on topics such as defining password policy requirements and selecting centralized and local password management solutions.
Steve Hardwick is the Product Manager at Oxygen Financial, which offers advanced payment management solutions. He has over 20 years of worldwide technology experience. He was also a CISSP instructor with Global Knowledge for three years and held security positions at several companies.
June 16, 2015
Migrating Like Mercury, or NoSQL Is Plenty
More than a decade ago, database advocate Wirt Atmar said that "killing the HP 3000 was a little bit like hitting a drop of mercury with a hammer; it caused the drops to squirt out in every direction, with people migrating every which way to a whole host of new systems and new databases." The newest databases of that decade were modernized iterations of SQL, like MySQL and Postgres. In our current era, however, the schemas of Structured Query Language data management have begun to turn into a liability. What were once touted as an advantage over IMAGE (at least until IMAGE acquired SQL queries to become IMAGE/SQL) are now being viewed as not fluid enough.
The reason lies in how much we track today. Billion-record databases are not uncommon anymore. Establishing a query structure that remains in place for every search is slower than devising the best one on every search. That's the promise that NoSQL and its cousin file system Hadoop offer. When data leaps into the realm of the Internet of Things and tracks instances as small as light bulb blowouts, then database technology like SQL devised in the 1980s, no matter how much it's updated, won't be able to keep up.
SQL will be replaced with NoSQL, once the messiness of data becomes the norm. Oracle and PostgreSQL and MS SQL rule today. Even Microsoft Access has a ready enterprise base, as simple as its structure is. But data is growing fast enough to become BigData. And the HP 3000 community that has migrated, or soon will, is going to look for newer data structures and tools to send its SQL data into NoSQL's schemas.
MB Foster is working to be this kind of tool provider. Tomorrow, June 17, the company will demonstrate how its UDACentral product moves the data today. The aim for versions in the years to come is support for BigData's tools of NoSQL and Hadoop. The demonstration covers the latest workflow for UDACentral, but the questions the company asks go beyond a tour of this year's features. What are the changes to prepare for over the next few years in data integration requirements and workflows? NoSQL, and especially Hadoop, designed for managing BigData, tend to be championed by executives at the C-level of companies.
That's a situation that will feel a lot like the push toward Open Systems in the early '90s. Less technical leaders will start asking for a tool that promises the most. It will be up to the technical managers to deliver, starting with the data they've got on hand today in SQL databases.
Looking ahead, even while increasing today's feature set, is an attribute of a vendor dedicated to the future of data. The HP 3000 was built around the intrinsic combination of file system and database management. The managers of these systems understand the exponential value that such a combination provides. For its time, the HP 3000 never had a rival that blended files and data so efficiently.
More than a decade ago Atmar's company AICS Research looked ahead to supporting "PostgreSQL on Linux, SQLServer on NT, Oracle on IBM, based on whatever migration patterns we see in the user community. Ultimately, it is our intention to support every common combination of host operating systems and databases."
A data extraction and loading tool makes that kind of support more than just a future requirement. Any tool like that — with a track record of embracing one database structure after another — can support the growth of data into BigData. The concept of BigData seems like a buzzword. So did cloud computing. And all it takes is one corporate acquisition for a BigData IT shop to develop a need to ingest data from something as relatively modest as an IMAGE/SQL database.
NoSQL and Hadoop offer another benefit, one that will resonate with HP 3000 managers. They're open source solutions, even if the technology is packaged by vendors like Cloudera or MongoDB. After a forced migration from the HP 3000 and IMAGE/SQL, migrators won't have to remain lashed to a proprietary data schema like Oracle's or Microsoft's. The BigData solutions will, as Atmar put it years ago, "be providing users with significantly more protection against vendor abandonment than they've had in the past."
June 15, 2015
ERP floats changes for classic models
Since the HP 3000 got popular in the 1970s, manufacturing has always claimed a majority share of its business use. MANMAN and the work of ASK led the new minicomputer into major corporations and thriving manufacturers. To this day, that software runs operations in places like Rockwell Collins, Calsonic, and Amatek Chandler. But the day of changes to classic ERP is coming. One of the things that's sparking it is the regularity of change.
Terry Floyd of the Support Group, which provides app support for companies using MANMAN and other ERP software, updated us on the use of alternatives to MANMAN. With a package as comprehensive as that suite, companies have to be cautious when replacing it. "Things have changed," he said. "The new stuff is NetSuite, Workday, Plex, and Kenandy, and a dozen others." It's a lot better than Microsoft Dynamics, a solution we reported on earlier. The trend is illustrated in the chart above (click for detail).
And among the changes taking place today is adoption of cloud ERP.
Kenandy says it is making headway because it's more flexible and responsive to change in business than the classic ERP platforms. Cloud-based ERP is becoming a replacement choice because its fluid design can be responsive when business grows.
Companies running a legacy ERP system -- those are the MANMAN sites -- have to factor in the cost, time and effort of scaling a system to respond to new business requirements. Some of these MANMAN customers are being acquired, since they're efficient enterprises. "What happens when you open a new facility or attempt to integrate a new acquisition?" Kenandy asks in a white paper.
As a small company running on a combination of business applications, what happens when your business expands? Can you easily integrate new business lines? Can your systems easily adapt to new processes? What happens when you decide to scale and develop a global presence? Do the applications support multiple sites, multiple currencies and multiple languages? Moving to a cloud ERP solution allows you to easily scale across all these dimensions.
In addition to the costs of integrating with an acquisition or a new ownership partner -- additional hardware, ancillary software, and maintenance -- there's the time needed to scale.
Any organization will go through the process of capital planning, hardware and software procurement, installation, and on-boarding — a process that could take weeks or months. Moving to a cloud ERP solution allows you to bypass much of this effort and scale more quickly. New users, new sites or new acquisitions can be on-boarded and then go live in a matter of days.
Floyd's company broke ground for MANMAN alternatives with the IFS ERP package. Kenandy, which traces its roots back to the ASK founders, is a point on the next horizon for ERP replacement.
June 12, 2015
Find hardware specs, move DTCs, and more
Is there a command or way to see the hardware specs of a HP 3000 via MPE or its installed utilities? This machine has no other utilities, like MPEX. I am looking to document the processors, memory, number of hard drives, and size of those drives.
Jack Connor replies
Depending on MPE/iX version, you can use SYSDIAG for 6.0 and older or CSTM for 6.5 and later. In SYSDIAG, type SYSMAP, then IOMAP, and GENERAL for the IO components, then exit and go to CPUMAP for the CPU info.
In CSTM, type MAP, then SELECT ALL, then INFO, then IL (InfoLog) to get a listing of everything that MPE owns.
I don't work that much with COBOL these days, but I wanted to compile a program and I got an error message: "size of data segment greater than 1 gig or 64 bytes". How do I get around this?
Steve Cooper replies
That means that the total space you asked for in your Working Storage Section is more than 1 GB. Now, there are ways to work around that, but my guess is that you don't need to work around that. My guess is there is a typo or some other unintended problem, where you are asking for way more storage than you intended. Check your OCCURS clauses and PICs to make sure you mean what they say.
We have to move a DTC into our network. Along the way there are ProCurve switches and a Cisco router or two. I know that somehow the switches and routers must be configured to allow multicasting on addresses 09-00-09-xx-xx-xx to be forwarded and not filtered, but our ProCurve administrators aren't quite sure exactly how to do this. What is ProCurve-ese for configuring what's necessary to allow remote DTC operation across our network?
Jeff Kell replied
You don't have to do anything at layer-2. The layer-3 will have to bridge the traffic. If the layer-3's are Cisco, you can specify the traffic to bridge, in which case you just want the [08|09]-00-09-x-x-x traffic.
The 09-00-09 is multicast used for discovery, but beyond that, you're going to get some directed layer-2 08-00-09 as well. You will need to include your HP's DTSLINK NIC MAC prefix as well if it is a later model that does not have the old HP 08-00-09 prefix.
If you own the whole infrastructure path, what we used to do is run L2 trunks between sites, and propagate a common vlan for the DTC/HP traffic, while routing everything else.
Or if the Ciscos are real routers rather than Catalyst-style switches, you can run a L2TP tunnel end-to-end to propagate the HP vlan across the routed hops.
I need to rebuild an environment from one HP 3000 system to another. Trouble is, we want to have groups from the same account end up on different user volumes. Is there a way to do this using BULDACCT?
Keven Miller replies
BULDACCT was made for processing complete accounts. Do BULDACCT CHC%VSACCT=MEDADV_1. Then edit BULDJOB1 for the other group, changing MEDADV_1 to MEDADV_2.
Mark Ranft adds
What I usually do is use BULDACCT to move the entire accounting structure. Then I surgically PURGEGROUP and NEWGROUP (with the appropriate HOMEVS= and ONVS= options, plus CAP= and ACCESS= etc) to duplicate the special groups.
I'm having trouble locating TurboIMAGE limits. What's the max number of items in a dataset?
Paul Edwards replies
From HP’s website:
The objective of this TurboIMAGE enhancement is to raise the limits on number of items, sets and master paths in a TurboIMAGE database to these values:
Number of items: 1200
Number of sets per database: 240
Number of paths for master: 64
All new databases created will have the benefit of increased limits. The new limits will not be applicable for old databases created under previous versions of TurboIMAGE. However, these databases will continue to function with the old limits and applications will be compatible.
NewWave was once Poe-tic to some
Our NewWave article yesterday seemed to limit the impact of NewWave's design to a new GUI and some object oriented computing, but HP intended much more for it. Alexander Volokh of the Volokh enterprise — also known as Sasha — even penned a poem in 1988 to celebrate the networked environment that would only last until Windows 95 was released. [Tip of the hat to his dad Vladimir, as well as Adager for hosting the poetry on its website.]
NewWave — A Ballad
By Sasha Volokh
Sasha Volokh is the Vice-President of Poetry of VESOFT. He tells us this poem is in the style of "Ulalume -- A Ballad" by Edgar Allan Poe, and offers his apologies to Mr. Poe.
The skies they were shining and lacquered,
And the programmers looked very brave,
Looked confident, happy and brave --
'Twas the day that the firm Hewlett-Packard
Unveiled its great product, New Wave,
Its magnificent product, New Wave.
New Wave worked in conjunction with Windows
(The version two point zero three);
It would function with Microsoft's Windows,
But only two point zero three.
Too long had it stood in the back rows,
For no one had witnessed its might --
For example, its system-wide macros
That could make heavy tasks very light
(It deserved to be brought to the light!)
There were "hot links" between applications
To do many things at a time --
Icons could represent applications
And could save you a whole lot of time.
Here, performance and swiftness were wedded,
Which made integration just right
(And again, HP leads us aright);
In New Wave, ease of use was embedded
To the users' content and delight
For New Wave brought an end to their plight!
Yes, it lit up the sky through the night!
It was written to work on the Vectra
In the language that people call C.
You can even transfer, on the Vectra,
Many programs not written in C.
But alas! the directors of Apple
With evil, not blood, in their veins,
With hate in nefarious veins,
Decided with HP to grapple
And to cause it no end to its pains.
They decided on filing a lawsuit:
They accused it of trying to steal
Their Mac interface -- but Apple's lawsuit
Was just based on "the look and the feel."
"Who cares that the few things we can match,
On the Mac, are the pictures we show,
Are the similar icons we show?
And who cares that our programming language,"
Apple said, "is unbearably slow
('Cause they say that our Hypertalk's slow)?
And we don't care about a third party --
It's just HP's success that we mind!"
Then they laughed, and their laughs sounded hearty,
For they chuckled with evil in mind.
David Packard was in Cupertino
When he figured out what he should do --
For this suit for New Wave wouldn't do.
So he said, "sending us a subpoena
Was an action that Apple will rue.
Yes, its heart should be laden with rue!
Those directors must surely be punished
For defaming the name of HP,
And I'll see to it that they get punished
So beware fearless Dave of HP!"
David Packard consulted his lawyer
And then he got up from his desk,
His expensive, mahogany desk,
And he marched in the courthouse's foyer
Where he said, "This is so Kafkaesque!
I know well that I'm here for a trial,
Yet I don't even know what's the deal!"
Apple said, with no air of denial,
"Look and feel! Look and feel! Look and feel!
'Cause New Wave has the Mac's look and feel!
Now your losses will temper your zeal."
David said that the suit had no merit,
And then he called Apple a louse.
But his rage grew 'till he couldn't bear it,
And he aimed and then clicked with his mouse;
In one swoop, double-clicked with his mouse!
So thus justice prevailed over evil,
And Dave uttered many "So there"'s.
HP workers are still fighting evil
And success and good fortune are theirs.
June 11, 2015
TBT: When NewWave beached on Mail shore
NewWave Mail makes its debut in an effort to give HP 3000 users a reason to use the GUI that was ahead of its time. Apple took the interface seriously enough to sue Hewlett-Packard over similarities. The GUI lasted more than five years in the wild before Microsoft's Win95 emerged.
Twenty-five years ago this summer, the HP 3000 got its first taste of a graphical user interface. NewWave, the avant garde GUI rolled out a year ahead of the Windows 3 release, got a link to HP DeskManager when the vendor pushed out NewWave Mail. Not even the business-focused user base of the HP 3000 — in that year HP's largest business server community — could help a GUI released before its time. Or at least before the time that Microsoft finally made Windows a business default.
NewWave introduced a look and feel that one-upped Apple's GUI of 1990. It seemed a natural product to pair with DeskManager, the mail system so efficient and connectable that HP used it and massive farms of 3000s to link its worldwide employee community. NewWave was developed in HP's Grenoble software labs, not far from the Bristol labs that birthed DeskManager.
During that era, the vendor was looking forward to products more accessible to its customers than a memristor. A concept video called 1995, aired for summertime conference attendees two years earlier, included simulated workstation screen shots of advanced desktop interfaces. NewWave got its first customers in 1989, but uptake from the developer community was slow. PC software makers like Lotus were the target of HP development campaigns. But a NewWave GUI for software as omnipresent as Dbase or 1-2-3 wasn't created by Lotus. Its Ami Pro word processor got a NewWave version, pairing a little-known PC product with HP technology ahead of its time.
HP scored a breakthrough with Object Oriented Computing with NewWave, though, the only vendor of serious size to do so. NeXT was rolling out object-based software a few years later, tech that Apple acquired when Steve Jobs returned to the company he helped to found. Agent-based computing, intended to use work habits of each user, was another aim for NewWave.
For all of those far-reaching concepts, though, NewWave Mail was "totally dependent on HP DeskManager," according to HP's manuals. It was as if a GUI skin were put on the minicomputer-bound HP Desk. Microsoft needed little more than PCs to spark its first useful version of Windows, 3.0.
It wasn't the first summer that Hewlett-Packard got upstaged by Microsoft. Twenty years ago this summer, that year's Interex show raised its curtain while Redmond unfurled the Win95 banner, 300 feet worth literally draped off a tower in Toronto in the week of the show. Win95 grounded NewWave, marking the end of HP's unique R&D into GUIs. I watched an aerial daredevil rappel down the CN Tower that week, one of a half-dozen stunts Microsoft staged in contrast to the laid-back HP marketing. Printer sales made a hit with HP's consumers while the company hoped to capture IT dollars with its Vectra PC line. But not even agent-based OOC software could spark sales like a Windows campaign using the Rolling Stones' Start Me Up lauding Win95's new Start button. Paying $3 million for the rights to use the song, Microsoft tattooed it into our brains -- enough that I played it in a loop while I batted out the first edition of our FlashPaper late-news insert as we launched the NewsWire.
Two decades later, Microsoft has announced it will no longer release versions of Windows. It will simply update the current version, Win10, automatically, having long ago dropped the version names that were linked to a release year. After NewWave, HP made no other efforts to push an in-house R&D project that could offer object oriented computing to developers and business IT users. In the back end of the 1990s the company focused on catching up to Windows business use — after it had been charging into the Unix and NT technologies, hoping to make a splash with businesses.
June 10, 2015
Making Migrations of Data Your Big Tool
Data is the one element that every business computer user has in common. Whether they're still using MPE/iX or never had a single 3000 online, data is what defines a company's profile and mission. Even within the Windows environments that have been so popular for migrating 3000 sites, data must be migrated. The benefits go beyond consolidation and efficiency, too.
Birket Foster checked in with us to catch us up on what he's been showing IT managers for the past year about managing and migrating data. The tool for this kind of project is MB Foster's UDACentral. The software has been the crucial element in the company's services work, both for the 3000 sites on the move as well as companies that have no 3000 history at all. Foster's company does more business all the time with the latter kind of customer, he said.
"Not every 3000 vendor made this leap," he said. "These are becoming a bigger and bigger part of our revenues."
The UDACentral mission is going beyond a tool for MB Foster to use in engagements. The company's now offering it as Software as a Service. It can be rented for the duration of a migration, either of data or systems. On June 17 at 2 PM Eastern, the tool will be demonstrated in a Wednesday Webinar.
Foster said the software has evolved to include an entity relationship mapper, and the migration speed now clocks in at just 8 hours to move 300 million records. "Rows," Foster reminded us, because at one site the SQL term used for them illustrates how IMAGE never ran a day there. Customers are revising data structures when migrating data, in one case consolidating 40 separate databases down to a more efficient number. The Entity-Relationship builder tracks elements like keys, table names and record counts, all of those rows. At one site in the retail world, a migration of data simply wasn't going to finish soon enough to be possible.
"They told us that they had a rogue SQL Server database they wanted to move into Oracle," Foster explained, "and after using their in-house scripts for the migration, it wasn't finishing any faster than 8 days." The customer recognized moving the database, crucial for operations, could only take place on off hours, "and they knew they were never going to get any 8-day weekends. They wanted us to move it for them."
After a couple of days of tech support, it only took 90 minutes using UDACentral to move 80 million records. "We also trained them on our data migration methodology when we trained them on the software," Foster said. His company's already lining up more data migration projects for the fall.
Consolidating databases brings more than efficient management. At one customer site, data was being rekeyed as many as six times across various databases, so introducing errors and creating bad data was a real risk. Getting a Key Performance Indicator dashboard, and perhaps extracting data into a custom data mart, are genuine benefits that come from establishing an up-to-date relationship with your data structures. The biggest benefit might come from using these techniques in a Big Data strategy.
"It's hard to steer your enterprise from the rear view mirror," Foster says. UDACentral will have support for Hadoop, Teradata and NoSQL databases in its next version, the kind of data resources used to develop forward-looking views of what customer transactions can predict in key performance.
"We're trying to help people master how to do Big Data," Foster said. Smaller companies can have this within reach now. "They'll have something to grab trends from their data, because there are patterns in there. Even moving from legacy systems, you can see better from a newer dashboard." That's the long-term benefit of being able to easily migrate, integrate and move data, he says.
June 09, 2015
What to Expect Out Of a Free Emulator
Emulation has been in the toolset of HP 3000 users for decades. It began with emulation of HP's hardware, yes, but it was the hundreds of thousands of HP terminals that were soon replicated in software. Just like with the Stromasys product to mimic 3000 CPU work, terminal emulators like those from Minisoft and WRQ virtualized hardware using Intel-based PCs.
Early in this century, even those emulators received some tribute: the first high-functionality 3000 terminal emulator distributed as freeware. But can you make that QCTerm software do the work of a Reflection, or MS/92? We asked Brian Edminster, curator of the open source repository MPE-OpenSource.org. An early adopter of QCTerm who worked to beta test the early versions, he says he uses the latest version and compared it to Reflection's V. 14.
"QCTerm has a number of things to recommend it," he said. "It's fast, and it's free. In addition to regular Telnet, it also supports Advanced Telnet — which can reduce bandwidth use and feels more responsive over a slow connection, because it works more like NS-VT."
Edminster says that QCTerm is simpler than Reflection, and acts more like a cross between a browser and conventional Windows program. But he notes that there are some drawbacks, too, such as the lack of support for the software.
"It also doesn't do NS-VT," he said, "which is not really a problem, since Telnet and Advanced Telnet are available for all late-model versions of MPE/iX. It is also less sophisticated than Reflection -- not as configurable, no file-transfer ability, and has no 'programmatic' interface."
Another downside for this free emulator is that it won't accommodate using the vi editor together with Advanced Telnet. But the list of technology that QCTerm can employ is thorough.

Edminster says he uses QCTerm with QEdit, QUAD, VPlus and Formspec, on Windows XP, 7, and 8 PCs. "In general, anything that would work on a 700/92 terminal works with QCTerm, including NMMGR.PUB.SYS, which is notoriously picky. I've heard that it'll work under WINE on Linux systems as well. Come to think of it, I can't recall anything that a CRT would do that QCTerm couldn't, but I'm sure somebody will prove me wrong someday."
The freeware was not intended to be an exact work-alike for a 700/92 terminal, nor was it designed to work like Reflection either. The documentation "makes it clear that QCTerm was intended to be something different, but better," Edminster says. "I think that for the most part, it hit the mark, even though the QCForms feature was never fully realized."
There is also a little-known basic scripting language for QCTerm. Unlike Reflection's scripting, QCTerm's commands are really only useful for automating connections and logins. It allows you to set up a script containing connection and login commands in a text file on the PC. This can then appear as a clickable icon on your desktop that can start QCTerm. "It will either dial a modem or make a network connection, then navigate the login process," Edminster said. "I use this quite often now."
A page of the documentation includes instructions on how to utilize this Autolaunch Scripting, Edminster reported.
It's much simpler than what Reflection can do, but for most vanilla access to your MPE/iX based applications (that is, if your application doesn't expect Reflection) the scripting should work just fine. I'd urge testing it in your own environment with your own applications and tools before assuming that you can ditch other terminal programs.
One of the applications that I support actually has dependencies for Reflection coded into it (mostly to programmatically automate file-transfers). But aside from that specific functionality, QCTerm works like a champ.
June 08, 2015
In 20th year, NewsWire digital turns 10 today
A decade ago today, this blog received its first post. On June 8 of 2005, a death in the 3000's family was in the news. Bruce Toback, creator of several 3000 software products and a man whose intellect was as sharp as his wit, died as suddenly as HP's futures for the HP 3000 did. I wrote a brief tribute, because Toback's writing on the 3000-L made him a popular source of information. His posts signed off with Edna St. Vincent Millay's poem about a candle with both ends alight, which made it burn so bright.
I always thought of Bruce as having bright ends of technical prowess along with a smart cynicism that couldn't help but spark a chuckle. His programming lies at the heart of Formation, a ROC Software product which Bruce created for Tymlabs, an extraordinary HP software company here in Austin during the 1980s and early '90s. Toback could demonstrate a sharp wit as well as trenchant insight. From one of his messages in 2004:
HP engineer [about a Webcast to encourage migration]: During the program, we will discuss the value and benefits of Transitioning from the HP e3000 platform to Microsoft's .NET.
Bruce: Oh... a very short program, then.
In the same way Toback's candle burned at both ends, I think of this blog as the second light we fired up, a decade after the fire of the NewsWire's launch. Up to this year we burned them both. Now the blog, with its more than 2,600 articles and almost 400,000 pageviews, holds up the light for those who remain, and lights the way for those who are going. This entry is a thank-you for a decade of the opportunity to blog about the present, the future, and the past.
We always knew we had to do more than give the community a place to connect and read what they believed. We're supposed to carry forward what they know. The NewsWire in all of its forms, printed and digital, is celebrating its 20th year here in 2015. A decade ago our June 2005 blogging included a revival of news that's 20 years old by now, news that can still have an impact on running a 3000 today.

In the blog's first month of 2005, I wrote
"HP 3000 enhancements can travel like distant starlight: They sometimes take years to show up on customer systems. A good example is jumbo datasets for the 3000's database. Jumbos, the 3000's best tool for supporting datasets bigger than 4GB, first surfaced out of HP's labs in 1995, just when the NewsWire was emerging. We put our news online in the months before we'd committed to print, and our report of September 1 had this to say."
HP will make the enhancement available as part of its patch system, bypassing the delay of waiting for another full release of MPE/iX. But there are already discussions from the HP 3000 community that a more thorough change will be needed before long — because 40-gigabyte datasets someday might not be large enough, either.
"Why care about 20- or 10-year-old news? Because the 3000 has such a long lifespan where it's permitted to keep serving. In the conservative timeline of 3000 management, jumbos were the distant starlight, only becoming commonplace on 3000s a decade later. Jumbos are finally going to get eclipsed by LargeFile datasets. HP's engineers say their alpha testing to fix a critical bug in LFDS is going well."
"Like the jumbos before them, LFDS are also going to get a slow embrace. How slowly did jumbos go into production systems? Five years after jumbos first emerged, John Burke wrote in our net.digest column "it is hard to tell about the penetration of jumbo datasets in the user community beyond users of the Amisys application." His column also offered some tips on using jumbos, even while database experts in the community continued to lobby for a way to build larger files."
That reporting in 2005 marked the first time in a decade that 3000 customers could build a dataset as big as they needed. Up until then, LFDS had not been recommended for 3000 customers except in experimental implementations.
The nature of the 3000 community's starlight made a 10-year-old enhancement like jumbos current and vital. Alfredo Rego of Adager once said that his database software was designed like a satellite, something that might be traveling for decades or more and need the reliability of spacecraft to go beyond the reach of support transmissions. HP's signal for 3000s has died by now. We hope to repeat signals, as well as report, for more than another decade, onto the cusp of MPE's calendar reset of 2027. Thanks for receiving these transmissions.
June 05, 2015
Plan B: Stay on the HP 3000 to 2027?
Could you really stay on the HP 3000 through 2027? What follows is a classic strategy for 3000 owners. Wirt Atmar of AICS Research wrote the following column in the months after HP's 3000 exit announcement. The article is offline for the moment, so I thought we'd put it here as a reference document for any IT manager who's trying to defend the case for remaining on their HP hardware a few more years. When Atmar passed away in 2007 the community lost a dynamic advocate for MPE computing. His company eventually migrated its QueryCalc application for IMAGE reporting to Windows. But not before he organized advocacy like the World's Largest Poster Project, at left. Few 3000 experts did more for MPE owners than Atmar — including thinking outside of HP's box.
Plan B: Staying on the HP 3000 Indefinitely
By Wirt Atmar
Hewlett-Packard and a few others are stating that staying on the HP 3000 for the long term is your least desirable option, the one that puts you at the greatest risk. Let me argue here that remaining on the HP 3000 is not likely to be all that much of a risk, at least for the next 25 years. It will certainly be your least expensive option and the one that will provide you with the greatest protection for your current investment in software and business procedures.
AICS Research, Inc. wholly and enthusiastically supports the evolution of an HP 3000 MPE emulator, another path that has been described as "risky." But there's nothing risky at all about the option, should HP give its blessing to the project. It is technically feasible and completely doable. Indeed, the emulator actually offers the very real possibility of greatly expanding MPE's user base. However, staying on the HP3000 does not require HP's blessing. It's something you can decide to do by yourself. And should you decide later to move off of the HP 3000, you've really lost nothing in the interim. Indeed, you've gained time to think about what is best in your circumstances.
A part of calculating your "risk" is really nothing more than sitting back and determining what part of the computer market is rapidly evolving and which part is more or less stable.
The HP 3000 is well-known for its qualities: a very nice CI scripting language, a very robust job scheduler, an extremely stable and scalable database, and its simple, English-like commands. Beyond that, we have also been lucky that the HP e3000 has also recently had put into it several standards-based attributes: network-based IP addressable printing, telnet and FTP, and all of these qualities are now very stable.
But all of the other processes of modern computing, the material encompassed by POSIX (Java, Samba, Apache, bind, DNS, etc.) are the qualities that are rapidly evolving. And none of these need to be on the HP 3000. In fact, you're probably better off if they weren't on the platform.
The picture at left is of a $450, 128MB, 900MHz, 30GB Dell server running Red Hat Linux and a used, unlimited-number-of-users, 128MB, 8GB Series 927 we bought from a customer for $200. Because of HP's announcement, some fraction of users, undoubtedly greater than 50%, are going to move off of the HP 3000. What this migration is going to do is provide a glut of hardware on the market in the next several years that is simply going to be unbelievably inexpensive, and there's no reason that you shouldn't take advantage of the situation.
You can actually telnet to this 927 by logging onto 126.96.36.199 and typing:
Once there, you can then telnet from the HP 3000 to the little Dell server by typing:
And that's very much the point. The telnet and FTP standards are now very stable. Almost no change is going to occur in these standards in the next quarter-century. Fortunately both the HP 3000 and Linux have them deeply embedded in their structure now. Because of that, you can very readily append Linux and Windows processes onto your HP3000 as auxiliary cheap external boxes. Using the FTP site command, the HP3000 can easily operate as a master controller of any number of external Linux and Windows machines.
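Atmar doesn't reproduce his MPE job streams, but the controller pattern he describes (push a file to the auxiliary box, then ask its FTP server to act on it via the SITE command) can be sketched with Python's standard ftplib. The host, credentials, and the DISTILL verb below are all hypothetical; what a SITE command actually accepts is defined entirely by the receiving server:

```python
from ftplib import FTP
from io import BytesIO

def push_and_run(host, user, password, payload, remote_name, site_cmd):
    """Upload a file to an auxiliary box, then fire a server-side command.

    This mirrors the master-controller idea: the primary system drives
    work on a cheap external server using nothing but stable, decades-old
    FTP verbs. All connection details here are invented for illustration.
    """
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary(f"STOR {remote_name}", BytesIO(payload))
        # SITE passes a server-defined command to the remote host.
        return ftp.sendcmd(f"SITE {site_cmd}")

def plan(remote_name, site_cmd):
    """Compose the same command sequence without a live server."""
    return [f"STOR {remote_name}", f"SITE {site_cmd}"]

print(plan("report.ps", "DISTILL report.ps"))
# ['STOR report.ps', 'SITE DISTILL report.ps']
```

Because only two FTP verbs are involved, the controlling side never needs to know what operating system the auxiliary box runs, which is exactly the insulation Atmar is arguing for.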
It is our intention to move our web pages up on the Linux box. It is undeniable that Linux makes a fine webserver. But on the other hand, it is equally undeniable that the HP 3000 is a very nice database platform. Using HP3000 scripts and jobs, it is very easy to transfer files to and from the Linux box, constantly updating web pages as need be from data held in your HP3000's databases.
Most of the applications on the HP3000 are quite old and very stable. If the more modern -- and therefore much less mature -- applications such as web and file serving are put onto the Linux box, such auxiliary Linux platforms can fail without impacting the HP3000 at all, other than perhaps holding open the one or two processes that might be waiting for a reply. But even if that should prove to be true, all that these processes should do is hang until the Linux boxes are resurrected. They certainly will not crash the HP3000.
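The failure behavior Atmar describes, where a job simply waits until the auxiliary box is resurrected rather than crashing, amounts to a patient retry loop around each remote operation. A minimal sketch, with the retry counts and delays invented for illustration:

```python
import time

def call_with_patience(fn, retries=5, delay=2.0):
    """Keep retrying an auxiliary-box operation until it succeeds.

    If the cheap external server is down, the controlling job just
    sleeps and tries again; nothing on the primary system crashes.
    """
    last = None
    for _ in range(retries):
        try:
            return fn()
        except OSError as exc:  # covers ConnectionError, timeouts
            last = exc
            time.sleep(delay)
    raise RuntimeError("auxiliary box never came back") from last

# Simulate a box that fails twice before coming back online.
outcomes = iter([ConnectionError, ConnectionError, "ok"])

def fake_box():
    v = next(outcomes)
    if v == "ok":
        return v
    raise v()

result = call_with_patience(fake_box, delay=0.01)
print(result)  # ok
```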
There are fewer parts in a modern computer than most people imagine: a power supply, a few circuit boards, a few disc drives and a backup device, generally something like a DDS or DLT tape drive. But beyond that, they're hardly anything else but sheet metal.
One of our HP3000's, the 918 in the picture at left, originally came with two 4GB drives mounted internally. One of the drives failed, as that particular series of 4GB drives that HP supplied had a tendency to do. Access to the drives is merely a matter of unscrewing two screws at the base of the faceplate, lifting the faceplate away, and pulling the disc drive cage out a bit from the central case.
To replace the drive, all I did was unplug the power cables and the ribbon cable from the defective drive inside the cage. Otherwise, I left the drive mounted where it was. I then ordered an extremely inexpensive, external SCSI-connected LaCie drive from APS that was designed to work on PC's or Mac's and plugged it into the SCSI port at the back of the HP3000, giving it the same SCSI address as the dead drive [I prefer SCSI-connected external drives, even though they're a bit more expensive, simply because they're so much easier to replace if the time comes again to do so]. I wasn't able to order an exact replacement 4GB drive. The smallest, cheap external $250 drive that I was able to order was 18GB, and that was from the "legacy" series. Nonetheless, it booted instantly.
How stable is this sort of repair process likely to be over the next 25 years? SCSI is SCSI. We were an early and very enthusiastic adopter of Macintoshes when they first appeared in 1984, and this 18GB drive could have been just as easily connected to one of our 1985 Mac Pluses as it was to the 1999 HP 3000. Although there are other external bus structures in existence (USB, Firewire, optical, etc.), SCSI is likely to be approximately as common 25 years from now as it is currently. But even if it were supplanted by some other bus structure, you can reasonably be assured that bus convertor boxes will be available. While there is likely to be a great deal of evolution in peripheral devices over the next quarter century, SCSI frees you to be able to accept that evolution rather easily.
Can you really operate a business on 25-year-old hardware, 25 years from now? We do it here with our Macintoshes. Because we were an early adopter of the Macs, and because Apple has not attempted to maintain backwards compatibility in its lines, we were orphaned within just a few years of adopting the Macs. Our initial enthusiasm for the Mac caused us to put 5,000 pages of company documentation on the machines. Unfortunately, the very next series of Macintoshes, the PowerPC's, would not run our software and thus we were constrained to keeping our original Mac Pluses alive forever.
Although Apple has made the Mac line incompatible within itself several times since, none of these more recent incompatibilities bother us, because we were stuck on the very first generation of Macs. When the Mac Pluses and Mac Classics began to become obsolete, we bought 10 spare machines from the local high schools for almost no money at all. These spares are now stuffed in every nook, cranny and closet, but so far, they haven't proven to be necessary. Although the original Macintoshes were never made nor advertised to be rock-solid, reliable devices, so far they've held up to 17 years worth of daily use.
And that too is simply the nature of electronic devices nowadays. Mechanical devices (discs, tape drives, keyboards) may fail, but the electronic circuits could easily run for several hundred years without much maintenance.
Pictured at left is a third small HP 3000 that we run, another 918. However that's not the device of interest in this picture. Rather the machine of importance is the small $400 e-machine PC in the center of the image.
Adobe Acrobat Distiller is the program that converts PostScript files into the PDF format that's become very popular on the web. Beginning about five years ago, for a period of two years, I spoke to everyone I could at HP and Adobe about porting Distiller over onto the HP3000, but I was able to make absolutely no headway with anyone. No one was interested. Even more frustrating, because POSIX is not UNIX, the UNIX version of the Acrobat distiller would not run on the HP3000 as it was, even though it was certified for HP-UX.
One day, in an epiphany not unlike Saul's conversion on the road to Damascus, it simply dawned on me that I didn't need to keep beating my head on the wall. Rather, I could purchase the very inexpensive Windows-based version of Acrobat and FTP my files from the HP3000 down into a PC. The process worked so well that I have now become a very enthusiastic advocate of not porting material onto the HP3000 directly. Rather I now argue that it's best to run the programs on the platform for which they were designed and control them from the HP3000. Indeed, doing this insulates and protects the HP3000 in two ways: one is from random software bugs, the second is from obsolescence.
In the arrangement we now use, a standard, simple HP3000 job runs our QueryCalc reports and prints their PostScript output to MPE flat files. As a second step in the jobs, the ASCII flat files are FTP'ed down into the e-machine, into an Acrobat "watched" folder, adding the file extension ".ps" onto the file as an intrinsic part of the transfer. The PC is set up so that when a ".ps" file appears in the watch folder, Acrobat automatically converts it into PDF, moving it to a pre-specified output folder. Although the distillation process generally takes less than a second, we have our HP3000 jobs wait 10 seconds before they retrieve the newly-converted PDF files and move them back onto the HP3000. Once the new files are back on the HP3000, they're FTP'ed to a third server, our webserver in Minneapolis, MN, inside the same job. It's all surprisingly very simple, very straightforward and very efficiently done.
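The watched-folder half of that pipeline is simple enough to sketch. This is not Acrobat's code, just an illustration of the polling pattern Atmar relies on: files landing with a .ps extension get converted, and the output is dropped into a second folder for the controlling job to retrieve. The dummy converter and file names are hypothetical:

```python
import tempfile
from pathlib import Path

def sweep_watch_folder(watch: Path, out: Path, convert) -> int:
    """One polling pass over a 'watched' folder, Distiller-style.

    Each *.ps file found is converted, the result written to `out`,
    and the input consumed, imitating Acrobat's watched-folder flow.
    `convert` stands in for the real distilling step.
    """
    done = 0
    for ps in sorted(watch.glob("*.ps")):
        pdf_bytes = convert(ps.read_bytes())
        (out / (ps.stem + ".pdf")).write_bytes(pdf_bytes)
        ps.unlink()  # consume the input file
        done += 1
    return done

# Demo with a stub converter; a real deployment polls in a loop
# (Atmar's jobs simply waited about 10 seconds before fetching).
watch = Path(tempfile.mkdtemp())
out = Path(tempfile.mkdtemp())
(watch / "report.ps").write_bytes(b"%!PS-Adobe-3.0 ...")

n = sweep_watch_folder(watch, out, convert=lambda b: b"%PDF-1.4 stub")
print(n, [p.name for p in out.iterdir()])  # 1 ['report.pdf']
```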
Because virtually any process on a Linux/UNIX or Windows machine can be controlled in this manner, there's essentially no reason to port anything to the HP3000 nowadays. But just as importantly, this simple observation makes the current version of MPE nearly obsolescence-proof. Even more than SCSI, FTP and telnet, because they are now nearly ubiquitous 30-year-old standards, are going to look the same in 25 years as they do now. They cannot be changed.
Hardware ages, but software doesn't. It is essentially immortal. But can you run 25-year-old software 25 years from now, especially if no one is "maintaining" it? What does maintenance mean? To a great degree, it means keeping up with the evolving standards, not fixing bugs. But what would you really want to change on your HP3000? Your code works now. It will work just as well a quarter-century from now.
The little Linux box in the topmost picture is set to dial back to Red Hat every evening, check for updates, and apply them automatically, if need be. Doing this is necessary at the moment because of the rapid evolution attendant to trying to make Linux a mission-critical operating system, and it will be that way for the next five years or so. But there's virtually nothing that really needs to be fixed on the HP 3000 that sits next to the Linux box. MPE code has proven itself to be extremely reliable at tens of thousands of sites over decades of use. And although the total sum of all of the equipment in the upper image came to less than $2000, there is sufficient computing power on the table to run a $50 million/year business easily.
All software contains bugs, and on the last day that HP corrects whatever bugs it finds in MPE, if no emulator and no Open MPE should come to pass, those defects that exist in the code on that day will remain there forever. But in many ways, operating under these conditions is more stable and more predictable than when code is still actively being modified. You rapidly learn where the remaining pitfalls are and you simply work around them.
The real trick to operating obsoleted hardware and an O/S is to buy multiple spare equipment. This equipment is going to become startlingly cheap in the next few years, so keep your eyes open for it. In your free time, configure these spare systems to be identical to your production boxes. In this manner, if your primary systems should fail, you can actually swap out a spare system faster than you can call for assistance and certainly be back on line before the repair people arrive, if you need them. Doing this also allows you to find out what's wrong with the failed system at a leisurely pace and get it back up and running on a schedule that's far more appropriate to the task than one dictated by panic.
As I mentioned at the beginning of this note, I do not believe that staying on the HP 3000 indefinitely to be a particularly risky strategy. If your code and business procedures work well today, they will work just as well tomorrow, a week from today, or twenty years from now. In great contrast, migration may be the riskiest thing you can do.
Over the years, we've had a great many customers move off of the HP 3000 and we've been very interested in hearing about their successes and their failures. The former users who have had their companies bought out by a larger organization have had the greatest success. The larger organization dictates what kind of computer system they're going to use, and in this situation, the HP3000 often loses. Nonetheless, our former customers generally have to do no more than have their terminals changed out and learn the new business rules as they connect to the central server at company headquarters.
In this company-purchase environment, everything has been relatively well smoothed out in advance by the purchasing organization, which has optimized its procedures over a period of years, if not decades. But the same hasn't been true of our customers who have "migrated" off of the HP 3000 onto some other platform of their own volition. Their costs of migration have universally been far higher than anyone originally estimated, as have the times involved. Indeed, the migration efforts were so difficult that a few of our former customers outright failed during the process, and many others were put at high risk. There's no single day in the life of a company when its computer system, no matter how it's architected, can afford not to operate and fulfill its business purpose, and it's this simple necessity that makes migration so extremely difficult.
If your choices boil down to choosing between a "migration" process, which may cost millions of dollars, and which may well put the company at risk, and doing nothing, other than purchasing a number of very inexpensive spares, staying put may well be the least risky thing you could ever do.
June 04, 2015
More open HP shares its source experience
It's not fair to Hewlett-Packard to portray its Discover meeting this week as just another exercise in putting dreams of industry-rocking memristor computing to rest. The company also shared the source code for one of its products with the world, a tool the vendor has used itself in a profitable software product.
HP’s Chief Technology Officer Martin Fink, who also heads up HP Labs, announced the release of Grommet, HP’s own internal-use advanced open source app. The platform will be completely open source, licensed for open use in creating apps' user experience, or UX as it's known in developer circles. Fink said Grommet was HP’s contribution to the IT industry and the open source community.
HP says "Grommet easily and efficiently scales your project with one code base, from phones to desktops, and everything in between." The vendor has been using it to develop its system management software HP OneView for more than three years. The code on GitHub and a style guide help create apps with consumer interfaces, so there's a uniform user experience for internal apps. Application icons like the one on the left are available from an interface template at an HP website.
The gift of HP's software R&D to a community of users is a wide improvement over the strategy in the year that followed an exit announcement from MPE/iX futures. A campaign to win an MPE/iX open source license, like the Creative Commons 4.0 license for Grommet, came to naught within three years of that HP notification. There were some differences, such as the fact that HP still was selling MPE/iX through October of 2003, and it was collecting support money for the environment as well.
The 3000 community wanted to take MPE/iX into open source status, and that's why its advocacy group was named OpenMPE. It took eight more years, but HP did help in a modest way to preserve the maintainability of MPE/iX. The vendor sold source code licenses for $10,000 each to support companies. These were limited licenses, and they remain a vestige of what HP might have done -- a move not only echoed by Grommet, but reflected in HP's plan to move OpenVMS to a third party.
"I guess there is a difference between licensing the MPE code and then distributing it," our prolific commenter Tim O'Neill said last week.
I have heard that HP hangs onto the distribution rights because they are afraid of liability. Surely they do not, at this point, still seek to make money off it, do they? Is there some secret desire within HP to once again market it?
It feels safe to say not a bit of desire exists in HP today, even though Grommet shows the vendor can be generous with more mainstream tech. In at least one case, HP's offer of help with MPE's future was proactive, if not that generous.

Steve Suraci of Pivital Solutions tells a story about that MPE/iX source license. He was called by Alvina Nishimoto of HP in 2009 and asked, "You want to purchase one of these, don't you?" The answer was yes. Nobody knew what good a source code license might do in the after-market. But HP was not likely to make the licensing offer twice, and the companies who got one took on that $10,000 expense as an investment in support operations.
Pining over Grommet or the sweeter disposition of OpenVMS won't change much in the strategy of owning or migrating from MPE/iX. Open source has become a mainstream enterprise IT scheme by 2015, pumped up by the Linux success story. O'Neill said he still believes an open source MPE/iX would be a Linux alternative. He reported he recently discovered the Posix interface in MPE/iX. Posix was supposed to be a way to give MPE the ability to run Unix applications, using 1990 thinking.
The aim for Posix was widely misunderstood. It was essential to an MPE/iX user experience that didn't materialize as HP hoped. But John Burke, our net.digest and Hidden Value editor for many years, noted in the weeks after that exit announcement that HP's training on Posix expressed that desire of bringing the Unix apps to the 3000.
The following is an example from HP training:
"Before we proceed, let's stop to ask a question, just to ensure you've got the fundamental idea. Which of the following statements best summarizes the reason why HP has brought POSIX compliant interfaces to the MPE/iX operating system and the HP3000?
1. POSIX is the first step in HP's plan to move all HP3000 users to UNIX.
2. POSIX is a tool that HP is using to bring new applications to MPE from the UNIX environment.
3. POSIX is a piece of software that HP is using to eventually combine the HP3000 and the HP9000 into a single system.
Choose the best answer, and press the corresponding key: ‘1’, ‘2’ or ‘3’."
June 03, 2015
HP's Machine dream migrates off OS plan
The HP Discover show has wrapped up its second day, an annual event full of sales and engineering staff from the vendor as well as high-line customers. The show included an introduction of the new logo for the Enterprise half of Hewlett-Packard, a spinoff the vendor will cleave from the company in October. It's an empty green rectangle, something that drew some scorn as an icon bereft of content or message.
CEO Meg Whitman said the green represents growth and the rectangle is a window on the future. We can only hope that a logo for a $65 billion corporation that turns out to be a rectangle in green has a good discount attached to the project's invoice.
But another session today, one that can be viewed on Livestream.com, showed a consistent removal of substance from HP's dream factory. The Machine, a project that reportedly was attracting more than half the R&D budget for the full corporation, had its mission scaled back from the platform that promised to lead into computing's future. A computer built around the long-pursued memristor will make a debut sometime next year, but bearing standard DRAM chips instead. Of greatest interest to HP 3000 customers, former and current, is the abandonment of the R&D to create a Machine operating system.
An OS for the Machine would have been HP's first such project since MPE; HP hasn't built an environment from scratch since MPE was introduced in the 1970s. Its Unix began in Bell Labs with System V, NonStop was created at Tandem, and VMS was the brainchild of DEC. webOS came with HP's acquisition of Palm, and HP sold that software to LG for use in its TVs and appliances. HP's head of Labs Martin Fink said that Linux will be the software heartbeat of the Machine going forward. Creating a computer that runs Linux: nothing there to suggest there's new love for software R&D in Fink's labs.

Fink told the New York Times regarding the Machine that "Big will happen first, small will happen later." He was talking about the scale of the Machine, at first a 320TB RAM device before being sized for smartphones and printers.
Big has developed a way of happening later at HP, not first.
The most telling part of this retreat from the big idea: falling back to a customized Linux as the operating environment for the Machine. Operating systems are big projects, and HP isn't going to force software developers to learn something new, even something with a chance to change the world's computing, as the Machine was pitched last year.
In the main hall of the Las Vegas show, thousands of customers and HP employees watched a clip from Avatar as evidence of HP's ambitions during today's talk. When the clip ended, the hall was silent. "You can clap for that, c'mon," Whitman said. She announced three sequels to come, called the movie a franchise, and added that HP Enterprise has a five-year partnership with the creators of Avatar. HP's goal is to create a rich user experience for the science fiction of James Cameron's films.
Science fiction about computer science might be for sale before the first sequel airs in 2017. HP's keynote Discover content is all available on Livestream this year, for the first time. Tomorrow is the final day for the event.
June 02, 2015
Migrating Data Makes First Step Away
Beginning at 2 PM Eastern US time tomorrow (June 3), Birket Foster leads a Successful Data and Application Migrations webinar, complete with a breakdown on the strategy and ample Q&A opportunity.
Registration for the webinar is through the MB Foster website. Like all of the Wednesday Webinars, it runs for about an hour. The outline for the briefing, as summed up by the company:
A successful migration – application and data – has three major sections. We like to start with the end in mind. What does the business want to accomplish through this transformation? In fact, the best way to organize things is to create a dashboard for the “Application Portfolio” and to visualize the current and future fit of IT investments in aligning with the business needs and where the business plan is going.
As an example: if you use fleet management techniques (capital cost, estimated useful life of asset, next review inspection, number of service incidents, etc.) on your IT assets, a map of the portfolio and the value of each application to the business will emerge. A barometer status of green, yellow, or red can be assigned based on a scorecard. A three-year forward projection will show the parts of the portfolio that will need attention over time, and the investment of both capital and labor can be forecast; as a result, budgets and projects can be put in place so there are no surprises.
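The scorecard idea above can be sketched in a few lines of code. The field names and thresholds below are illustrative assumptions, not MB Foster's actual scoring model:

```python
# A minimal sketch of the fleet-management scorecard idea described above.
# Thresholds and inputs are hypothetical, chosen only to illustrate how a
# green/yellow/red barometer status could fall out of a few asset metrics.

def rate_application(age_years, useful_life_years, incidents):
    """Return a green/yellow/red barometer status for one IT asset."""
    score = 0
    # Age relative to the asset's estimated useful life
    if age_years > useful_life_years:
        score += 2
    elif age_years > 0.75 * useful_life_years:
        score += 1
    # Service incidents over the review period
    if incidents > 10:
        score += 2
    elif incidents > 3:
        score += 1
    if score == 0:
        return "green"
    return "yellow" if score <= 2 else "red"
```

Running such a function over every application in the portfolio is what produces the map of assets needing attention that the three-year projection builds on.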
Foster says that in the webinar he will discuss a framework for effectively managing the Application Portfolio, and the best way to help your business move into the process of rating the risk-benefit value of each application for each department's workflow. He will also outline techniques for achieving a successful result and the steps involved in triaging applications: business fit, maintainability, the roles of the team, and desired outcomes.
The company has been in the data migration business since the 1980s. Data Express was its initial product for extracting and controlling data. MB Foster revamped the products after Y2K to create the Universal Data Access (UDA) product line. As an example, UDACentral supports the leading open source databases PostgreSQL and MySQL, plus Eloquence, Oracle, SQL Server, DB2, and TurboIMAGE, as well as less-common databases such as Progress, Ingres, Sybase and Caché. The software can migrate data between any of these databases.
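The core of that kind of cross-database movement can be sketched with a generic table copy. This is a simple DB-API illustration using SQLite on both ends, not UDACentral's interface; the table and column names are hypothetical:

```python
import sqlite3

def copy_table(src, dst, table, columns):
    """Copy one table's rows from a source DB connection to a target.

    A generic sketch of cross-database data movement, using SQLite for
    both ends as an illustration; it is not UDACentral's actual API.
    """
    col_list = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)
    # Create a same-shaped table on the target, then bulk-copy the rows.
    dst.execute(f"CREATE TABLE IF NOT EXISTS {table} ({col_list})")
    rows = src.execute(f"SELECT {col_list} FROM {table}").fetchall()
    dst.executemany(
        f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})", rows)
    dst.commit()
```

In practice a product like UDACentral also has to map type systems between unlike databases (TurboIMAGE item types versus SQL column types, for instance), which is where most of the real work lives.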
June 01, 2015
Older laptops find current use for 3000s
By Brian Edminster
Back in the MPE-III, MPE-IV, and MPE-V days, I often advocated using a printing terminal as a console (e.g. an HP 2635), in order to leave a permanent hardcopy audit trail. A little loud at times, but it made it hard to hide what was going on, and let you flip back through prior 'pages' of history. And unlike PCs, the messages were persistent (that is, they would survive a power failure).
Since then, I've been an advocate for using PCs as a system console workstation -- often ones that would otherwise be ready for retirement.
Actually, I prefer to use laptop PCs, as they're typically smaller and lighter, have a battery in them that can act as a short-term UPS, and many can be configured to allow folding the screen closed while leaving them turned on and active. A laptop saves space, and if the system's been configured to shut off the display and spin down the drives when there's little to no activity, it can save power as well.
Key documentation and other useful info can be kept on the laptop as well, so you don't have to look things up on paper. If the laptop is old enough, either it (or a docking station for it) will have a serial port; otherwise, you can go the USB-to-serial adapter route. Something like this Compaq Armada is quite old, but it does include a serial port.

On the laptop, the freeware QCTerm terminal emulator is a perfect choice for use as a system console, and it has a 15,000-line memory. That beats the heck out of any CRT I've ever seen! You can still get copies of this emulator from the Internet archives. I also plan to make QCTerm available, along with associated documentation, on my website.
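The printing terminal's virtue, a persistent timestamped audit trail, can also be approximated in software on the console laptop. This sketch works with any iterable of text lines; a serial port opened with a library such as pyserial could be read the same way. It is an illustration of the idea, not a feature of QCTerm:

```python
import datetime

def log_console(source, logfile):
    """Append each console line to a persistent log with a timestamp.

    `source` is any iterable of text lines (a pyserial port object, for
    example, could be iterated the same way). An illustrative sketch of
    the audit-trail idea, not QCTerm functionality.
    """
    for line in source:
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        logfile.write(f"{stamp}  {line.rstrip()}\n")
```

Pointed at a log file on the laptop's disk, this gives you the 'flip back through history' behavior of the old hardcopy console without the noise or the paper.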
I've provided a working URL that'll take you right to the old download page for the latest version of QCTerm (v3.1a). Just click here. From that page, you can also search backwards through time to find earlier versions of QCTerm.