How to use Perl on an HP 3000

By Dave Lo

Perl is an interpreted language that is the brainchild of Larry Wall. He continues to develop and guide the language, which, through the help of the net community, is available on virtually every computer platform, from Apple’s Macintosh to MPE.

Perl, officially expanded as either Practical Extraction and Report Language or Pathologically Eclectic Rubbish Lister, is a popular language for implementing web page CGI scripts, processing string data, and even handling system administration. The official Perl web site is www.perl.com.

However, Perl is much more than a sometimes-odd-looking web scripting language. It has enough power to be a full programming language. One glance at some of the O’Reilly Perl books will testify to that (Perl for Bioinformatics, Perl for System Administration, Perl for Website Management).

If you think of Perl as a shell-like programming language that evolved from quick-and-dirty handling of text, lists, associative arrays, and regular expressions, then you’re already thinking Perl. Let’s dive in!

Scalar variables

0 # Perl has no line numbers, they're here for reference only
1 $num = 123;
2 $str = "abc";
3 print "number=$num string=$str";

Line 0 is a single-line Perl comment, which is similar to a Unix shell comment. There are no multi-line comments.

Perl is not a strictly typed language, so a variable can hold either numeric or string values. Also, no declarations of variables are needed before they are used. Lines 1 and 2 assign a number and a string. These types of variables are known as scalar variables (they hold a single primitive value), and their names are prefixed by a $ sign. One characteristic of Perl is that all variable names have a prefix character such as $. This may seem strange, but not when you think of the similarities to Unix shell scripts.

List variables

1 @list = (12.3, "abc", 4..6);
2 foreach $i (@list) {
3 print "$i\n";
4 }
5 $count = @list;
6 print "There are $count items\n";

Another fundamental variable type in Perl is the list. Line 1 shows the assignment of a list variable, whose name is prefixed with an @ sign. A list variable is like a one-dimensional array. But unlike strictly typed languages, a list in Perl can contain mixed types of data. Here, the list contains five values: the number 12.3, the string “abc”, and the numbers 4, 5, and 6.

Lines 2-4 are a typical way of looping through all the items in a list, using the foreach construct. In line 3, notice that literal strings in double quotes can contain variables, which are interpolated.

Line 5 may initially look like an error (assigning a list variable to a scalar variable), but it is a common occurrence in Perl and shows an important concept: context. Think of context as fancy type-conversion. Here, because the left-hand side of the assignment is a scalar variable, the right-hand side must also be scalar. So the list variable is “evaluated in a scalar context”, which does the convenient thing of returning the number of items in the list. You’ll discover that Perl has many of these conveniences built in.

We could also have accessed the list in a traditional array-like fashion by numeric indexing. Like C, Perl starts array indexes at 0.

7 for ($i=0; $i<@list; $i++) {
8 print "$list[$i]\n";
9 }

Note in line 8 that we prefixed the list name with a $. This is because $list[$i] is a single scalar value, so a scalar prefix is needed. Another important point is that scalar and list variable names are in different namespaces. This means you can simultaneously have a $abc scalar variable and an @abc list variable. Use this feature carefully, otherwise you end up writing hard-to-understand code such as $abc = $abc[$abc].

Hash variables

A powerful feature in Perl is the hash (otherwise known as an associative array). Hashes are like arrays that are indexed not by sequential numbers, but by string values. They are a simple way to store key-value pairs.

$review{"Monsters Inc."} = "funny and original";
$review{"Harry Potter"} = "true to the book";
$review{"Lord of the Rings"} = "also true to the book except for Arwen";

Here is an example of parsing a string similar to those commonly returned from an HTML form:

1 $line="option=1&company=Robelle&product=Qedit,Suprtool";
2 @pairs=split(/&/,$line);
3 foreach $item (@pairs) {
4 ($name, $value) = split(/=/,$item);
5 $form{$name} = $value;
6 }
7 @list = keys(%form);
8 foreach $name (@list) {
9 print "$name = $form{$name} \n";
10 }

In Lines 2 and 4, the split function takes a regular expression (although we are using it here just for a simple string search) and a string, finds the substrings that are separated by the regexp, and returns a list of those substrings. So in Line 2, we are looking for the substrings separated by an ampersand (&). Split returns this list of three strings:

option=1
company=Robelle
product=Qedit,Suprtool

In Line 4, the split works in a similar way to break up “option=1” into a list of two elements (“option”, “1”). Notice that Perl allows simultaneous assignment of several variables. The assignment puts the first element in $name and the second element in $value.

In Line 5, the assignment to a hash looks almost like an array assignment, except that curly braces are used instead of square brackets.

In Line 7, the keys function returns a list of all the keys in the hash (“option”, “company”, “product”). Notice that a hash is prefixed by a % sign when used in hash context. If you wanted to get all the values in the hash, you would use the values function, which would return (1, “Robelle”, “Qedit,Suprtool”).

Perl on MPE/iX

Perl for MPE/iX is available for download from the HP Jazz Web site: jazz.external.hp.com/src/hp_freeware/perl/. Perl was ported to the HP 3000 by Mark Bixby. Here are some notes on Perl/iX from the Jazz Web site:

“The following prerequisites apply to Perl on MPE/iX: MPE/iX 6.0 or greater (this software has not been tested on versions earlier than 6.0), and approximately 325,000 sectors of available disk space. Perl has been linked to use the shared libraries /lib/*.sl and /usr/lib/*.sl; if these libraries are missing or have restrictive permissions, then Perl will not run. These libraries are a standard part of MPE FOS starting with 6.0. If for some reason you are missing these libraries, you can recreate them by logging on as MANAGER.SYS and then running the shell script /PERL/PUB/mpebin/LIBS.hp3000.”

Integration With MPE

A few MPE-specific modules are starting to become available for Perl. The following is a partial list; none of these are bundled with this distribution, so if you want to play with them you’ll have to download and build them yourself:

MPE::CIvar — Ken Hirsch’s interface for MPE/iX JCWs, CI variables, and the HPCICOMMAND intrinsic. Please see invent3k.external.hp.com/~MGR.HIRSCH/CIvar.html for more info.

MPE::IMAGE — Ted Ashton’s interface for MPE/iX TurboIMAGE databases. Please see search.cpan.org/search?dist=MPE-IMAGE for more info.

Web Resources for Perl

These include www.perl.org, the Perl user community; www.perl.com, O’Reilly’s official Perl home; and www.cpan.org, the Comprehensive Perl Archive Network, which holds Perl distributions, documentation, and modules. If you have trouble installing packages from CPAN, read Ken Hirsch’s installation tips at invent3k.external.hp.com/~MGR.HIRSCH/cpan.html.

O’Reilly books

• Learning Perl - A good introduction to the language

• Programming Perl - Known as the “camel book”, it is the definitive Perl reference, written by the authors of Perl

• Perl Cookbook - Solutions for common operations. For example, Problem: How to do something to every word in a file?

Solution: Split each line on whitespace:

while (<>) {
    for $chunk (split) {
        # do something with $chunk
    }
}

 


The HP 3000's Earliest History

By Bob Green

With HP announcing its latest sunset for the HP 3000 in 2007, I thought some of you might be feeling nostalgic for some history. The original 16-bit HP 3000 (later called “the Classic”) was released in 1972 and re-engineered into a 32-bit RISC processor in the 1980s.

Background (1964-1969)

The HP 2000 Time-Shared Basic System (1968) was HP’s first big success in computers. The 2000 line was based on the 2116 computer, basically a DEC PDP-8 stretched from 12 to 16 bits. HP inherited the design of the 2116 computer when it acquired Data Systems, Inc. from Union Carbide in 1964. The 2000 supported 16 to 32 time-sharing users, writing or running BASIC programs.

This product was incredibly successful, especially in schools. The original 2000A system was created by two guys working in a corner: Mike Green, who went on to found Tandem much later, and Steve Porter, who also went on to found his own computer company. Heavy sales of the 2000 brought the computer division of HP its first positive cash flow, and with it the urge to “make a contribution.” The engineers and programmers in Cupertino said to themselves, “If we can produce a time-sharing system this good using a junky computer like the 2116, think what we could accomplish if we designed our own computer.”

Abortive First Try (1969-1970)

The project to design a new computer, code-named “Omega,” brought new people into the Cupertino Lab, people who had experience with bigger operating systems on Burroughs and on IBM computers. The Omega team came up with a 32-bit mainframe: It was stack-oriented, had 32-bit instructions, data and I/O paths, eight index registers, up to 4 megabytes of main memory, up to four CPUs sharing the same memory and bus, both code segmentation and data segmentation, and a high-level systems programming language instead of Assembler; it was capable of multiprogramming from the start, and had support for many programming languages (not just BASIC as on the 2000).

The Omega was designed to compete with big CPUs. But Omega looked too risky to management. HP would have had to borrow long-term funds to finance the lease of machines to compete directly with IBM. So it was cancelled. Some of the Omega architects left HP, but most stayed. “Several people who remained took to wearing black-velvet armbands, in mourning for the cancelled project,” according to Dave Packard in his 1995 book, The HP Way.

The 16-Bit Alpha (1970-71)

Most of the Omega team were re-assigned to the Alpha project. This was an existing R&D project to produce a new 16-bit computer design. The Omega engineers and programmers were encouraged to continue with their objectives, but to limit themselves to a 16-bit machine. Alpha was Omega squeezed into 16 bits: 128 KB of main memory (max), one index register, and Huffman coding to support the many address modes desired (P+- for constants, DB+ for global variables, Q- for parameters, Q+ for local variables, and S- for expression evaluation).

Same People, Smaller Hardware, Bigger Software

The original design objectives for the Omega Operating System were limited to multiprogrammed batch. The Omega designers put off time-sharing to a later release that would be supported by a front-end communications processor. The cancellation of Omega gave the software designers another year to think of features that should be included in the Alpha Operating System.

As a result, the software specifications for this much smaller machine were now much more ambitious than those for the bigger Omega. They proposed batch, time-sharing, and real-time processing, all at the same time, all at first release, and all without a front-end processor.

The instruction set of the Alpha was designed by the systems programmers who were going to write the compilers and operating system for the machine. The prevailing “computer science” philosophy of the day was that if the machine architecture was close to the structure of the systems programming languages, it would be easier to produce efficient, reliable software for the machine and you wouldn’t need to use Assembler (that is, a high-level language would be just as efficient and the code would be much easier to maintain).

The Alpha was a radical machine and it generated infectious enthusiasm. It had virtual memory, recursion, SPL instead of Assembler, friendly MPE with consistent batch and online capabilities instead of OS-360 with its obscure command syntax, variable-length segments instead of inflexible pages, and stacks instead of registers. The Alpha was announced as the HP 3000 with a fancy cabinet of pizza-oven doors, available in four colors. Prospective users were assured that it would support 64 users in 128 KB of memory.

Harsh Realities (1972-73): 200 Pounds of Armor on a 90-Pound Knight

I worked at Cupertino at the time and was assigned to coordinate the production of the ERS (External Reference Specifications) for the new software. I was as excited as everyone else. The first inkling I had that the HP 3000 was in trouble came in an MPE design meeting to review the system tables needed in main memory. Each of the ten project members described his part of MPE and his tables: code segment table, data segment table, file control blocks, etc. Some tables were memory-resident and some were swappable. When the total memory-resident requirements were calculated, they totaled more than the 128 KB maximum size of the machine.

MPE wouldn’t fit, so everyone squeezed: The programmers squeezed in 18-hour days, seven days a week trying to get MPE to work. Managers were telling their bosses that there was no problem, they just hadn’t had a chance to “optimize” MPE yet. When they did, the managers maintained, it would all turn out as originally promised. So marketing went on selling the machines to the many existing happy users of the HP 2000. As the scheduled date for the first shipment approached, the Cupertino factory was festooned with banners proclaiming “November Is a Happening.”

The first HP 3000 was shipped November 1, 1972 to the Lawrence Hall of Science in Berkeley, California. But it was incomplete: It had no spooling, no real-time, etc. It supported only two users, and it crashed every 10 to 20 minutes. Customers who had been promised 64 terminals and who were used to the traditional HP reliability became increasingly frustrated and angry.

Eventually the differences between the HP 3000 reality and the HP 3000 fantasy became so large and well-known that there was even a news item in Computerworld about it — the first bad press ever for HP. Bill and Dave were not amused. The product was withdrawn from the market for a short time.

Struggling to Restore Lost Credibility (1973-74)

Hewlett-Packard had no experience with bad publicity from low-quality products. Paul Ely was brought in from the successful Microwave Division to straighten out the computer group. The first priority was to help out the existing HP 3000 users, the ones who had trusted HP and placed early orders. Many of them received free 2000 systems to tide them over until the 3000 was improved. The second priority was to focus the programmers’ energy on fixing the reliability of MPE.

Once the HP managers realized the magnitude of the 3000 disaster, the division was in for lean times. Budgets and staffs that had swollen to handle vast projected sales were cut to the bone. Training, where I worked, was cut from 70 people to fewer than 20 in one day. HP adopted a firm “no futures” policy in answering customer questions (a policy that lasted for years after the HP 3000 trauma, but was forgotten by the time of the Spectrum-RISC project). The new division manager was strictly no nonsense. Many people had gotten in the habit of taking their coffee breaks in the final-assembly area, and kibitzing with the teams testing the new 3000s. Ely banned coffee cups from the factory floor and instituted rigorous management controls over the prima donnas of the computer group.

By continuing to work long weeks, the programmers managed to reduce MPE crashes from 48 a day to two, and to increase users from two to eight. Marketing finally took a look at what the 3000 could actually do, and found a market for it as a replacement for the IBM 1130. They sold the 3000 as a machine with more software capability than an IBM 1130 that could be available to a number of users at once instead of just one. Eventually the 3000 became a stable, useful product. To my mind, this happened when someone discovered the “24-day time bomb” bug. If you kept your HP 3000 running continuously for 24 days (2^31 milliseconds) without a shutdown or a crash, the internal clock register would overflow and the system would suddenly be set back by 25 days!

The Comeback: Fulfilling the Promise (1975-76)

The original 3000 had a minimum usable memory size of 96 KB and a maximum of 128 KB — not much of an expansion. The Series II went beyond that 16-bit limitation by adding “bank” registers for each of the key pointers (that is, code segment, data segment, and so on). Thus the Series II could support up to 512 KB, a much more reasonable configuration for the software power of MPE.

The choice of SPL as the HP 3000 machine language instead of Assembler truly began to pay off now in an avalanche of excellent software: The IMAGE database (again, two guys working in a corner: Jon Bale and Fred White) was soon joined by compilers for COBOL and RPG, a screen handler, and other tools to support transaction processing.

Concurrent, consistent batch and time-sharing was now a reality and the goal of concurrent real-time was finally dropped as unrealistic. The HP 3000 hardware now matched the software written for it. Business users discovered that the 3000 was great for online transaction processing; they dragged Hewlett-Packard firmly into the commercial information processing world.

At last, with the Series 64 in 1982, the 3000 reached the original target of 64 users on a single machine.

P.S. For another interesting history of the HP 3000, read HP’s Early Computers, Part Three: The Strongest Castle: The Rise, Fall and Rise of the HP 3000 by Chris Edler.


Cooking with Python on MPE

Whip up tasty dishes with ample applications using this scripting language

By Curtis Larsen

Something I’m sure I share with other HP 3000 folks is a love for technology that solves problems in some straightforward yet elegant manner. After all, this is why we all continue to use our HP 3000 systems (and probably why we chose them in the first place).

Especially exciting, though, are the new technologies that further extend our systems and solve even more problems. I can remember eagerly reading each new “Communicator” for the enhancements list, and my personal favorites – upgrades to the Command Interpreter. I’d be excited to try out each new function and new “FINFO” parm. (When the JOBINFO function hit the streets I was practically singing.)

As much fun as I have with the CI though, I recognize that every programming language has its limitations. So HP 3000 programmers have used the various languages available, such as COBOL, Business Basic, or VESoft scripting – or a combination of them – to solve larger tasks. Each language brings its own flavor and abilities to the programming table, and when POSIX scripting was added to the ingredients, a wonderful curry resulted. Like a great curry, you can still see, smell and taste each individual ingredient, but they all contribute to a sum greater than the parts.

Some of the new flavors the POSIX world gives us are the Bourne, BASH, C, and Korn shells, as well as public-domain C, FORTRAN, Assembly, and Basic compilers. It also gives us the newer interpreted scripting languages such as Perl, Java, and Python.

Now chances are good you’ve heard about Perl and Java, but how much do you know about Python? Ahhh… let’s start opening some of those spice jars, shall we?

Python is a fast, extensible, object-oriented scripting language created by Guido van Rossum and named after the ’70s British comedy troupe and television show “Monty Python’s Flying Circus”. (It’s too good to make up, folks. If that doesn’t begin to spell “curry” to you, I don’t know what does.)

Python and Perl are very similar in capabilities — what can be done in one can also be done in the other – it’s just that their approaches to those capabilities differ. Perl leans towards terse syntax and many small “add-in” modules, while Python (if not verbose) leans more towards self-describing syntax and fewer, larger “add-in” modules. Both can be written in Ye Olde Top-Downe style, or by using objects and methods in the newer Object-Oriented style of programming — pick the sauce that suits you. Both languages have been ported to almost every OS imaginable (and a few that weren’t). Python also uses OOP syntax similar to Java, with its “Object<dot>Method” notation. This similarity to Java comes in very handy, but more on that later.

One quirk that Python people have to get used to though is the indenting. The beginning of a text line is significant in Python, and indicates dependence. Now, now – I heard that agonized cry, like you had just eaten too much pepper. Trust me on this one – like a pepper you may find it hard to swallow at first (especially in one large bite), but if you come to it slowly, you’ll learn to appreciate the subtleties and grow just as addicted to it. And let’s be reasonable – if you can start a significant line of COBOL at a certain column for twenty-odd years, you can handle a little reasonable indenting.
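
For a small taste, here is a made-up sketch of my own (not from the Python documentation) showing how indentation marks the blocks: the lines indented under the for and if statements are their bodies, and the first line back at the left margin falls outside the loop.

for ldev in [20, 21, 42]:               # loop over some made-up LDEV numbers
    if ldev > 40:                       # indented once: inside the loop
        print("high ldev: %d" % ldev)   # indented twice: inside the if
    else:
        print("normal ldev: %d" % ldev)
print("done")                           # back at column one: after the loop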

Rather than explain Python’s syntax in detail though, I’ll point you to the Python web site where you can really chew on it. The main plate is at www.python.org, and many other links can be found from there for general consumption.

Ok, so let’s say you buy into it for now – you’re at the MPE table checking out the various Python dishes for appearance and aroma before biting into one. A wise course of action, but while some of the offerings can be a bit exotic, I’ve found nothing unattractive. You can find an incredible list of Python applications at “The Vaults of Parnassus” (www.vex.net/parnassus) — here are a few that might be of interest to you in combination with an HP 3000:

• “ZOPE”: An extremely powerful Web-application creation utility — and then some!

• “Gadfly”: A powerful Python-based SQL engine – see its Web page for details.

• “Peercat”: Web-based aggregate data collector. (Usually news items, but…)

• “Scanerrlog”: A script to meaningfully parse Apache’s “error_log” file to HTML, etc.

• “txt2html.py”: Converts various ASCII formats to HTML.

• “inpim.py”: Calendar/To-Do app w/cross-system sync. Budgeting subsystem, too.

• “PyBook”: Searches, downloads, and displays Project Gutenberg books.

• “Roundup”: Issue tracking system with web, CLI, and e-mail interfaces.

• “RoutePlanner”: A highway trip planner.

• “Pyrite”: Allows Palm Pilot synchronization.

• “PyPsion”: Similar application for Psion PDAs.

• “web2ldap.py”: Full-featured web-based LDAPv2+ client.

• “flad”: Create, read, and write INI-like config files.

• “ODBC Socket Server”: Access Windows ODBC sources via XML TCP/IP interface.

• “Python-DSV”: Parses CSV, TSV, etc. files, guesses string encoding, etc.

And more CGI scripts than you can shake a kebob stick at.

This list just scratches the surface, since there are far more HP-useful Python applications than can be shown here, and due to its ease of use and power, more people are sitting down to the “Python picnic” each day. There are many Python applications that could have a strong impact on your regular MPE recipes, and more are continually arriving. Look over the list above – ever want to directly use your HP data with a Palm Pilot? Send and receive data from an ODBC source directly from a batch job? Map the shortest routes to clients? Check on the weather before shipping a product there? Many more possibilities exist!

Python includes very good support for XML (including XML-RPC and SOAP), LDAP, and offers an interesting native object persistence (or “object pickling”) to a file, so that you can save an object and its properties, freezing it for cooking another day.
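
As a quick, hypothetical sketch of that pickling (the file name and review data here are invented, but pickle.dump and pickle.load are the standard library calls):

import pickle

reviews = {"Monsters Inc.": "funny and original"}   # invented data

f = open("reviews.pkl", "wb")        # freeze the object to a file...
pickle.dump(reviews, f)
f.close()

f = open("reviews.pkl", "rb")        # ...and thaw it out another day
saved = pickle.load(f)
f.close()
print(saved["Monsters Inc."])        # prints: funny and original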

Although the “original” Python was written in C, a newer version of Python named “Jython” has been written using Java. In addition to running native Python scripts, this version also permits use of the native Java objects and features from within Python. Looking (and reading) very similar to Java, you can actually code something very quickly using Python for later export to Java, etc., treating Python as a sort of RAD language. So whether you think of Python as a “C food” entree, or as a side dish to chew on with your morning “cup of Java,” you’ll find it both digestible and flavorful. You can use a little or a lot, spread it thinly or plaster it on thickly – whatever suits your taste and mood.

I’ve only used the C-based Python on my HP system, but you could give Jython a whirl. The C version is a bit dated but runs quickly, while the Java version is very current, but starts a tad slower. It just depends on what you want to use Python for, and whether or not you’ll need some of the newer functions and modules.

Probably the thing that really excites me personally about Python is its “dictionaries,” or what Perl calls “hashes.” Folks who use these types of variables almost always get hooked on them. For example, you can create a dictionary named “D”, and store a value in it using a string key named “info” which might look like this: D[“info”] = “555-1212”. Now whenever you say ‘print D[“info”]’ you’ll get “555-1212”. It’s like using an array, but without the limitations of an array — you don’t need to preset a dictionary, worry about internal structure, use only numbers, or know how many “elements” it will have. You can even dynamically store a dictionary within another dictionary (and so on) to get some incredible depths of sophistication. Suddenly a “simple scripting language” allows fast use of tables and three-dimensional databases. (And it pulls up those values just as quickly!)
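
To make that concrete, here is a small, made-up sketch of the D example above, plus a dictionary nested inside another one (all the names and values are invented):

D = {}                                    # an empty dictionary, no sizing needed
D["info"] = "555-1212"                    # store a value under a string key
print(D["info"])                          # prints: 555-1212

company = {}                              # nest one dictionary inside another
company["Robelle"] = {"product": "Qedit,Suprtool", "phone": D["info"]}
print(company["Robelle"]["product"])      # prints: Qedit,Suprtool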

For an HP 3000-specific example of using Python dictionaries, I combined a VESoft Security/3000 report with a quick Python hack – er – script to show me the last date any terminal in a given date range was used, then print those results sorted by terminal. Sure, I could have coded it all some other way, but using the report as input to a filtering script was simplest for me, and gave me the results I wanted very quickly. Figure 1 below shows the script (be gentle).

Figure 1

import fileinput, string                  # These modules provide some
                                          # nice extra methods for file
                                          # and string handling, etc.

x = " "                                   # Inits: Not needed, but. . .
hist = {}
totals = {}

for line in fileinput.input():            # Loop thru each report file
    r = string.strip(line)                # line (given as the parm)
    if r[0:5] == "Logon":                 # Detail record? Process it.
        rdate = "0" + string.strip(r[7:14])    # Load our work variables
        rdate = rdate[-7:]
        rdev = int(string.strip(r[23:27]))
        rlogon = string.strip(r[27:62])
        rname = string.strip(r[63:94])
        hist[rdev] = (rdate, rlogon, rname)    # Load the 'history' dictionary
                                               # using the LDEV # as the key
        if rdate <> x:                    # Print each new date as we go
            x = rdate
            print rdate
        # Initialize the 'totals' dictionary logon count at first use
        if not totals.has_key(rdev): totals[rdev] = 0
        totals[rdev] = totals[rdev] + 1   # Add one to the logon count

keylist = hist.keys()                     # 'keylist' contains seen LDEVs
keylist.sort()                            # BAM! Now it's a sorted list

for ldev in keylist:                      # for each LDEV we saw. . .
    rdate, rlogon, rname = hist[ldev]     # load some work vars, and:
    print "%04u %5u %7s %s" % (ldev, totals[ldev], rdate, rlogon)

This script assumes that a chronologically-ordered list of LDEV logons will show the most recent logon for every terminal (with any luck, hey?), and records every logon for each LDEV. Each successive assignment overwrites the previous value used by that LDEV key, so the last logon for each LDEV key is the last value kept. Additionally, the script counts each time that LDEV key was seen, so that you have an idea as to how often the terminal is used. This little script could easily be adapted to record other information, like the first logon, or whatever else you might need.

I hope this helps whet your appetite for the Python language, and that you will give it a try – again, there’s far more to the language than can be described here. Many thanks to Joseph Koshy at HP Bangalore for bringing Python to MPE/iX, and to the HP3000-L list for assistance. You can see what Koshy is doing in his spare time to port Python v2.x by visiting the Python/iX web site on Sourceforge: pythonix.sourceforge.net. For a copy of Jython, head on over to www.jython.org.

Go ahead — join the Python/iX groundswell now, and get your piece of the Python.

Curtis Larsen has been working with HP 3000s for 11 years, and believes that, given enough time, any application can be written using the CI. He currently works for Covance Laboratories, in Madison, Wis.


The Spectrum Project, Part II

By Bob Green

Last month I presented the first half of our history of the PA-RISC 3000 development, using excerpts from our old customer newsletters, supplemented with new comments (my comments are shown below prefaced by “In Retrospect”). By 1986 we had reached the point where Robelle was allowed to experiment with a prototype MPE system at the migration center, and we were aghast at how slow and unreliable it was. And since the Unix versions of Spectrum seemed to be humming along nicely, the problem seemed to be software, not hardware.

September 11, 1987 Newsletter:

First Spectrum Shipments: Rumor has it that HP shipped the first four 930 machines on Friday, August 21st, with more to follow every week thereafter. As of press time, we have been unable to find out whether ordinary mortals are allowed to touch these machines (as opposed to those who have signed non-disclosure agreements).

In Retrospect: Due to the NDA, over a year passed with no Spectrum news in our newsletter. The project was now 18 months past the original promised delivery date, but was still far from completion. Many people wrote articles about the Spectrum, mostly based on marketing hype from HP, but no one broke the embargo on real information. We were all terrified. The MPE group had dug themselves into a very deep hole, and no one wanted to be the one who caught the eventual backlash.

October 19, 1987 Newsletter: The Spectrum Song

Orly Larson and his database singers performed again at the Interex show, including their hit, “The Spectrum Song:”

If it takes forever, we will wait for you
For a thousand summers, we will wait for you
‘Til you’re truly ready, ‘til we’re using you
‘Til we see you here, out in our shops!

From the HP Management Roundtable: Schedule for Shipping Spectrums — “We are shipping equally around the world. Our first shipments went to both North America and Europe. We are acknowledging availability for 930s and 950s through December at this time … We expect by the end of November to be able to have acknowledged the entire backlog.”

In Retrospect: HP continued to spin the “shipments” of Spectrums, without mentioning that these were not finished products. The models were 930 and 950 and the operating system was called MPE/XL, changed in later years to MPE/iX when POSIX was integrated into MPE. By this time, HP was worried about their stock price also and did not want any negative news in the financial press, no matter how accurate. As shown by the next Q&A at the roundtable…

Early 930/950 “Shipments”

Question: “Are the 930s and 950s being shipped or not? In public you tell me they are shipped. In private, however, I hear from both users and HP that these machines are still covered by non-disclosure agreements and that access to these machines is very restricted, even when in customer sites. What is the story?”

Answer: “MPE/XL architecture is very, very new. There’s a million new lines [of code] that go into MPE/XL, and a lot of software sub-systems as well. And so we are being extremely cautious in how we proceed at this point. We’re going through what we call a slow ramp-up through the remainder of this year and going into large shipments in 1988. The reason for that is that we want to fully test out the system capability in a large number of customer environments and we want to make sure that the information on what’s going on in there and the people involved are protected from outside folks who either benevolently or not benevolently would like to find out what’s going on.

I’m sure we’re going to run into some problems along the way that haven’t been encountered in our earlier phases of testing. We haven’t really hit these machines with full production pressure yet. We know from experience that when you do that, you uncover things that you could never uncover in testing, even though extremely rigorous. [Rumor has it that the customers receiving Spectrums now are not allowed to put them into production until 1988.]”

In Retrospect: Early Spectrum customers called us to ask which version of Suprtool and Qedit they needed for their new systems, and whether there were any problems that they should be aware of. But legally, we could not even admit that we knew of the existence of the new servers. So we came up with the following wording: “If you had a new 3000, and we are not admitting that we know anything about a new 3000, you should be using Suprtool version 3.0 and Qedit version 3.6. On this hypothetical system, it might not be a good idea to hit Control-Y while copying a file from any other HP 3000. We can’t tell you what will happen, but you won’t like it.”

February 12, 1988 Newsletter

Spectrum Finally Leaves the Nest: Hewlett-Packard has officially released the 930 and 950 Spectrum computers, finally abandoning the protection of non-disclosure agreements. We have heard from several sources that the 930 and 950 attained Manufacturing Release during the month of January. This means that people who received “Control Shipment” Spectrums can now put them into production and let outsiders use them. You no longer need to sign any restrictive agreements to get a 930/950. It also means that we users can now compare notes on what the MPE/XL systems are good for.

Interestingly, we didn’t hear about the Manufacturing Release (MR) of the Spectrum from Hewlett-Packard itself. As far as we can determine, HP kept this event very quiet — no press conferences or splashes of publicity. Even some HP people in Cupertino were not aware that MR had occurred. Just because the 930 and 950 are released does not automatically guarantee that you can get one. Given the huge backlog of orders that HP has, it will make “controlled shipments” for a while, picking sites whose expectations match the state of the machines.

In Retrospect: Users had been following Spectrum for almost four years and you could see that we were eager for the release of the product. The MPE lab had grown to hundreds of engineers and technicians and hundreds of Spectrum servers. The amount of money being plowed into the project was awesome. Anyone with any kind of skills was being hired as a consultant, in an attempt to get the situation under control and begin shipping revenue-generating servers. But we were premature in our February proclamation of Manufacturing Release, an HP corporate milestone that requires signed confirmation that the product passes the performance tests set for it in the design specifications.

March 31, 1988 Newsletter

Spectrum Is Released but Not Released: In our last news memo, we reported that MPE/XL users were now removed from non-disclosure restrictions and allowed to put their Spectrum machines into production. In the last month, that news has been confirmed by many sources.

We also concluded, and reported, that MPE/XL had obtained MR (Manufacturing Release). That is untrue. MPE/XL has apparently obtained SR (System Release), but not MR. “System Release” seems to be a new category of release, created just for MPE/XL. We have heard from some new 950 customers who did not need to sign a non-disclosure agreement. However, one customer reported that before HP would allow him to order, he had to sign a document stating that he had no specific performance expectations. On the other hand, we heard from a school that recently went live with 35 student sessions and had great response times (“the machine is coasting along at 10 percent CPU utilization”).

In Retrospect: In order to stem the rising tide of bad expectations, HP released the MPE systems even though they could not pass the testing department. And the performance was still poor in many cases, less than the non-RISC 3000s being replaced, although excellent in a few other cases.

Non-disclosure restrictions are not lifted for everyone. Sites that are beta-testing subsystems which were not released with the initial MPE/XL system are still restricted. Also, third-party FastStart companies such as ourselves are still restricted from passing on any performance or reliability information that we obtain from HP. We face no restrictions regarding performance information received from our customers, so please call with your experiences.

Non-disclosure continues – HP is picking their initial customers carefully and coaching them to only pass on the good news about their new systems. We are still frustrated to not be able to pass on our ideas about how users can improve the performance of the Spectrum.

October 12, 1988 Newsletter

Gary Porto at Childcraft reports that with MPE/XL 1.1 the problem of a serial task in a batch job hogging the system is not so bad as it was with 1.0. This problem can occur with SUPRTOOL, QUERY, or any long serial task. The batch job still hogs the system, but at least people can get a minimum of work done. With 1.0, they couldn’t even get a colon! Gary reports that he has 65 on-line users on his 64-megabyte Series 950 and that the performance is pretty good — as good as his Series 70.

In Retrospect: On the four-year anniversary of the project, HP released version 1.1 of MPE/XL, which made the systems much more useful, but still not up to the performance originally promised in 1984. However, the promise of the “Precision Architecture” (HPPA) was there, as certain tasks were amazingly fast.

By this time, HP salesmen were getting irritated with us for not giving our customers any kind of endorsement for the switch to the 930/950. But our NDA was not cancelled until Manufacturing Release. Finally, the sales force convinced HP Cupertino to send us a signed release from our NDA. I don’t know when MR eventually happened.

From the UK’s HP World magazine: Early MPE/XL Migration Results. London Business School is not a typical installation. Much of their software is written using double precision floating point Fortran which benefits considerably from the Precision Architecture. MIS Director Gordon Miller says “Our straight line performance is up considerably — one program runs 40 times faster — but the performance gains are very application dependent and cannot be accurately forecast beforehand.”

Keith Howard of Collier-Jackson in Tampa, Florida participated in the Spectrum beta testing and upgraded from a Series 58 to a Series 950 — quite a leap. One application was found to be 6% slower due to constant switching between compatibility and native modes, but in most circumstances the machine was five to ten times faster than the Series 58, and one batch job ran 53 times faster!

Glaxo Export has temporarily deferred delivery on its second and third 950 systems due to implementation problems on the initial machine.

HP promises performance improvement for Precision Architecture over the next five years of 40-50 percent per year. Some of this will be achieved by further tuning of MPE/XL — version 1.1 is said to be at least 20 percent faster overall.

In Retrospect: As with the original 3000 project, the birth of the Spectrum was traumatic, expensive and embarrassing. But it paid off. HP was able to roll out better servers for the 3000 line on a regular basis for the next 10 years.

Despite the numerous expansions and revisions to the HP 3000 hardware and software, upgrades have been painless. Even the conversion to the PA-RISC design was backward-compatible and reasonably painless (if you ignore the slipped schedules). Often the user just rolled the new system in on a Sunday, plugged it into the power, reloaded the files, and resumed production. The original 1974 MPE Pocket Reference Card is still useful; everything on it works under MPE/iX version 7.5, except the Cold Load instructions. I have programs that I wrote in 1972, for which the source code was lost years ago, and they still run in Compatibility Mode.

When asked for a eulogy for the 3000, my reply was, “A great IT platform: reliable, affordable, flexible, easy to operate, and easy to program. And every release compatible with the previous for over 30 years. Perhaps some future OS team will adopt these same goals.”


The Spectrum Project, Part I

By Bob Green

Commemorating the Oct 31, 2003 “wake” for the HP 3000, we at Robelle are devoting our NewsWire column to some history. Our story of the original 16-bit HP 3000 (1972-1976) is told on our Web site.

After initial development, the HP 3000 grew and prospered. From 1974 to 1984, HP continued to produce more powerful 3000 hardware running more capable software. Each new model was compatible with the previous version and a joy to install.

But the pressure was on to switch to a 32-bit architecture, as other manufacturers were doing. So HP announced a radical change: new 32-bit hardware for the 3000. The project was code-named Spectrum. As a 3000 consumer and a 3000 vendor, Robelle was both excited and concerned about the prospect of a new hardware architecture. Certainly it would be wonderful to have more powerful processors, but what about the disruption to our steady, incremental, low-risk progress?

The first notice we took of the Spectrum appeared in our December 1984 customer newsletter, with continuing news to follow for the next four years (my retrospective comments are included as “In Retrospect”).

December 12, 1984

The first Spectrum machine will be an upgrade for the Series 68. Other size models will follow soon after, since HP is working on different Spectrum CPUs in three divisions at once (in the past, all 3000 CPUs came out of one division). This first Spectrum can be expected in the first half of 1986.

In Retrospect: Please make a note of that 1986 promised delivery date, and remember that HP faced serious competition from DEC and others. Customers who loved the 3000, but had outgrown the power of the system, were demanding more capable models.

Spectrum is based on the RISC concept, modified by HP Labs. RISC stands for Reduced Instruction Set Computing. Such a computer has no micro code, only a small number of key instructions implemented in very fast logic. The original Berkeley RISC machine had only 16 instructions. Spectrum has more than 16, but not many more. HP selected the instructions for the fast base set by studying typical application mixes on the existing HP machines. Other functions will be done via subroutines or co-processors (e.g., a floating-point processor, an array processor, or a database processor).

In Retrospect: The actual number of instructions in the Spectrum turned out to be about 130, not 16, but they were all simple enough to run in a single clock cycle. HP was the first computer company to go with the RISC philosophy and the only major one to risk the firm by converting all their computer models, both technical and commercial, to a single RISC design.

June 11, 1985

HP’s new Spectrum machine will have both Native-Mode software and 3000 software. The first Spectrum machine to be released will have 3-10 times more computing power than a 68, about 8-10 MIPS in Native Mode. Programs copied straight across will run about twice as fast as on a 68, and those that can be recompiled in Native Mode should run 6-8 times faster. Much of MPE, including the disk portion of the file system, has been recoded in Native Mode. Since most programs spend most of their time within MPE, even programs running in emulation mode should show good performance (unless they are compute-bound).

In Retrospect: The expectations were building in our minds: these machines would be much faster than our current models!

Spectrum will use much of the new operating system software that had been written for Vision, which saves a great deal of development time. Spectrum will use 32-bit data paths and will have a 64-bit address space. Forty Spectrum machines have been built and delivered for internal programming, but product announcement is not likely before 1986.

In Retrospect: Vision was an alternative 32-bit computer project at HP, using traditional technology, which was cancelled to make way for the RISC design from HP Labs. Invoking Vision reassured us that the project was possible, that progress was being made. It was now six months after the first announcement of the project.

August 16, 1985

According to an HP Roundtable reported in the MARUG newsletter, “Most of what is printed about Spectrum is not to be trusted. Spectrum will be introduced at the end of 1985 and delivered in Spring 1986. There are 40-50 prototypes running in the lab and the project team consists of 700-800 engineers. HP will co-introduce a commercial version and a technical version with the commercial version fine-tuned to handle many interactive users, transaction processing, IMAGE access, and the technical version will be structured for computational programs, engineering applications, and factory automation. HP will eventually offer a choice of MPE and Unix. Most software will be available on Spectrum at introduction time and over time all software will be available.”

In Retrospect: HP tried to dispel rumors, but still predicted 1986 for delivery. HP would produce two Spectrum lines: the Unix line for technical users and the MPE line for commercial users, using the exact same hardware.

“The following describes what will be required to convert – Least: restore files and IMAGE databases as they are and run. Next: recompile programs in native mode. Next: change over to new IMAGE database system. Next: change source code to take advantage of RISC.” Robelle Prediction: Spring 1986 for a Spectrum that will reliably run existing MPE applications is not an attainable release date.

In Retrospect: The new relational HPIMAGE database mentioned here was cancelled much later in the project, after a brief encounter with end-users. I don’t remember much about HPIMAGE, except that a lot of work went into it and it didn’t succeed as hoped. TurboIMAGE ended up as the database of choice on the Spectrum. Without any inside information, but based just on past experience and common sense, Robelle tried to inject some caution about the 1986 release date. During the original traumatic HP 3000 project, Dave Packard “sent a memo to the HP 3000 team,” according to Chris Edler. “It was only two lines long and said, essentially, that they would never again announce a product that did not then currently meet specifications.” The division listened for over 10 years, but eventually, people forget….

September 20, 1985

From a Spring 1985 UK conference: Most existing peripherals will be supported and it will be possible to use networking software to link existing model HP 3000s to Spectrum, with the exception of Series II/III and 30/33. These would need a Series 37 or other current range machine to act as a gateway to Spectrum.

From an HP press release: “100 prototype models were already being used internally for system development as of April 1985.”

HPE, the new operating system for the commercial Spectrum, is a superset of MPE. It will have two modes of operation: Execute mode (HP 3000) and Native Mode. The switch between the two will be made on a procedure call, but there will be some programming work needed to translate parameters when switching.

In Retrospect: Execute mode was eventually called Compatibility Mode, and switching between modes turned out to be a major CPU bottleneck in the new system, albeit one that would be removed over time.

The Spectrum is rumored at this time to provide 32 general-purpose registers to the user program and a virtual data space of 2 billion bytes.

December 30, 1985

From Gerry Wade of HP: The name of the Spectrum machine, when it comes out, will not be Spectrum. Another company already has that name. Spectrum will use the IEEE standard for floating-point arithmetic and will also support the HP 3000 floating point. Each data file will have a flag attached to it that tells which type of floating-point data it contains (the formats are not the same).

In Retrospect: The file flag idea never happened, although the TurboIMAGE database did introduce a new data type to distinguish IEEE floating point. Information on implementation details is starting to flow, which helps us believe that the project is on schedule and likely to deliver the more powerful servers we desire.

June 16, 1986

In reporting on Joel Birnbaum’s Spectrum presentation, the HP Chronicle had these observations: “Comparisons with Amdahl and DEC mainframes in slides showed areas where the Spectrum computers topped the larger machines’ benchmarks. ‘Even with un-tuned operating systems software, it’s significantly superior to the VAX 8600,’ Birnbaum said.”

In Retrospect: Joel was the HP Labs leader who was the sparkplug of the RISC project, building on research that he had done previously at IBM. Looking back, we can see that Joel was talking about the performance and delivery of the UNIX Spectrum, not the MPE version, but customers took this as a promise of vast performance improvements in the very near future. It was now past Spring 1986 and the promised new 3000 machines were nowhere in sight. In fact, HP had not yet announced the new models and pricing. This was the first slippage in the project, barely noticed at the time.

July 20, 1986

Many people have been asking, “What is Robelle doing about Spectrum?” HP has invited us to join its Fast Start program for third parties and we have agreed. This program gives us pre-release access to Spectrum information and actual systems. We have visited Cupertino and run our software on the new machines. We are confident that all of our products will operate properly at the time that Spectrum is officially released.

In Retrospect: Since Suprtool and Qedit were essential to the large 3000 customers that HP was targeting, HP asked Robelle to start porting and testing our products on the new systems. But to do that, we had to sign a Non-Disclosure Agreement, the most draconian one we had ever seen. We used careful wording in our announcement above. From this date on, until years later, we could not tell our customers anything useful about the new machines. HP was especially sensitive about their reliability and performance.

When we arrived in Cupertino to do our first testing, we found the prototype Spectrum systems crashing every few minutes and running slower than our tiny Series 37. We were appalled. Nothing in HP’s public statements had prepared us for the state of the project. I had personally gone through a similar situation with the original 3000 in 1972-74, and I wondered if upper management at HP knew how terrible things were. I thought about talking to them, but our NDA also prohibited us from talking to anyone at HP.

The Unix versions of Spectrum, on the other hand, seemed to be humming along nicely, showing that it was not a hardware problem.


Your Guide to Image Logging

By Bob Green

The system is down – the hard drive is toast – and you may have to restore your IMAGE database from yesterday’s backup. In the past, this is the scenario that typically got HP 3000 system managers interested in the transaction-logging feature of the TurboIMAGE database.

But now, as a result of the Sarbanes-Oxley law (SOX), IMAGE Logging is also being used to create audits for data changes. Managers who have never used transaction logging before are now enabling it to create an evidence trail for their SOX auditors.

Here is an example from Judy Zilka, posting to the 3000-L newsgroup:

“As a requirement of Sarbanes-Oxley we are in need of an HP 3000 MPE system program that will automatically log changes to IMAGE data sets, KSAM and MPE files with a user ID and time/date stamp. We often use QUERY to change values when a processing error occurs and the user is unable to correct the problem on their own. The external auditors want a log file to be able to print who is changing what and when.”

George Willis and Art Bahrs suggested IMAGE Transaction Logging:

Judy, we have enabled Transaction Logging for our TurboIMAGE databases, coupled with a reporting tool known as DBAUDIT, offered by Bradmark. For your other files, consider enabling System Level logging events #105 and #160. The LISTLOG utility that comes with the system can extract these records and provide you with detail or summary level reporting.

Hi George & Judy:

Yep, Transaction Logging will meet the Sarbanes-Oxley and HIPAA requirements relating to tracking who is “touching” data.

Also, remember you must have a corporate policy relating to this tracking and either a SOP or a formal procedure for reviewing the logs. The SOP or procedure needs to address what constitutes normal and abnormal activity with regards to reviewing the logs and what action to take when abnormal activity is noted.

— Art “Putting on the InfoSec Hat” Bahrs

P.S. The fines for not being able to show who did what and who has access to what can be very, very eye-opening! Of course these comments only apply to the US and businesses linked into the US.

So what is IMAGE logging?

First of all, it is not the same as “system logging” or system “logfiles.” These record MPE system activities such as logon and file open, and have their own set of commands to control them. You can see in George’s answer above that he suggests system logging to track KSAM and file changes.

IMAGE logging is a variety of “user logging” and is a part of the TurboIMAGE database application. Once enabled, it writes a log record for each change to a database. There are three programs that can be used to report on those database log records:

LOGLIST (a contributed program written by Dennis Heidner; I am not certain what the current status of this program is).

DBAUDIT (a product of Bradmark; in the spirit of SOX disclosure, I must admit that I wrote this program and it was a Robelle product before we sold it to Bradmark!)

AuditTool 3000, from Summit Solutions (www.sumsystems.com), created for ERP system logging and expanded to work with any 3000 application.

Setting Up IMAGE Logging

A number of MPE commands are used to manage IMAGE logging; see the MPE manual at docs.hp.com/en/32650-90877/index.html:

:altacct green; cap=lg,am,al,gl,nd,sf,ia,ba
:comment altacct/altuser add the needed LG capability
:altuser mgr.green; cap=lg,am,al,gl,nd,sf,ia,ba
:build testlog; disc=999999; code=log
:getlog SOX; log=testlog,disc; password=bob
:comment Getlog creates a new logid
:run dbutil.pub.sys
>>set dbname logid=SOX
>>enable dbname for logging
>>exit
:log SOX, start
:log SOX, stop

You can use the same Logid for several databases. For a more detailed description, see Chapter 7 of the TurboIMAGE manual, under the topic “Logging Preparation.”

IMAGE Logging Gotchas

Although the basics of user logging are pretty straightforward, there are still plenty of small gotchas. For example, Tracy Johnson asks about backups on 3000-L:

“If when backing up IMAGE Databases that have logging turned on and you’re not using PARTIALDB, shouldn’t the log file get stored also if you store the root file? This question also applies to third-party products that have a DBSTORE option.”

He continued, “One problem I’ve been having is that since a log file’s modify date doesn’t change until it is stopped, restarted, or switched over, one might as well abort any current users anyway, so any log files will get picked up on a “Partial” backup, because DBSTORE and “online” (working together) features won’t do the trick. Because even though a root file’s modify date gets picked up on a Partial backup, the associated log file’s isn’t.”

Then Bruce Hobbs pointed out that there is the Changelog command to close the current logfile before backup (which ensures that its mod-date is current and that it will be included on the backup) and start a new logfile.

Later, Tracy ran into another interesting gotcha regarding logging and the CSLT tape:

“If you use IMAGE logging, always make your CSLT the same day you need to use it! (Or make sure no CHANGELOG occurred since the CSLT was made. Thanks be to SOX...which forced IMAGE logging.)

“We added so many log file identifiers for each of our production databases that it reached the ULog limit in SYSGEN of 64 logging identifiers. So, per recommendations of this listserv (and elsewhere), I had to update the tables in SYSGEN and do a CONFIG UPDATE this weekend to bring it to the maximum HP ULog limit of 128. Not a problem. Stop the logging identifiers with “LOG logid,STOP”. Shut down the system and BOOT ALT from tape. The system came up just fine — UNTIL it was time to restart logging! Every logging identifier reappeared with old log file numbers a few days old. (We do a CHANGELOG every night and move the old log file to a different group.) I scratched my head on this one for half of Sunday.

<Epiphany Begin> Then it occurred to me: the log file numbers the system wanted were from the day the CSLT was created. I had made it before the weekend, thinking it would save me some time before the shutdown! </Epiphany End>

Therefore:

a. Logging Identifiers retain the copy number on the CSLT tape in the case of an UPDATE or UPDATE CONFIG.

b. Logging Identifiers on the system retain the NEXT log file they need to CHANGELOG to.

So if you need to use a CSLT to load and you’re using IMAGE logging, remember to use it just after you create it, or make sure no CHANGELOGs occurred since it was made.

This may affect some sites, as they may believe their CPU is a static configuration and only do a CSLT once a month or once a week. In the case of an emergency tape load, to save some heartache rebuilding IMAGE log files, they may need to do a CSLT every day.


Making Your Legacy Foundation Open

By Scott Hirsh

After having been immersed in networked storage, I’ve had a lot of time to think about infrastructure architecture. The first lesson is that storage (along with networking) is the foundation of an IT architecture. So it stands to reason that an infrastructure that’s built to last will begin with external storage, ideally from a company dedicated to storage with as much heterogeneous operating system support as possible.

What you get from an external storage platform that supports multiple operating systems is the ability to change vendors, hosts, and operating systems with a minimum of fuss. Yes, a migration of any kind is not without its pain, but it’s a lot more painful when all hardware and software is tied to one vendor. That’s a lesson everyone reading this should have learned by now.

Furthermore, these independent storage players have extensive expertise in supporting multiple platforms, including migrating customers from one to another. And frankly, unless you’re contemplating a non-mainstream operating system, networked storage is an excellent investment because it is a best practice for storage vendors to provide ongoing support for any operating system with critical mass.

For example, any HP 3000 users running on Symmetrix will have no problem using that same storage on HP-UX, Solaris, Linux, Wintel, and many others. If you’re running on internal disk, you’re stuck with HP-UX, best case — not that there’s anything wrong with that.

The Best Offense is a Good Defense

Here are some quick guidelines that recap the concept of minimizing transfer costs:

Start with a networked storage platform that supports as many operating systems as possible. This is the foundation layer for your IT infrastructure.

The best Total Cost of Ownership in IT is based on consolidation. However, that doesn’t necessarily imply homogeneity. It’s a matter of degree. It’s a matter of physical location as well.

Software drives hardware. Choose DBMSs, applications, and tools based on their support for multiple operating systems and hardware. Be cautious regarding any decision that locks you into one vendor. For example, SQL Server-based solutions, which only run on Wintel, will have higher transfer costs than Oracle-based solutions.

Keep your vendors honest, but at the same time don’t underestimate the value of a true partnership. One company I consulted for dropped HP after learning that HP felt they “owned” them. Any time one side thinks they have the other over a barrel, there’s bound to be trouble. We’re all in this together.

The Glue That Holds It All Together – You

In the new, defensive, minimum-transfer-cost environment, IT departments take on the role of systems integrator. That’s the catch to designing maximum flexibility into your environment: the IT staff must make everything work together, and be prepared to shift gears at a moment’s notice. To me, that’s the silver lining to this otherwise dreary story of no loyalty and diminishing options. More than ever, it’s the people who make the difference.

Back in the day, hardware was expensive and people were not. Today, if you're still running your own hardware on premises, the hardware is cheap and the people are expensive.

Perhaps the greatest legacy of the HP 3000, and what will ensure our continued leadership in IT, is the hard-earned knowledge of what’s a best practice and what is not.


Worst Practices: Staying on HP's 3000s?

By Scott Hirsh

Now that the shock of HP’s end-of-life decision for the HP 3000 has worn off, was it a “worst practice” to be an HP 3000 user during the platform’s final years as an HP product? What could we have done differently, and how do we avoid painting ourselves into a technology corner in the future?

The key is a concept that vendors understand intimately – transfer cost. Transfer cost is the cost of changing from one vendor or platform to another. For example, switching phone carriers involves low transfer costs. You hardly know it happens. But changing from one operating system to another – say, HP 3000 to Solaris – means high transfer costs. Vendors try to make transfer costs high, without being obvious about it, to discourage customers from switching to competitors.

It is your job to identify potential transfer costs in your technology decisions, and to keep them as low as possible. One way to lessen risk is to spread the risk over multiple platforms and vendors. Trust no one, and always have a Plan B.

This means making the assumption that everything that has been happening – vendor consolidation, commoditization of hardware and the subordination of operating system to DBMS and application – will continue unabated.

The days of the “HP shop” are long over. Even if you've decided to standardize on HP, Sun, or IBM, you should do so with the knowledge that one day you may need to switch gears abruptly. In other words, these companies are noted for their legacy hardware, which you must be prepared to dump for another brand with as little pain as possible.

Start with a networked storage platform that supports as many operating systems as possible. This is the foundation layer for your IT infrastructure.

The best Total Cost of Ownership in IT is based on consolidation. However, that doesn’t necessarily imply homogeneity. It’s a matter of degree. It’s a matter of physical location as well.

Software drives hardware. Choose DBMSs, applications, and tools based on their support for multiple operating systems and hardware. Be cautious regarding any decision that locks you into one vendor. For example, SQL Server-based solutions, which only run on Wintel, will have higher transfer costs than Oracle or Sybase solutions.

Keep your vendors honest, but at the same time don’t underestimate the value of a true partnership. One company I consulted for dropped HP after learning that HP felt they “owned” them. Any time one side thinks they have the other over a barrel, there’s bound to be trouble. We’re all in this together.

No loyalty

We now know more than ever that there is no loyalty on either side of the bargaining table. The IT culture of planned obsolescence has accelerated, and any hope that a technology vendor will watch out for the customer is laughable at best. Ironically, in the computing arena, it’s IBM that seems to protect its customers the best. The shop I managed for 12 years was an IBM System/3 to HP 3000 Series III conversion. Who could have imagined?

Mix and Match

For the longest time I enjoyed being an HP 3000 user and an HP customer. Rather than see the HP 3000 as a “proprietary” platform that I was locked into, I looked at it as an integrated platform where everything was guaranteed (more or less) to work together — unlike the emerging PC world, where getting all the various components to work together was a nightmare.

But around the time that HP decided commercial Unix was the next big thing, the concept of heterogeneous computing was reaching critical mass. As discussed in my last column, the glory days of the HP 3000 were just too easy. IT decision makers seemed to have a complexity death wish, and we live with this legacy. Consequently, the way to lessen risk today is to spread the risk over multiple platforms and vendors. Trust no one, and always have a Plan B.

This means making the assumption that everything that has been happening for the past few years – vendor consolidation, commoditization of hardware and the subordination of operating system to DBMS and application – will continue unabated.

Separation of OS and Hardware

When the concept of hardware independence first manifested itself in the form of Posix, I was intrigued. Was this too good to be true, the user community having the upper hand on its technology destiny? Perhaps not the holy grail of binary compatibility among Unix flavors, but a quick recompile and hello, new hardware. Well, it was too good to be true, and nobody has talked about Posix lately, at least not that I’ve heard.

Likewise for Java. Write once, run everywhere — slowly. Yes, there are lots of handy applets and specialized tools that are Java based, but many of these Java applications use “extensions,” the scourge of openness.

Two main operating systems that facilitate hardware independence are Linux and Windows. Each has its issues from the standpoint of transfer costs. Linux, of course, comes in several flavors, all based on the same kernel but tweaked just enough to derail binary compatibility. (Can’t we all just get along?) And Windows is from Microsoft, which knows something about locking people in and then shaking them down. But while these two options are not without their problems, they represent at least the short-term future of computing.

Of the two hardware-independent operating system solutions, Linux seems to me the better story. Clearly the flavor is a major decision, with Red Hat having the most support from major hardware vendors. But I have seen other distributions – notably SuSE – adopted in large organizations, so don’t assume there is only one choice. The idea is not to turn this into a “Linux everywhere” discussion, but to illustrate the concept of Linux as a means of avoiding being painted into a corner.

But will everything run on Linux? No. You almost certainly will need some kind of Windows presence, although I do business with some companies who absolutely, positively want nothing to do with Windows (and Microsoft). But that’s not typical. Most of us in IT resign ourselves to doing at least a little business with Microsoft.

Microsoft, however, has shown itself to be the boa constrictor of software companies. They never stop squeezing, especially when they know they have your critical applications. The hardware independence story is good, but Microsoft substitutes software dependence. Proceed with caution.

But the principle here is that even if you choose an operating system that runs on only one vendor’s hardware, you can at least mitigate the risk by choosing a DBMS and applications that can be moved to other hardware and operating systems if necessary.


Carly's exit sparked new hopes for 3000

Newswire Classic

March 2005

After board demands CEO’s resignation, 3000 sites ponder new future

The CEO who hawked change as HP’s new mission — and so sparked the 3000’s exit from the company’s lineup — has left HP in a resignation that made some customers hope for a change in HP to alter the 3000’s fate.

But HP’s board of directors, after demanding Carly Fiorina’s resignation on Feb. 9, has shown no signs of changing the company’s commodity and consumer-driven strategy, one which hurried the 3000’s HP exit.

Interim CEO Bob Wayman told stock analysts the next CEO will need to march to the tune Fiorina composed during the five-plus years she headed the company.

The company won’t change because its board hasn’t changed much. Venture capitalist Thomas Perkins came on board in early February, but the list of directors includes a group of officers who have approved Fiorina’s plans to grow HP. The board said it removed its CEO and chairman because she did not execute HP’s strategy well enough. The company’s earnings growth has disappointed analysts in recent quarters.

Wayman said during an analyst briefing that the board is looking for a CEO to work with the current strategy: Offering a broad portfolio of products while operating a printer business integrated with the rest of HP.

“While they won’t preclude any open discussion on a new CEO’s view of what the future strategy should be,” Wayman said, “we are looking for a CEO who also embraces that strategy, in all probability.”

Fiorina, who earned $44 million in signing bonuses to join HP in 1999, left the company with a $21.1 million payout. Her contract also provides $50,000 in job counseling services, a point of irony that didn’t escape HP 3000 customers who have seen careers ended or altered after the 3000’s cancellation.

“She was the executioner,” said John Dunlop of 3000links.com. “She chopped and pruned product lines and employees. Unfortunately for the HP 3000 community, the HP 3000 was one of the early casualties. Thus she became the name synonymous with the death of the HP 3000.”

Another customer said Fiorina represented a strategy of judging a customer by what they’ve bought lately. The 3000 customer has been expected to buy what HP produces after it said it won’t offer the HP 3000.

“Carly was viewed by many to be of the mindset that our value as customers was limited to our wanting to buy what HP had to sell,” said Russ Smith of credit union Cal State 9. “It was not that our value was inherent as customers, period — and that HP should produce what we need.”

The majority of customers were realistic about how much change would filter down to the HP 3000 issues that remain at the company. “HP now has bigger problems such that this issue won’t even be on the radar,” said John Wolff, the CIO at LAACO, Ltd and vice-chair of OpenMPE. “Not only did they break the HP 3000 product line, but Carly broke the whole company — 60 years to build it, six years to wreck it.”

Fiorina was the first CEO ousted from HP in such a public manner: Stories of the forced resignation aired on all major US TV networks; HP called a press conference to explain on the day it removed Fiorina. She was not the first to leave involuntarily, though. Another HP CEO, John Young, “was politely retired when Dave Packard came back out of retirement to put the company back onto the right path in October, 1992,” Wolff said. “Young was paid $1 million for ‘unused vacation time.’”

An enterprise change?

Some 3000 customers said they were hopeful a better enterprise server strategy would emerge under a new CEO. The majority of customers responding to a spot poll by the NewsWire reported they were migrating away from the server, a position that has them considering HP’s server alternatives. For many, the damage has already been done.

“We lost all faith in HP’s strategy some time ago,” said Don Baird, president of EnCore Systems. “We do not rely on anything HP except our 3000s, which we are replacing with non-HP solutions.”

HP’s change of heart is having an impact on a choice of vendor for migration sources. At the Anchorage, Alaska light and power utility, systems analyst Wayne Johnson is moving to Windows — but HP’s moves with the 3000 make the utility wary.

“Part of my company’s fear has been the HP 3000 is going away, so let’s steer clear of any other HP product,” he said. “Could the change mean that the HP 3000 will be resurrected and not meet its demise in 2006? Our Windows platform is not HP.”

Some drew a direct link to Fiorina’s strategy and the slide of HP’s enterprise business. “HP lost its personality under Carly. Their niche was solid, reliable computing platforms, not PCs and not iPods,” said John Lee of reseller Vaske Computer Solutions. “Hopefully, the new CEO will re-focus the company on its core strengths, one of which used to be enterprise computing.”

Even those moving to HP’s Unix systems want to believe more change in management is on the way. “I really hope the shakeup continues down the line,” said a long-term HP 3000 manager who wanted his name withheld. At his firm, HP 9000s are replacing HP 3000s. “Maybe we can get back to a point where the customer and our needs come first, and the profits and sales will follow,” he said. “Since the Compaq-HP merger, the quality of our service programs and sales support has dropped.”

The CEO’s departure won’t change much for Pivital Solutions, a company that signed on for the last year of HP’s authorized 3000 sales and now offers third-party support for the server and MPE. “The only hope I still loosely hold is that they will sell off the enterprise systems group before they run it into the ground,” said president Steve Suraci.

Operator seeks operations whiz

HP’s executives say the company now needs a CEO with better operational skills. Its top sales officer Mike Winkler, quoted in a published report from the recent Goldman Sachs Technology Investment Forum, said HP’s fortunes would rise with a CEO like Lou Gerstner, the IBM leader who came in to turn around that company in 1992. In that same year, HP’s founders Bill Hewlett and Dave Packard asked LaserJet czar Dick Hackborn to take the CEO reins from John Young. Hackborn wouldn’t leave his home in Boise, Idaho to take the job and retired a year later.

But Hackborn, an operator behind the scenes in most of HP’s business choices since his retirement, played the lead role in bringing Fiorina to HP after the company felt it missed out on the Internet boom during CEO Lew Platt’s watch. Another report, published in the wake of the Fiorina ouster by BusinessWeek editor Peter Burrows, says Hackborn acted as the catalyst to spark the board’s removal of Fiorina.

Now Hackborn and the rest of the HP board will try to find an operational, COO style of CEO. HP will change CEOs because of Fiorina’s inability to execute, not over her direction. “She had a strategic vision and put in place a plan that has given HP the capabilities to compete and win,” HP’s press release assured investors.

The strategy which Hackborn has pulled HP into — commodity sales like printers, with less direct customer contact — relies on resellers and outside distributors to stay in touch with all but the largest customers. Typical HP 3000 shops, working for small and medium-sized businesses, say they have not felt much contact with any HP operation except its support group.

“Working for a small company, I don’t feel that I or my company has ever been part of an ‘enterprise systems strategy,’” said John Bawden of health insurance provider QualChoice, an HP 3000 shop. “Generally we are ignored unless we have the energy and the need to go to HP for something.”



HP 3000s and the time to end Daylight Saving

During the 1990s, Shawn Gordon wrote a column for the NewsWire on VESOFT products and reviewed software for us. He also left the 3000 world for the novel pastures of Linux, long before that OS was a commonplace IT choice. His departure was an example of thinking ahead. Along those lines, Gordon has a classy article on his website about Daylight Saving Time. DST is a failed experiment that costs everyone more money. California, where the HP 3000 was born, is on the path to eliminating DST. Arizona and Hawaii are already non-DST states.

DST became a thorn in the side of 3000 shops because it had to be accommodated with customized code. The cutover days, into Saving and then out of Saving, were different every year. A handful of clever jobstream hacks lurched systems into and out of time zones that were working perfectly until the law said every zone had to shift forward. Or back.
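On MPE/iX the usual workaround was a pair of scheduled jobs that nudged the system’s time zone offset at each cutover. Here is a minimal sketch, assuming a US Pacific system (W8:00 in standard time, W7:00 in daylight time); SETCLOCK is the MPE/iX command for this, but verify the TIMEZONE syntax against the MPE Commands manual for your release before streaming anything like it:

:comment spring forward: shift from Pacific Standard to Pacific Daylight
:setclock timezone = w7:00

:comment fall back: return to Pacific Standard
:setclock timezone = w8:00

And because the cutover dates drifted from year to year, somebody still had to reschedule those jobs annually, which is exactly the customized-code burden described here.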

Here's Shawn's article, as polished as all of his offerings have been in both software and writing. You can write your US Representative to get this clock switching put away for good. The US Senate already is hearing a bill about this, although it's the misguided solution to make DST permanent. The alleged Saving has only been going on since HP first made 3000s. Since HP's given up on that, maybe the US can give up on Daylight Saving.

By Shawn Gordon

One might think that the societal contributions from New Zealand mostly consist of the band Crowded House and sheep-based products, but it is New Zealand entomologist George Vernon Hudson whom we have to thank or curse for modern Daylight Saving Time (DST). Benjamin Franklin is often credited with the idea, but that is based on a satire he wrote in 1784 about Parisians rising late in the day. Hudson wrote and presented a paper to the Philosophical Society in 1895 proposing a two-hour shift. This was entirely because he worked a “shift schedule” and didn’t have enough daylight left after work during certain times of the year to collect bugs. His proposal was entirely self-serving: if he couldn’t get the time off, he’d force society to change.

Shortly after, and totally independently, the prominent English builder and outdoorsman William Willett noted in 1905 how many Londoners slept through the beautiful summer days; as an avid golfer, he also didn’t like playing at dusk. Willett is often wrongly credited as the man who came up with DST. Again, the idea was totally self-serving, and a desire to control other people's behavior. Willett was able to get Parliament to take up the proposal, but it was rejected; he continued to lobby for it until his death in 1915.

DST wasn’t formally adopted by anyone until WWI, in 1916, as a way to conserve coal. But again, this only controlled behavior; it didn’t change time. The same results could have been had by just starting everything an hour earlier. After the war, DST was abandoned and only brought back sporadically, notably during WWII, and did not become widely adopted until the 1970s energy crisis.

In 1973, President Nixon changed the US to year-round DST, which of course was silly; everyone could simply have started earlier. The act was repealed when it resulted in a marked increase in school bus accidents. A study done by Stanley Coren of the University of British Columbia in 1991 and 1992 showed an 8 percent jump in traffic accidents on the Monday following the “spring forward” time change. After some jumbling around for a couple of years, the schedule was finally settled in 1975 as the last Sunday in April through the last Sunday in October. Making changes to computer clocks in those days was not trivial, and this was an enormous burden on the budding technology sector.

In the mid-1980s, the Sporting Goods lobby and associated lobbyists were able to convince Congress to extend DST to the first Sunday in April, which increased DST from six to seven months of the year in 1986. Computers were now far more prevalent and the change had an even larger impact and cost that everyone just had to eat. Simply having to change the clock twice a year was an enormous burden.

The systems I worked on at the time required a restart to change the clocks. That meant making sure all batch processing was complete so you could have a quiet 20 minutes or so to restart the systems in the middle of the night, which in turn required a human being to be sitting there.

In 2007, as part of the Energy Policy Act of 2005, DST was extended another four weeks, so the United States and Canada are now on it for almost two-thirds of the year. The claim was that the additional four weeks would save 0.5 percent of the country’s electricity per day, enough to power 100,000 homes. There is a provision in the act to revert to standard time if those savings did not materialize. A 2008 study examined billing data in the state of Indiana before and after its 2006 adoption of DST and showed an increase of 1 to 4 percent in electricity use, due to extra afternoon cooling and increased morning lighting costs.

All public safety claims made in the 1970s by the US DOT have been discounted by later empirical studies by the NBS. Similar claims by law enforcement of reduced crime were also discounted, because the sample set was too small (two cities) and did not allow for any mitigating factors.

In November 2018, California passed Proposition 7, which repealed the Daylight Saving Time Act of 1949 that kept California’s clocks in sync with changes made at the federal level. This is the first step in allowing California either to (sensibly) cancel DST altogether, like Arizona and Hawaii, or to (foolishly) stay on DST year-round. If a state as large and influential as California were to abandon DST altogether, you would likely see a lot of adoption across the US and possibly the end of this silly practice for good. The people who think year-round DST is a great idea don’t remember when we did it before.

Arbitrarily changing something like the clock has huge effects and costs across society, as previously noted. Major systems can go down from bad date calculations; there was an outage in Microsoft Azure on leap day 2012 because of a simple date math bug. Politicians and lobbyists are oblivious to these costs and concerns, and they blithely change the clocks around as though they were Olympian gods who command time itself.

Ultimately, it is the arrogance of politicians who seem to think they are creating or giving you an extra hour of daylight, when in fact they are just controlling everyone’s behavior. There are no energy savings; quite the contrary. It doesn’t improve public safety; it does none of what it is purported to do. What it does have is a deleterious effect on public health and safety, including a negative impact on kids’ performance in school: many studies show that kids do better when school starts later in the day, and DST works against that.

DST imposes massive hidden costs and dangers in adjusting delicate computer software systems. Modern life does not require DST. Our lighting energy costs are trivial compared to our other uses, like computers and TV. Flexible work arrangements and a global economy make shift work mostly a thing of the past. It’s time to move to the 21st century and drop this anachronistic legislative holdover that was developed by arrogant and self-serving men. Write your Senators and your Representatives and let them know what you think.

Photo by Darwis Alwan from Pexels