Wednesday, June 8, 2016

Profiles Inefficiency

I got into this racket in 1971 when IBM decided I should be a programmer rather than an accountant, and who was I to argue?  They introduced me to the IBM 029 keypunch machine and the PL/I(F) language along with JCL and utilities.  I began to program, such as it was.  Write a description of the program, convert it to a flowchart, code the program by hand onto keypunch sheets, give the sheets to the keypunch operator and wait for her (always 'her') to produce a deck of cards with neat rectangular holes, proof-read the cards, deliver them to the RJE (Remote Job Entry) station downstairs (we had a Mod 25, I think, that was used as a glorified card reader), pick up the output when ready along with the cards, check the SYSOUT, correct the program, repeat until success.

A few years later, the department got a shipment of 2741 terminals, Selectric-y things with the ability to communicate with computers far away.  There was a sign-up sheet where one would bid for time on the few terminals, and we would wait in our offices (yes, real offices) for the phone to ring with the news that it was 'time' to go work on the terminal.  Log on to TSO, edit the datasets, compile the program, build the JCL, run it, check the output, fix the program, fix the JCL, repeat until success.  Later, the 2741s were replaced by 3270-family 'green screen' paperless terminals, but the process, while faster, remained essentially the same.

Then, along came SPF, the Structured Programming Facility, with a suite of utility functions and an editor...  a WYSIWYG editor!  What You See Is What You Get.  Each improvement made the process easier and faster and less error-prone.  It was a long time before anyone realized those three things were operationally connected.

SPF mutated into ISPF, the Interactive System Productivity Facility, and with it came a whole raft of new features...  and new problems.  One of the new features was something called "edit profiles".  ISPF would handle different types of data differently, and the user could control this by strategically naming datasets.  The profiles were named based on the low-level qualifier (LLQ) of the particular dataset plus its record format (F or V) plus its record length, so you wouldn't get a PLI-F-80 profile mixed up with a CNTL-F-80 profile, even though the data had the same general 'shape'.  Alas, ISPF only made provision for 25 profiles, and when a user created a 26th profile, the least-recently-used profile would be summarily discarded to make room for the new one.  Because users were never given much education in 'profiles and their care and feeding', and since the system administrators were often system programmers whose use of ISPF was rarely more than rudimentary, the stage was set for all kinds of mischief:

The Jones Company uses COBOL and PL/I, along with CLIST and REXX, in a z/OS TSO setting.  Everyone uses ISPF.  There are no standards regarding dataset names beyond the first two nodes;  the LLQ is never even considered as something that ought to be standardized.

Arthur is a programmer.  He has datasets named (we ignore here the high-level qualifiers and concentrate on the LLQs) PAYROLL.PLI, TOOLS.PLI, REPORTS.PLI, and MISC.PLI.  They all contain PL/I source and all use the same profile, PLI-F-80.  He also has PAYROLL.CNTL, TOOLS.CNTL, REPORTS.CNTL, and MISC.CNTL.  These all contain JCL and similar data and all use the same profile, CNTL-F-80.  Betty has datasets named PLICODE.STUFF, COBOL.STUFF, and JCL.STUFF.  They all use the same profile, STUFF-F-80, even though their contents are radically different.  Betty is constantly changing profiles in ISPF Edit to get the behavior she wants from the editor.  Cheryl has datasets named ACCTG.SOURCE (a mixture of PL/I and COBOL), and PROTOTYP.SRC (a similar mixture of languages) along with ACCTG.JCL and PROTOTYP.JCLLIB.  Cheryl has four profiles: SOURCE-F-80, SRC-F-80, JCL-F-80, and JCLLIB-F-80.  Betty asks Cheryl to look at a module in COBOL.STUFF that won't compile in hopes that Cheryl might see something wrong.  Cheryl views Betty's COBOL.STUFF and suddenly gets a new profile, STUFF-F-80, that she never had before, and it's different than Betty's STUFF-F-80.  Cheryl is stumped by the problem and asks for Arthur's help.  Presently, Arthur has a STUFF-F-80 profile and his, too, is different than either Betty's or Cheryl's.  Neither Arthur nor Cheryl edited Betty's data;  they both used VIEW which, unfortunately, is affected by the problem since it uses edit profiles.

We're dealing here with just three people and already the problem is showing its potential.  Imagine a setting with 200 programmers, technicians, testers, and so on, all operating guidance-free.  One day you go into edit on one of your datasets and the highlighting is wrong, the data is shifted into all-CAPS whenever you type something new, and your tab settings seem to have gone away.  "What the hell happened to my profile?" you ask.  The answer is that it got purged when Larry had you look at a stack of ISPF skeletons he found in an archive.  You created profile #26 and lost one that you had counted on keeping.  P.s.:  when you went back into edit on that all-wrong data you re-created a profile for that dataset and purged another, different profile.  Which one?  I have no idea and neither does anyone else.

Is there an answer to the problem of "where did my profile go?"?  There is, and the answer is 'standards'.  Someone in authority — and the higher, the better — must say:  "All xxx data must reside in a dataset whose LLQ is aaa" and repeat as necessary for all known common types of corporate data.  Exceptions, where a case can be made for an exception, are granted by managers.  There also needs to be a catch-all category for material that doesn't fit neatly anywhere else.

In recent years, ISPF changed the protocol a little.  Profiles can be locked or unlocked.  When EDIT goes looking for a profile to purge, it selects the least-recently-used unlocked profile.  If there aren't any of those, the least-recently-used locked profile gets tossed.  Also, there's something the sysprogs can do post-installation that allows each user to have 37 profiles, somewhat easing the problem if not eliminating it.  Regardless, be careful about how you name your datasets and urge everyone else to be just as careful.  If corporate doesn't set standards, try to get your department or division to do it. 

Sunday, May 22, 2016

Quick! Get me member (Q47MMT1)!

Imagine a partitioned dataset with 8,000 members (or more).  This is getting into the range where finding the directory entry for a specific member is becoming a real chore and is chewing up cycles.  I heard of an imaginative way to speed up the process.

Define the partitioned dataset as a Generation Data Group and make the group large enough that, when the dataset is split, searching the directory of each piece is less of a chore (it will be, if only because each fragment of the whole is smaller).  Let's say, for the sake of argument, that we break it into 27 generations, one for each letter of the alphabet plus a catch-all.  Now copy all the members beginning with non-alphabetics into generation #1, all the "A"s into #2, all the "B"s into #3, etc.  When you access the group via its base-name (without specifying a generation) you get them all concatenated in 27-26-25...3-2-1 order.
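
Just to make the scheme concrete, here is a little REXX sketch (mine, not part of the original trick) that encodes the bucketing rule just described:  non-alphabetics go to generation #1, 'A' through 'Z' go to #2 through #27.

/* REXX -- sketch only:  which of the 27 generations holds a given */
/*         member name, per the split described above              */
arg member .                              /* e.g. Q47MMT1          */
alpha  = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
bucket = pos(left(member,1),alpha) + 1    /* non-alpha -> 0+1 = 1  */
rel    = bucket - 27                      /* newest (the Zs) is 0  */
say 'Member' member 'lives in generation' bucket 'of 27 (relative' rel')'

For 'Q47MMT1' that works out to generation #18, which is exactly where the walkthrough below ends up.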

When you look for member 'Q47MMT1', the directory of generation #27 is scanned first, but member names are always in alphabetical order and this directory starts with 'Z...'.  That's not it; skip to G0026V00.  Its first entry starts with 'Y...'.  Nope.  G0025V00 starts with 'X...', G0024V00 starts with 'W...', G0023V00 starts with 'V...', G0022V00 starts with 'U...', G0021V00 starts with 'T...', G0020V00 starts with 'S...', G0019V00 starts with 'R...', G0018V00 starts with 'Q...'.  Got it!  You quickly find the required member and processing continues.  What's happening here is that instead of searching through 8,000+ directory entries and finding what you seek after (on average) 4000-or-so lookups, you spent about 13 lookups dismissing the wrong directories plus roughly 150 (8000 / 27 / 2) in the right one.  As the original partitioned dataset gathers more members, the contrast gets more stark.  At some point it is so stark that someone will suspect the quicker method failed simply because it couldn't possibly have finished that fast.

Monday, May 9, 2016

ALIAS is not a four-letter word

Are you one of those who think "Alias?  Why bother?"?  Aliases do have their uses, and with a little imagination they can be leveraged to deliver surprising productivity gains.

Aliases come in two flavors:  member aliases and dataset aliases.

Member aliases are nothing more than entries in a partitioned dataset's directory.  Each such entry holds the TTR (track and record) of an existing member — called the "base member".  If you edit an alias and save it, BPAM writes the saved text at the back of the dataset and records the new TTR in the directory entry for the alias, making it a base member in its own right (no longer an alias of some other base member).  But as long as it is an alias, any reference to the base name or any of its aliases points to the same code.  Most languages give a routine a way to learn the name by which it was called, and the logic may branch differently for each (or not — it depends).
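
In REXX, for instance, the exec name reported by PARSE SOURCE (the third token under TSO/E) is the name the caller actually used, so one body of code can branch on whether it was reached by its base name or by an alias.  A minimal sketch (the member names PAYCALC and PAYRPT are invented for illustration):

/* REXX -- sketch:  one member, two names.  Stored as PAYCALC with */
/* an alias of PAYRPT, it can tell which name was used to call it. */
parse source . . execname .
select
   when execname = 'PAYCALC' then say 'Base name:  run the payroll'
   when execname = 'PAYRPT'  then say 'Alias:  print the report only'
   otherwise                      say 'Called as' execname
end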

Dataset aliases provide the same sort of facility but at the dataset level.  These aliases must be kept in the same catalog that holds the dataset name for which the alias is created.  The kicker here is that the alias and the dataset it aliases must, in practice, have the same high-level qualifier.  If they didn't, their high-level qualifiers could resolve to different user catalogs, and then the alias couldn't live in the same catalog as the name it points to.

So, what can you do with a dataset alias?  Why would you bother?  Well, here's a practical application that can save hours of updating and weeks of grief:  You have a dataset (or a series of datasets) that IBM or some other maintainer periodically updates.  Maybe it's the PL/I compiler or something similar.  You have a cataloged procedure, a PROC, or possibly several of them that programmers use to compile programs.  If your PROC(s) all reference SYS1.COMPLIB.V04R012.LOADLIB, then when IBM sends down the next update, somebody is going to have to change all those PROCs to reference ...V04R013... and slip them into the PROCLIB at exactly the right moment.  Usually that means Sunday afternoon when the system is quiesced for maintenance and only the sysprogs are doing any work.  Or...

You could alias whichever is the currently supported version as SYS1.COMPLIB.CURRENT.LOADLIB.  When the new version has been adequately tested and is ready to be installed for everyone's use, you use IDCAMS to DELETE ALIAS the old one and DEFINE ALIAS the new one.  These two operations will happen so fast it will be like flipping a switch:  one instant everyone is using V04R012, and the next they're using V04R013.  The system doesn't even have to be down.  You can do it Tuesday during lunch.  Nobody's JCL has to change, but (more importantly) none of the PROCs have to change, either.  Your favorite beta-testers can access the next level just by overriding the STEPLIB.  Everybody else just uses the PROC as-is.  If somebody reallyreallyreally needs to get to the prior version, that, too, is just a STEPLIB override.
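
Here is a sketch of that switch-over driven from REXX through the TSO DELETE and DEFINE ALIAS commands; the dataset names are the hypothetical ones used above, and error checking is left out:

/* REXX -- sketch of the cut-over:  repoint the CURRENT alias      */
/* from the old release to the new one.                            */
address tso
"DELETE 'SYS1.COMPLIB.CURRENT.LOADLIB' ALIAS"
"DEFINE ALIAS( NAME('SYS1.COMPLIB.CURRENT.LOADLIB')" ,
             "RELATE('SYS1.COMPLIB.V04R013.LOADLIB') )"
if rc <> 0 then say 'DEFINE ALIAS failed, rc='rc

Two commands, a second or two of elapsed time, and every PROC that points at the CURRENT alias is now picking up V04R013.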

I think (but I don't know for certain) that you can write an ACF2 rule that allows certain privileges to an ALIAS that are prohibited to the BASE (and vice versa), but the most amazing ALIAS-trick (as far as I'm concerned) is the ability to swap one dataset for another with none of the users being any the wiser.

Saturday, March 22, 2014

Proverbs 27:17

One of my favorite sayings is "God made programmers in pairs for a reason."  Some of my friends will recall me telling them why it's so important that programmers review other programmers' code — and why it's so important that they have other programmers review their code: iron sharpens iron, the lesson of Proverbs 27:17.

"Gee," I can hear some of my friends exclaim, "I didn't think Frank was that into Scripture!"  Relax, I'm as ignorant of Scripture as you suspected I was, but I do have friends who are otherwise endowed.

First, however, the background: I remember seeing my Mom getting ready to slice the Sunday roast: she would take two carving knives and whisk them rapidly together — snick snick snick snick — blade-to-blade alternating edges.  Both knives got sharp.  I didn't know why, and it wasn't important until many years after she left us.

When I mentioned this technique to a friend and colleague, Don Ohlin, years later as a justification for holding informal (at least) code reviews, his response was "Proverbs 27:17."  As you might have anticipated, my response was "Huh?"

"As iron sharpens iron, so one person sharpens another," he replied.

Whatever your opinion of the Bible, there's a heap o' wisdom nestled within.

Now, schedule the damn code review and stop stalling.

 

P.s.: God made parents in pairs for very much the same reason.

 

Thursday, October 24, 2013

Retrospective

So, here I am on the feather-edge of retirement (I'll be 70 in a few months) and I'm still learning things.  I had an insight last night that kept me awake mulling it.  My last contract was with Bank of America in Texas and, while it was fun, it was also more than just a little frustrating.

When I first started looking at the code I would be working with at BofA, I was confused.  Everybody these days writes 'strictly structured', right?  No, wrong, and that was what was so confusing.  Last night's insight cleared away all the cobwebs...  just in time for Hallowe'en.

 

There are two ways to approach a programming problem.  In the first, you start out by assuming that this is an easy problem;  you have to do A, B, C, and finally D.  Voila!  You sit down and write code and it comes out as a single long module.  It may be fairly complex.  (This was typical for most of the BofA code I saw.)

If, instead, you assume that the problem will be complex, that you have to do A, B, C, and finally D, you will sit down and write a top-level module with stubs for the called subroutines.  Then you will write the innards of the subroutines, probably as routers with stubs for their called subroutines.  This process will continue through n levels until each subroutine is so simple it just doesn't make sense to break it down further.  The resulting program will be longish, but (all things considered) pretty simple regardless of the initial estimate of complexity.

I (almost) always presume a programming task will be complex.  If that turns out to be wrong, no big loss.  If I were to assume some programming task were simple and it turns out not to be quite as simple as I originally thought — that would hurt.  It would hurt because halfway through writing that 'one long module', I would discover the need for the same code I used in lines 47 through 101.  Stop.  Grab a copy of that code.  Create a subroutine at the end of the code.  Insert CALLs at both places.  Continue writing that 'one long module' where you left off.

If that scene happens more than once or twice, what we wind up with is a long main module with several calls to randomly-placed subroutines.  The coefficient of complexity has just been bumped up, and the bump can be substantial.  If it's one of the newly-created subroutines whose function needs to be partitioned, the code soon takes on a distinct air of 'disorganization'.

Do I have to point out that there's way too much overly-complex and disorganized code out there and running in production?  No, I probably don't;  we've all experienced Windows.

So, there's a built-in penalty for assuming simplicity, and it turns out this penalty applies (in REXX, at any rate) no matter how complex the eventual program actually is.

If a (REXX) program is written as 'one long module', possibly with a few random subroutines for function required in more than one place, diagnosis becomes a problem.  Unless the programmer has anticipated bypassing iterative loops, a trace will have to endure every iteration in every loop before getting to the next stage.  To avoid this most painful experience, what happens most often with such code is a quick one-time fix to turn TRACE on here and shut it off there.  But then, the program being diagnosed is no longer the program that failed;  it's a modified version of the failing program.

If a (REXX) program is highly-structured, function will be so encapsulated that any error will be isolated to one or a very small number of suspect segments.  Running such a heavily-encapsulated program in trace-mode means that entire trees of logic can be bypassed:  if TRACE is on for a higher-level module, it can be turned off in a submodule (and all its children) but will still be on when control returns to the higher-level module.  The more structured the code, the easier it is to debug.  With one proviso...
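
A small sketch of what that buys you (the routine names are invented):  the TRACE setting in force in a caller is saved across an internal CALL and restored on return, so a well-tested subroutine can silence itself without disturbing the tracing of anything above it.

/* REXX -- sketch:  selective tracing in a structured exec         */
trace r                      /* trace results in the mainline      */
call A_INIT                  /* well-tested:  silences itself      */
call B_PROCESS               /* suspect:  stays fully traced       */
exit
A_INIT:
   trace off                 /* local to this routine and below    */
   do ii = 1 to 1000         /* grinds away with no trace output   */
   end
   return                    /* caller's TRACE R is back in force  */
B_PROCESS:
   say 'this routine is traced, statement by statement'
   return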

You can have a highly-structured program that is nevertheless disorganized.  If, for example, you place your subroutines in alphabetical order, the flow of control will appear chaotic.  Ideally, submodules that are merely segments of an upper-level router should appear in roughly their order-of-execution.  Although they're broken out into separate segments, they still retain the flavor of that 'one long module' insofar as they appear one after the other like the cars of a train.  Reading such code becomes easier because a CALL to a submodule is a call to code which is (probably) physically close by.  (This is not always strictly true.)

COBOL programmers long ago adopted a more-or-less universal convention: they prefix the name of each code segment with a glyph ('D100', perhaps) that indicates its logical position in the complete program.  A COBOL programmer seeing a reference to 'D100-something' in module 'C850-GET-USER-ID' knows to look later in the code for that segment.  The same technique works equally well in all languages, and REXX is not an exception.  (I tend to use alpha-only such that the mainline calls A_INIT, B_something, etc.  Module C_whatever calls CA_blah, CB_blah, etc.  Whatever works...)
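
For what it's worth, a REXX skeleton of that arrangement might look like the following; the routine names are made up, and the point is only the prefixes and the ordering:

/* REXX -- sketch:  segments named by their position in the call   */
/* tree and laid out in rough order of execution                   */
call A_INIT
call B_COLLECT
call C_REPORT
exit
A_INIT:                      /* setup                              */
   return
B_COLLECT:                   /* router:  its segments follow it    */
   call BA_READ_INPUT
   call BB_SORT
   return
BA_READ_INPUT:  return
BB_SORT:        return
C_REPORT:       return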

Exactly the same sorts of things can be said about modifying an existing program.  The 'one long module' requires careful planning and skillful execution when inserting new function or changes to existing function.  Testing the new function is a chore to the same extent diagnosing an error is a chore, and for the same reasons.  Highly-structured code is designed to be modified;  it was written that way.

Summarizing:  a highly-structured REXX program may be a little longer than it (strictly) has to be, but it will be easier to understand and easier to diagnose in case of an error.  This understanding can be enhanced by strategic naming of segments and by arranging the segments to more closely align with the actual order of execution.

Recommendation:  Structure is your friend.  It may be your best friend.

Tuesday, August 21, 2012

What's Wrong With This Picture?

How many times have you seen this and thought nothing of it?

"EXECIO * DISKR FIRST (STEM FIRST. FINIS"
"EXECIO * DISKR NEXT  (STEM NEXT.  FINIS"
do nn = 1 to next.0
   /* parse token from next.nn              */
   /* 35 lines of binary search thru first. */
   if notfound then do
      /* diagnostic message */
      end
   else do
      /* process the match */
      end
end

Not only have I seen this kind of code, I have written this kind of code.  A startling revelation, an epiphany, has rocked my world.

The 'revelation' is this:  in MVS, the first time you say 'READ', you don't just read one record; you read five buffers.  If you're talking about modern DASD and 80-byte records, you've just 'read' something like 1400 or 1500 records. They're all in main storage.  Whatever operation you do to them, you do at main storage speeds.  And "EXECIO * DISKR" doesn't stop at five buffers;  it reads the whole file.  All the heavy lifting, the SIOs (start-I/O operations), has already been done.  You've just spent $11 to read from DASD, and now you propose to save four cents by doing a binary search.  Are we all nuts?

In a situation like this, we should leverage the power of REXX's associative arrays (compound variables whose tails come from the data itself) by spinning through one of those stems and recording every key we find, then letting a keyed lookup tell us immediately, for each record of the other file, whether there is or is not a match.  It's all in main storage, right?  You would need highly-sophisticated and very expensive equipment to discern how much time you saved by doing a binary search over using a sequential process.  The cost of having a programmer write those thirty-five lines of binary search will never be paid back by the time saved.
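
As a sketch of what that looks like (same DD names as the fragment above, and assuming the first blank-delimited token of each record is the key):

/* REXX -- sketch:  replace the binary search with a stem keyed by */
/* the data itself                                                 */
"EXECIO * DISKR FIRST (STEM first. FINIS"
"EXECIO * DISKR NEXT  (STEM next.  FINIS"
seen. = 0                          /* default:  key not present    */
do ff = 1 to first.0
   parse var first.ff key .        /* first token is the key       */
   seen.key = 1
end
do nn = 1 to next.0
   parse var next.nn key .
   if seen.key then
      nop                          /* process the match            */
   else
      say 'No match for' key       /* diagnostic message           */
end

No thirty-five lines of search logic and nothing to get subtly wrong:  one pass to load the keys, one pass to check them, all of it at main-storage speed.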

Edsger Dijkstra once proposed that a programmer worrying about how fast (or slow) hir code would execute was worrying about the wrong thing.  "Get a faster computer", he advised.  Easier said than done in many cases, but always the optimal solution.

That's not, however, what we're seeing here.  It is truly "penny-wise and pound-foolish" to pay for all that I/O and then fret over squeezing microseconds out of data that is already sitting in main storage, immediately accessible (let's face it) for free.

I think I may have written my last binary search.  What do you think?

Friday, August 10, 2012

Embedding ISPF (and other) assets

When I write a tool for myself — for my own use — I will typically include ISPF assets at the bottom of the code and have software extract them as part of the initialization phase.  I rarely load panel text to ISPPLIB or skeletons to ISPSLIB.  There are several advantages to keeping your ISPF assets 'local':

  • I/O is reduced, sometimes very substantially reduced
  • changes made to these assets are reflected immediately without having to be in TEST-mode and certainly without having to leave ISPF and restart it
  • there is no doubt about the identity of subsidiary elements
  • there is no danger of duplicate member names
  • when distributing or installing, there is only one element to distribute or install: the enclosing REXX code

When ISPF is invoked by a non-developer (call it 'standard mode'), its habit is to cache any panels, skeletons, or messages that it uses.  It keeps them in storage so that if the same element is re-used, ISPF can get it from the cache rather than doing I/O to get a fresh copy.  Obviously, if you're modifying that element, saving it to its library won't do a thing for your current session.  To get that new panel, you have to exit ISPF to READY-mode and restart ISPF.  That takes a lot of I/O because on start-up, ISPF opens and reads ISPPLIB, ISPSLIB, ISPTLIB, ISPMLIB, and ISPLLIB so that it knows all the available membernames and where they're located — in case you ask for one of them.

Developers who work with ISPF services generally invoke ISPF in TEST-mode when developing because in TEST-mode, ISPF caches nothing and always does I/O to handle service requests.  If you've just saved a change to a panel and you're in TEST-mode, your next DISPLAY request will retrieve the new version.  The penalty you pay for this is that every service request is handled via I/O.

Embedding your ISPF assets gives you the best of both worlds:  because ISPF caches elements based on DSN+membername, re-extracting ISPF assets at execution-time creates a new dataset (in VIO) and ISPF recognizes that this member XYZ is not the same XYZ as that in the cache, so it reloads a fresh copy.  All other service requests are handled via the cache — because you don't use TEST-mode.

It gets better:  When you invoke the enclosing REXX, it all gets read into storage immediately, and that includes your panels and skeletons.  Extracting them thus happens at 'core speed' and if they're written to VIO datasets, that happens at 'core speed' as well.  No need to read the panel(s) from ISPPLIB or the skeleton(s) from ISPSLIB separately.  It's already here.

You no longer have to search through the libraries to ensure you're not using a duplicate membername — because your data is not going to live in any of those libraries.  The elements embedded in your application are extracted and loaded to a library which is then LIBDEF'd into a preferential position to all others.  If you use a name replicated elsewhere, you will only use your element;  the other will be 'masked' by virtue of being too far down the concatenation.  Of course, when the application ends, LIBDEF tears down those purpose-built libraries and the environment is restored to its pre-invocation state.  Neat.
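
Here is a sketch of the general idea; it is emphatically not the code referenced at the end of this post.  The marker convention (a ')))PLIB panelname' line), the work dataset name, and the unit name are all placeholders, and a small cataloged work PDS stands in for VIO just to keep the sketch short.

/* REXX -- sketch only:  extract one embedded panel, stored at the */
/* bottom of this exec after a made-up ')))PLIB panelname' marker, */
/* write it to a work PDS, and LIBDEF that PDS into first place.   */
address tso
workdsn = "'"userid()".SKETCH.PLIB'"            /* placeholder dsn */
"ALLOC F(@WRK) DA("workdsn") NEW CATALOG REU UNIT(SYSDA)" ,
      "RECFM(F B) LRECL(80) TRACKS SPACE(1,1) DIR(2)"
"FREE  F(@WRK)"
do ii = 1 to sourceline()                  /* find the marker line */
   if word(sourceline(ii),1) = ')))PLIB' then leave
end                                        /* assumes it is there  */
parse value sourceline(ii) with . panelname .
cnt = 0                                    /* gather the panel text*/
do jj = ii+1 to sourceline()
   if left(sourceline(jj),3) = ')))' then leave
   cnt   = cnt + 1
   p.cnt = sourceline(jj)
end
"ALLOC F(@WRK) DA('"userid()".SKETCH.PLIB("panelname")') SHR REU"
"EXECIO" cnt "DISKW @WRK (STEM p. FINIS"
"FREE  F(@WRK)"
address ispexec
"LIBDEF ISPPLIB DATASET ID("workdsn")"     /* ahead of all others  */
"DISPLAY PANEL("panelname")"               /* always the fresh copy*/
"LIBDEF ISPPLIB"                           /* tear it back down    */

A real tool would handle a pre-existing work dataset, multiple panels and skeletons, and its own cleanup; the point here is only the flow:  extract at start-up, write to a scratch library, LIBDEF it to the front.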

When you install the application, you install one element.  All the other panels and skeletons used by the main REXX routine do not get separately installed — there's no need to formally install them because they will be regenerated dynamically as and when needed.

If anyone out there can find a 'down-side' to any of this, I'd be very interested to hear it.  To me, it all looks like 'up-side'.

Code for extracting ISPF assets can be found on my REXX Tools page and a short example of how it's implemented at ALIST.