Tuesday, November 29, 2011

INTERPRET??  WDNNS INTERPRET!

(For those who don't know what 'wdnns' means, it is a reference to a misquote of a line from Treasure of the Sierra Madre and for the rest, you can look it up on the web.  The expansion is "We don' need no steenking ..."  'Nuf said.)

It is my firmly held conviction that INTERPRET is almost never required for ordinary REXX processing.  Les Koehler has come up with one or two scenarios where it is useful but they are so bizarre that 'esoteric' is a risible understatement.  Most uses of INTERPRET run something like this:

/*  'parm' looks like  ' name=Smith '   */
parse var parm  tag "=" tagval
interpret tag "=" tagval

I hope I got that right.  Since I never use INTERPRET and I don't have a mainframe to test this out, I have to guess as to what the code would actually look like.  If I were writing this code, I would use VALUE instead.

parse var parm tag '=' tagval
rc = Value(tag,tagval)
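
For the skeptical, here is a self-contained sketch you can run as-is (the parm value is invented):

/* REXX -- assign-by-name via VALUE; no INTERPRET required */
parm = ' name=Smith '
parse var parm  tag "=" tagval
rc = Value(Strip(tag),Strip(tagval))   /* sets variable NAME         */
say name                               /* displays: Smith            */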

I have also seen entire commands constructed piece-by-piece and then INTERPRETed to cause the command to be executed.  Unnecessary.

interpret "ISPEXEC TBCREATE" tblnm blah blah blah

I have never seen a case where the INTERPRET couldn't simply be converted to a straight execution:

"ISPEXEC TBCREATE" tblnm blah blah blah

Anyone with a true instance of a necessary INTERPRET is welcome to present the same here.  I'd love to be proven wrong.

Monday, November 28, 2011

When TRACE just isn't enough

Back on November 1, I exhorted all REXX programmers to always make it possible to turn TRACE on via a parameter rather than making a code change to the program itself.  There are many good reasons to support that position philosophically, but there's one that makes all others pale in comparison.  When you can turn TRACE on remotely, you can trap the entire output of your program as it runs in TRACE.

You set up an OUTTRAP, issue the command, and close the OUTTRAP when it finishes.  'Issuing the command' in this case means calling the routine in such a way that TRACE is on.  If you can't turn on TRACE with a parameter, you add an entire layer of complexity that doesn't need to be there.  If you're in tune with REXXSKEL, it's as simple as

rc = Outtrap("OUT.")
(TSOCMD) "(( TRACE R"
rc = Outtrap("OFF")

Now, the entire TRACE output is stored in stem 'out.'  (here's hoping you didn't blow out of your TSO region in the process...)  and all that's left is to allocate a dataset to hold it, then

"EXECIO" out.0 "DISKW TRAPOUT (STEM OUT. FINIS"

If you allocated the dataset large enough (I always ask for 'CYL,(5,5)') you now have a browseable dataset showing every last line of execution right up to the point of failure.  You'll thank me for this when your routine crashes trying to process record number 11,728.
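
Spelled out end-to-end, the whole trap might look like this.  Only a sketch: the exec name and its parms are invented, so is the output dataset name, and the allocation attributes are merely reasonable choices.

rc = Outtrap("OUT.")                   /* start capturing output     */
"MYEXEC Monday custfile4 (( TRACE R"   /* run the suspect in TRACE   */
rc = Outtrap("OFF")                    /* stop capturing             */
"ALLOC FI(TRAPOUT) DA(TRACE.OUT) NEW CATALOG",
      "SPACE(5,5) CYLINDERS RECFM(V B) LRECL(255)"
"EXECIO" out.0 "DISKW TRAPOUT (STEM OUT. FINIS"
"FREE  FI(TRAPOUT)"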

Over on my REXX page you'll find 'TRAPOUT' that implements this simple, straight-forward method for anyone whose code can be coaxed into running in TRACE :-)

Sunday, November 27, 2011

ISPF Tables — Copying

For any number of reasons, you may decide you need a copy (usually partial) of some ISPF table.  Possibly, the subset of the table you wish to operate on may not be easily parsed on the basis of the table's data content.  You will have to build a temporary table and load it from the original table, perhaps row-by-row.  Careful, now...

TBQUERY will tell you what an existing table looks like.  As soon as the table is opened, TBQUERY can be invoked to give a list of the keys and names.  From this, you can TBCREATE a replica table.

    "TBOPEN SAMPLE"
    "TBQUERY SAMPLE KEYS(knames) NAMES(nnames)"
    "TBCREATE NEWCOPY KEYS"knames "NAMES"nnames 

The column names returned from TBQUERY are enclosed within parentheses.  Unless you have other uses for the lists, it isn't even necessary to trim them off.  At this point, you are ready to write a duplicate or subset table, save for one easily overlooked aspect.

Remember 'extension variables'?  Unless you ask when reading each input row you will not know whether there are extension variables attached to the row, nor will you know their names.  Not knowing their names means not being able to specify which names should be attached to each output row.

   do forever
      "TBSKIP" intbl "SAVENAME(xvars)"  /* next row + its xvar names  */
      if rc <> 0 then leave             /* end of table               */
      "TBMOD " outtbl "SAVE"xvars       /* rewrite, xvars included    */
   end

Notice that the extension variables themselves did not have to be operated on in any way.  We did not have to assign them, for instance.  Because the row was read (the NOREAD parameter was not specified), all the row's variables (keys, names, and extension) were populated to the variable pool.  They are therefore immediately available to be loaded to the output table row.

In fact, because of this behavior (populating every variable regardless of type), it might be good practice never to read a table row without acquiring the list of extension variables.
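
Putting the pieces together, a bare-bones copy might run like this.  A sketch, not gospel: it assumes 'address ISPEXEC', and the guard against an empty SAVENAME list is my own caution.

    "TBOPEN   SAMPLE NOWRITE"
    "TBQUERY  SAMPLE KEYS(knames) NAMES(nnames)"
    "TBCREATE NEWCOPY KEYS"knames "NAMES"nnames "WRITE REPLACE"
    "TBTOP    SAMPLE"
    do forever
       "TBSKIP SAMPLE SAVENAME(xvars)" /* row + its xvar names       */
       if rc <> 0 then leave           /* end of table               */
       if xvars = "" | xvars = "()" then "TBADD NEWCOPY"
       else "TBADD NEWCOPY SAVE"xvars  /* xvars arrives in parens    */
    end
    "TBEND    SAMPLE"                  /* TBCLOSE NEWCOPY when done  */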

You can get fancier than this, of course.  For an idea of just how much fancier you can get, take a look at TBCOPY over on my REXX page.  TBCOPY doesn't use TBQUERY.  It relies on an external subroutine, TBLGEN, which itself relies on a special table in which is stored the DNA of all permanent tables, sort of a data dictionary for tables.  TBLGEN is much more elaborate than is strictly necessary, because it is intended to do all of an installation's TBCREATEs for permanent tables.  TBCOPY is likewise much more elaborate than is strictly necessary because it is intended to do all TBCOPYs for permanent tables within an installation.

Saturday, November 26, 2011

ISPF Tables — Extension Variables

Sometimes we just have to deal with tables we didn't design (and would have designed differently had we had the opportunity).  Sometimes we just need a tiny little tweak to make that table perfect (reg., U.S. Patent Pending).  Sometimes we want to let everyone know that this table row is unusual either for what it has or for what it hasn't.  Enter the extension variable.

Every table row in a given table has room for every KEY and NAME field originally specified when the table was TBCREATEd.  Flexible little devils that they are, table rows also have unlimited* space for other stuff, and every table row can be unique as to what other stuff it holds.  Observe:

    "TBCREATE BILOMAT KEYS(BMITEM) NAMES(BMDESCR BMONHAND BMECOQTY)"

our Bill-of-Materials table.  It is keyed by the item number (BMITEM), and has fields for a description, the number on hand in inventory, and the economic order quantity.  Wha...?  Where are the 'materials' that make up the 'bill'?  They're in extension variables, naturally.

    "TBVCLEAR BILOMAT"
    bmitem   = 'ER4590T'
    bmdescr  = 'Whatsis, large, left-handed, tan'
    bmonhand = qty.whatsis_l_lh_tan
    bmecoqty = ecocalc.whatsis_l_lh
    xvars    = 'DD114 DF33 DF34 DF36 DF38 EM2030'
    dd114    = 6
    df33     = 1
    df34     = 1 
    df36     = 1
    df38     = 2 
    em2030   = 4
    "TBADD BILOMAT SAVE("xvars")"

The row for 'ER4590T' in table BILOMAT has all four canonical fields filled, and there are six additional extension variables, one each for the six components of every ER4590T.  When the supplier for the EM2030 component advises you there will be an unavoidable 40% increase in the price of that component, you're going to want to know which of your products (BMITEMs) is going to be affected and by how much.  The catalog entries for each of those products will have to be adjusted to correctly show the new price of everything that relies on the EM2030, right?  Of course!

    "TBVCLEAR BILOMAT"
    em2030  = 0
    "TBSCAN BILOMAT ARGLIST(EM2030) CONDLIST(GT)"
    "TBDISPL BILOMAT PANEL(SRCHPART) AUTOSEL(NO)"

TBVCLEAR zaps all the 'regular' names mentioned in the definition of table BILOMAT, variable 'EM2030' is set to zero, TBSCAN specifies that we seek all rows where an extension variable 'EM2030' has a value greater than zero.  Panel SRCHPART should, if coded correctly, display a scrollable list of all rows (all BMITEM keys) associated with subassembly EM2030.
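
If you'd rather walk the matching rows in code than on a panel, repeated TBSCANs will step from hit to hit.  A sketch; note that the argument is re-primed on every pass, because reading a matched row overwrites EM2030 with that row's value.

    "TBVCLEAR BILOMAT"
    "TBTOP    BILOMAT"
    do forever
       em2030 = 0                      /* (re)prime the argument     */
       "TBSCAN BILOMAT ARGLIST(EM2030) CONDLIST(GT)"
       if rc <> 0 then leave           /* no more matching rows      */
       say bmitem "uses" em2030 "EM2030s per unit"
    end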

Zounds!  This is almost DB2!  Except that it didn't take us eight months and require the approval of two vice-presidents.  And if they decide later that they really do want to implement it in DB2, you already have the proof-of-concept ready to demonstrate.

 

(*) — You didn't think when I said 'unlimited' I actually meant 'without limit', did you?  Tsk.  Of course there are limits.  It's just that you're unlikely to hit them before your table is so large your system chokes trying to open it.  I've seen that happen with complex tables as small as 6,000 rows, but I have also seen 50,000-row tables that worked just fine, and I suspect they could have grown much larger without causing a problem.

Friday, November 25, 2011

ISPF Tables — Searching

There's no telling how many programmers have simply given up using ISPF tables because 'the @#$% table services never work the way you expect them to work!'  They actually work just fine.  It's the @#$% IBM manuals that don't make the operation of some very complex services as clear as it ought to be.  Whether this is simply a matter of writing style or whether IBM does so deliberately, the delicate interrelationship between TBVCLEAR, TBSARG, TBSCAN, and TBDISPL rarely gets to a programmer's conscious level.  The lack of examples showing TBVCLEAR, TBSARG, and TBDISPL working in concert also doesn't help.

The most important ISPF table service related to table searching is neither TBSARG nor TBSCAN;  it is TBVCLEAR.  The IBM manuals (almost) completely gloss over the fact that when searching an ISPF table, every element that has a value, whether it is part of the explicitly-named search argument or not, participates in setting the search argument.  Every element that has a value.  The service that makes sure elements do not have values is TBVCLEAR.

If you carefully read the manual text for TBSARG, there are subtle clues to this behavior as it relates to setting search/scan arguments:

A value of null for one of the dialog variables means that the corresponding table variable is not to be examined during the search.
and
To set up a search argument, set table variables in the function pool to nulls by using TBVCLEAR.
and
The search arguments are specified in dialog variables that correspond to columns in the table, and this determines the columns that take place (sic) in the search.

You could be forgiven for thinking that specifying this variable or that one in a TBSARG might have some bearing on which variables get used for setting the search argument.  Alas, it has less 'bearing' than might seem obvious.

Let us assume that we have a table, TBL, and that this table is defined as

    "TBCREATE  TBL  KEYS(KEY1 KEY2) NAMES(NAME1 NAME2 NAME3) WRITE REPLACE"

We have read a particular row, either by TBGET or TBSKIP, and now we wish to find all the rows in TBL with the same NAME2 value.  These will be displayed via TBDISPL for further processing.

    "TBSARG TBL NAMECOND(NAME2,EQ)"

is insufficient.  It is insufficient because KEY1, KEY2, NAME1, and NAME3 all have values and are not null.  In fact, a subsequent TBSCAN or TBDISPL will return a single row, the one that was just read.  That's the only row on the table with a matching KEY1 and KEY2.  In order to expand the search to the remainder of the table, we must zap all those variables:

   nval = name2     /* preserve the value */
   "TBVCLEAR TBL"
   name2 = nval     /* load NAME2 */
   "TBSARG TBL NAMECOND(NAME2,EQ)"

In fact, since "EQ" is the default and all table variables that are not null participate in the search (and NAME2 is the only non-null variable at this point), it should be sufficient in this case to cast the TBSARG as

    "TBSARG TBL"

Since we're working in REXX, some may be tempted to DROP the elements that are not to be searched-for.  A REXX 'drop' is not the same thing as setting a table variable to null via TBVCLEAR and the results may be disappointing.  Likewise, setting the variables to null via 'closed quotes'

    parse value "" with key1 key2 name1 name3 .

also won't get you what you need.  This will cause the search to be limited to those rows for which KEY1 and KEY2 are empty.  There can only be one of those and there may not be even that.
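
Strung together, here is the sequence that does work (a sketch; panel SHOWROWS is invented):

   "TBGET TBL"                  /* current row; every variable set   */
   nval = name2                 /* preserve the value                */
   "TBVCLEAR TBL"               /* null out all the table variables  */
   name2 = nval                 /* NAME2 is now the only non-null    */
   "TBSARG TBL"                 /* EQ is the default                 */
   do forever
      "TBDISPL TBL PANEL(SHOWROWS)"
      if rc > 4 then leave      /* END pressed, or no more rows      */
   end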

Before TBSARG, before TBSCAN (if that's how you're setting the search argument), TBVCLEAR.  Voila!

Wednesday, November 23, 2011

ISPF Tables — Closing

How do you like that?  I wrote this and never published it.  This was supposed to be the blog entry for 11/22, and it seems to have been forgotten.  Ah, well...  better late than never...

When you've finished doing whatever needed to be done with the table, it must be TBCLOSEd.  As with TBOPEN, a LIBDEF is needed, but not to ISPTLIB.  ISPTLIB is the input side of table processing.  ISPTABL is the output side of table processing.  Since data will be written to ISPTABL, ISPTABL cannot be a concatenation.  Only one dataset may be allocated to ISPTABL:

   if noupdt = 1 | sw.0tbl_chgd = 0 then "TBEND  " $tn$ /* don't save */
   else do
        "LIBDEF  ISPTABL  DATASET  ID("isptabl")  STACK"
        "TBCLOSE" $tn$
        "LIBDEF  ISPTABL"
        end 

If there have been no changes to the table, it is not necessary to TBCLOSE (which writes the table to disk).  A simple TBEND is sufficient.  Both TBEND and TBCLOSE flush the in-storage copy of the table, but TBEND doesn't write to disk first.  Since it doesn't write to disk, the LIBDEF for ISPTABL is not necessary for TBEND.

As with other LIBDEFs, this one only has to exist long enough to close the table.  After that, it can be (and should be) released immediately.

ISPF Tables — Defining and Building

Building an ISPF table is done by the TBCREATE service.  The table may or may not have a key or keys.  The table may or may not have one or more name (data) fields.  It will almost certainly have at least one of either, but 'almost' is not 'absolutely'.  In fact, an ISPF table may be TBCREATEd with no specifically-named fields at all...  but you had better know what in Hell you are doing in that case.  The uses for an 'extension-variables-only' table are breathtakingly esoteric.

    "TBCREATE" table_name "KEYS(" key_field_names ")",
                         "NAMES(" data_field_names ")" other_table_qualities
    "TBCREATE TEMP  [ KEYS() NAMES() ] WRITE REPLACE"       

To populate a new table row, each of the named fields, whether KEYS or NAMES, is assigned a value (or not — more later) and the TBADD service is called.  The row is created with all KEYS and NAMES fields, whether populated or not, plus all extension variables referenced by the SAVE parameter.  Extension variables are potentially unique to each row, and are only loaded to the table if specifically requested in the TBADD/TBMOD service call.

    "TBCREATE ZORK KEYS(INDEX OFFSET) NAMES(PLACE TYPE VALUE)"
    index   = index + 1
    /* OFFSET not populated */
    place   = left
    type    = normal
    value   = total
    oper    = userid()
    time    = Date("S") Time("N")
    "TBADD ZORK SAVE(OPER TIME)"

This added one row to table ZORK.  All fields except OFFSET have values, and there are two extension variables, OPER and TIME, which are not part of the canonical definition.  There is a field for OFFSET, but it is null, and here null has a different meaning than it does in REXX.  Here, it means: doesn't exist.

That's it.  That's all there is to populating a table.  Updates to the table thereafter can be done via TBADD (for new rows) or TBMOD (for replacement rows — or new rows if the key does not presently exist).
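
A later adjustment to such a row might look like this sketch (the key value is invented; the SAVENAME guard preserves whatever extension variables the row already carries):

    index  = 14  ;  offset = ""        /* key of the row to change   */
    "TBGET ZORK SAVENAME(xv)"          /* read row + its xvar names  */
    value  = value + 1                 /* adjust a (numeric) field   */
    if xv = "" | xv = "()" then "TBMOD ZORK"
    else "TBMOD ZORK SAVE"xv           /* keep the extension vars    */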

Monday, November 21, 2011

ISPF Tables — Opening

When using an ISPF table for input, the table library must be available via ISPTLIB.  The library containing the table to be used has to be either part of the original ISPTLIB allocation or LIBDEF'd into a preferential position.  If LIBDEF'd, the LIBDEF can reference either a DATASET or a LIBRARY.  If a LIBRARY, the library itself must have been previously ALLOCATEd under a unique DDname.  This latter method is fraught with hazard and I do not recommend it.  I recommend instead using the DATASET route as being much less likely to give you an ulcer:

   "LIBDEF  ISPTLIB  DATASET  ID("isptlib")  STACK"
   --- use the dataset ---
   "LIBDEF  ISPTLIB"

Once the dataset is in place, you must open it with TBOPEN...  unless it's already open...  unless it's not even present.  Well, heck, how do we know that?  TBSTATS:

   "LIBDEF  ISPTLIB  DATASET  ID("isptlib")  STACK"
   "TBSTATS" $tn$ "STATUS1(s1) STATUS2(s2)"
   if s1 > 1 then do
      say "Table" $tn$ "not available."
      zerrsm = "Table" $tn$ "not available."
      zerrlm = "Table" $tn$ "not found in the ISPTLIB library chain"
      sw.0error_found = "1"
      end; else,
   if s2 = 1 then do                   /* table is not open          */
      "TBOPEN "   $tn$   openmode.NOUPDT
      if rc > 4 then do
         sw.0error_found = 1
         zerrsm = "Table did not OPEN"
         zerrlm = "Table" $tn$ "cannot be opened due to prior",
                  "enqueues."
         "SETMSG  MSG(ISRZ002)"
         end
      end
   else "TBTOP" $tn$
   "LIBDEF  ISPTLIB"

STATUS1 generally addresses the data-content of the library.  STATUS2 references the OPEN-state of the named table, in this case, variable "$tn$".  A value of "1" in STATUS1 indicates that the named table exists in ISPTLIB.  A value greater than one signals trouble:  the table you've named isn't where it can be opened.  This certainly constitutes 'an error'.

A value of "1" in STATUS2 signals that the named table is not presently open or otherwise in use.  If this is not the case (the table is already open - STATUS2 returned a value greater than "1") there is no need to re-open the table.  ISPF will ignore such a request.  With the table presently in a not-open state, a TBOPEN can be issued for the table.  The example here shows that the mode of opening is determined by the current setting of variable "NOUPDT" (set in REXXSKEL's TOOLKIT_INIT section).  The table will be opened for WRITE if NOUPDT is off, and NOWRITE if it is on.
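
For reference, 'openmode.' is nothing more than a two-element array keyed by the 0-or-1 value of NOUPDT, set up something like this:

    openmode.0 = "WRITE"               /* NOUPDT off: allow updates  */
    openmode.1 = "NOWRITE"             /* NOUPDT on: browse only     */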

A table which is already open need only be TBTOPped to make it ready for our use.

As soon as the table is determined to be open, the LIBDEF for the ISPTLIB is no longer needed — a full copy of the table now exists within the user's region.  It's good practice to immediately drop the LIBDEF the instant it has served its purpose.

That's it.  That's all there is to opening an ISPF table.  From here on in until the table is TBCLOSEd or TBENDed, the complete contents are available for your use.  Unfortunately, that makes them unavailable for anyone else's use if the table was opened WRITE, so be a good user: get in, get done, get out.

If you as the programmer discover that some of your users are hogging the table, you may have to take drastic action to prevent that.  'Drastic action' may include TBCLOSEing the table after every update, TBENDing it after every inquiry, and consequently repeatedly TBOPENing it for each new cycle.  In between, you will have to maintain a record of what the original looked like when it was first fetched so that you'll know whether it was changed in the meantime.  Lots of CPU cycles wasted because of undisciplined users.  You may even wind up rewriting the app to eliminate ISPF...  but you will have had the experience of prototyping it (easily) in ISPF and will get the benefits Dr. Brooks promised when he said:

When planning a program, plan to do it twice.  You're going to do it twice;  you might as well plan for it.

Saturday, November 19, 2011

SYSVAR

I was going to blog about SYSVARS, a routine I wrote which lists all the SYStem VARiable information available in MVS, when I realized I didn't have an example of the output.  Bummer!  So I went on the web to see if there was anything like a manual that would describe the output in a way I could use.  I found, instead, (really... how could I forget?) David Alcock's MVS freeware page.  Not only does it have screenshots of the output, but his code is probably better than mine, anyway.

Why have SYSVAR at all since all it does is display stuff you have no control over?  When you're writing code and you need a quick reference, nothing beats being able to see the actual values that will be delivered by a SYSVAR call.  Alternatively, when your SYSVAR call isn't working exactly the way you thought it would, you can quickly check that what's being returned is (or isn't) what you thought.
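
A bare-bones demonstrator is almost embarrassingly small.  This sketch shows only a sample of the argument names SYSVAR accepts, not the full list:

/* REXX -- display a handful of SYSVAR values */
vlist = "SYSENV SYSICMD SYSISPF SYSNEST SYSPREF SYSPROC",
        "SYSUID SYSLTERM SYSWTERM"
do Words(vlist)
   parse var vlist  vname  vlist       /* isolate the next name      */
   say Left(vname,8) "==>" Sysvar(vname)
end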

Little utility/demonstrator programs like SYSVAR can save precious time when you don't have much of it to waste — that's 'all the time' for most of us.  You could do worse than to collect a toolbox-full of such things.

Friday, November 18, 2011

ISPF Tables

Most of our uses of ISPF tables are very simple.  Every once in a while, however, we might bump into a relatively-more-complex situation where the uses are not so simple.  Without fail, such times are opportunities for us to learn some very difficult and painful lessons.  It's much better to learn them on a blog than while sitting before a terminal pumping code in a vain attempt to make a deadline.  The most difficult lesson is this: reading a table row populates any REXX variables that have matching names (and creates them otherwise).  This is most dangerous when you have to work with more than one table simultaneously.  Let me illustrate:

   address ISPEXEC 
   "TBSKIP ORDERS" /* sets vendor, product, date, qty, invoice */
   "TBGET  INVENT" /* keyed by 'product', sets qty, descrip */ 

The value for 'qty' from 'ORDERS' has been overlaid by 'qty' from 'INVENT'.  To preserve the original 'qty', it has to be copied to a different variable name.  Code-writing time is the wrong place to have to suddenly rename variables wholesale.  That is why I recommend that every table variable name should have a two-character prefix unique to the particular table:

   address ISPEXEC 
   "TBSKIP ORDERS" /* sets orvendr, orprod, ordate, orqty, orinvc */
   "TBVCLEAR INVENT"
   inprod = orprod
   "TBGET  INVENT" /* keyed by 'inprod', sets inqty, indesc */ 

There is now 'orqty' and 'inqty' and they can be compared, perhaps to see whether it's time to reorder.

Problem solved?  Hardly.  ISPF tables have more tricks up their sleeve(s).  Behold... extension variables.

Extension variables are unique to each row, and only the programmer exercising care in their naming can prevent clashes like the one above.  As with ordinary table variables, extension variables also populate REXX variables when the row is read, and there is no warning that a simple TBSKIP may have clobbered other variables... unless the programmer is smart enough to ask whether any extension variables were transferred with the rest of the row:

   "TBSKIP ORDERS SAVENAME(xvars)"
   parse var xvars  "("  xvars  ")"

which lets us know that 'ornotes', 'ormgrnam' and 'ormgrtel' were set without warning.  If we were to update that row without also commanding that those three extension variables be included in the update, they would simply disappear, also without warning.  It's a quick way to shoot holes in table data.  That's not us, thankfully.  We're smart enough to:

   "TBMOD   ORDERS  SAVE(" xvars ")"

thus preserving the data in those extension variables.

As with everything else, a few small proof-of-concept routines can be an invaluable learning aid, and (of course) you need to test the logic to death.

Thursday, November 17, 2011

Associative arrays

In REXX, arrays do not have to be indexed by numbers.  They can be indexed by anything... I think.  Certainly they can be indexed by names or other non-numeric tokens, and they certainly don't have to have values for every possibility.  Les Koehler calls these 'content-addressable arrays' because it is data-content which serves as the index-value for the array.

It's common to see such arrays in matching routines where an array might be initialized to '0' and set to '1' for each of several dozen/hundred/thousand values as they are encountered in the course of processing data.  The value of the array-element tells us, in that case, whether or not we have encountered the index token before, or in the case of an actual counter, how many times we have encountered that token.  If you are, for instance, trying to eliminate duplicate data in a file and the duplicates might be scattered randomly throughout the data, this is just what the doctor ordered:

exists. = 0
do (over the entire input stream)
                  /* get the next token */
   if exists.token then iterate   /* discarding this input */
                  /* process this input data */
   exists.token = 1               /* discard all future */
end

This logic causes each token to be processed only the first time it is encountered.  All following occurrences are flushed.  If, instead, we wished to know how many of each token were present, the code would simply increment a counter each time the token were encountered.  However, in such a case, it will be nigh-on impossible to retrieve the counts at the end of processing unless the program has separately kept a list of the tokens it found, because the tokens could be anything... literally.
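
Here is one way to do that counting while keeping the list of tokens.  A sketch; it reads its tokens from the stack purely to stay self-contained:

/* REXX -- count occurrences of each token on the stack */
count. = 0
keys   = ""                      /* every distinct token, in order */
do queued()
   pull token .                  /* next token (PULL uppercases)   */
   if count.token = 0 then keys = keys token
   count.token = count.token + 1
end
do Words(keys)
   parse var keys  key  keys
   say Right(count.key,5)  key
end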

WARNING!  One aspect of content-addressable arrays has caused me endless grief because I continually forget the lesson so painfully learned the last time it happened.  To the extent those index-values are alphabetic, they are exclusively upper-case.  That is: if you set a value as

fullname.Tom  = "Tom Swift"

you must be careful when retrieving it that you specify the index-value as "TOM".  Stated another way, you cannot have separate values for 'fullname.Tom' and 'fullname.TOM'.  They occupy the same space.  Only the last set value will exist, and it will be indexed by 'TOM'.  It is a worthwhile 'exercise for the student' to write a small demonstration program to expose this behavior.

Wednesday, November 16, 2011

The Stack

The best part about REXX (after PARSE, of course) is the stack.  Knowing how to exploit the stack can save you mountains of trouble.  Once you get to appreciate the power of the stack, it opens doors to a world of capabilities.  Remember when we used to write line-mode dialogs: prompt then PULL the response?

say "Enter the dataset name:"
pull dsn

But you can't do that if there's already something on the stack.  You can't do that if there might be something on the stack.  Just in case, you should always phrase the above as

"NEWSTACK"
say "Enter the dataset name:"
pull dsn
"DELSTACK"

The same is true if you're dealing with a subroutine that returns its answer on the stack (my favorite mode of inter-module communication).  Let's say you have a subroutine that does some function for you and returns its result via the stack:

"NEWSTACK"
"CKSTATE" volid
pull state
if state = "MULT" then
   do queued()
      pull state   area
   parse value  volst.0+1   state  area    with,
                zz          volst.zz   1  volst.0  .
   end 
else volst.1 = state   "00"
"DELSTACK"

That is, if your subroutine returns more than one value, the first line of the returned stack is "MULT" and all the returned values follow.  If there's only one returned value, it's on line number 1.

The beauty of "NEWSTACK"/"DELSTACK" is that you can protect any material you've squirreled away against damage by other data that also needs to use the stack.  This is especially important when that 'other data' comes from the keyboard in response to a prompt.

Lastly, the use of "NEWSTACK"/"DELSTACK" guarantees that the current stack is empty.  If you want to be 100% sure you're getting a response from the keyboard and no place else, "NEWSTACK" is it.  When I want to put out multiple pages of data (HELP-text, for instance, or a screenful of output) and wait for the user's permission to go to the next page, I insert this line where I want to wait for the user to finish reading:

"NEWSTACK"; pull ; "CLEAR" ; "DELSTACK"

(Your installation may use some other routine to clear the screen.  Just substitute your local flavor.)

Saturday, November 12, 2011

How to empty a sequential dataset

If site security rules prevent you deleting and recreating a sequential dataset but you can write to it (in ACF2 terms, you are "READ(A) WRITE(A) ALLOC(P)"), you can still clear it of records.

"ALLOC FI($TMP) DA($TMP) SHR REU"
"EXECIO 0 DISKW $TMP (OPEN FINIS"
"FREE  FI($TMP)"

I'm about to run out of quick tips.  They'll either get longer and more elaborate (and thus lots less 'quick') or they're going to become fewer and not quite so 'daily'...

Thursday, November 10, 2011

Is it a Leap Year?

Well, really, the only reason we care is to know whether February has 28 days or 29 days, right?  Everything else is the same, because knowing that fact also tells us whether the year is 365 days long or 366.  So, the problem devolves to simply knowing the number of days in February, and that depends strictly upon the question 'what year is it?'.

   days.   = 31                        /* jan mar may jul aug oct dec*/
   days.04 = 30                        /* apr                        */
   days.06 = 30                        /* jun                        */
   days.09 = 30                        /* sep                        */
   days.11 = 30                        /* nov                        */
   days.02 = 28 + (ccyy//4=0) - (ccyy//100=0) + (ccyy//400=0)

Presuming you have a 4-digit year available for the calculation, the number of days in February is 28 plus...

One if the year is evenly divisible by 4  (ccyy//4 = 0),  unless...

The year is also evenly divisible by 100  (ccyy//100 = 0)  unless...

The year is also evenly divisible by 400  (ccyy//400 = 0)

Couldn't be simpler.  2000 was a leap year, and February had 28 + 1 - 1 + 1 (=29) days.  1900 was not a leap year;  it had 28 + 1 - 1 (=28) days.
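
A throwaway loop will confirm the formula against any years you care about:

   do ccyy = 1892 by 4 to 2012
      feb = 28 + (ccyy//4=0) - (ccyy//100=0) + (ccyy//400=0)
      say ccyy "-- February has" feb "days"
   end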

Wednesday, November 9, 2011

Invoking an ISPF-dependent routine from READY

So you've written this peachy routine that uses all sorts of ISPF facilities.  It runs great from any command line just about anywhere within ISPF, but when you try invoking it from READY, it crashes because there is not an ISPF environment available.  You could simply start ISPF whenever you want to run it, run it from the primary option panel, and flush ISPF when it's done...

...or you could rig it so that it will run from READY as easily as it does from within ISPF.  That way, you never have to remember which of your routines need ISPF and which don't.  The ones that do will pave their own road ahead of their need for it.

WARNING: the code that follows presumes you have all the capability provided by REXXSKEL.  For more information about REXXSKEL and what it does, pop on over to my REXX page and take a look at the write-up titled "How To REXXSKEL".  Of particular interest in that regard is the subroutine SWITCH and the flow of control from the mainline to TOOLKIT_INIT through LOCAL_PREINIT.

TOOLKIT_INIT in a REXXSKEL-enabled routine sets, among other things, "sw.inispf" which tells us whether or not an ISPF environment exists.  (Yes, I know it should be "sw.0inispf", but REXXSKEL was built a long, long time ago in a galaxy far away...)  When an ISPF environment does not exist, we can generate one via ISPSTART, but we have to be prepared to rip it down when we're done with it.  During the early mainline of your well-structured program, you will need to restart the routine in a newly-invoked ISPF environment and to halt the current execution as soon as that restarted execution completes:

if \sw.inispf  then do                 /* after TOOLKIT_INIT return  */
   arg line
   line = line "((  RESTART"           /* tell the next invocation   */
   "ISPSTART CMD("exec_name line")"    /* Invoke ISPF...             */
   exit 4                              /* ...then bail out           */
   end

When the restarted routine ends, it should end with a non-zero return code to skip processing for any generated LOG or LIST datasets written by the newly-invoked ISPF environment:

if sw.0Restarted then,                 /* just after DUMP_QUEUE      */
   exit 4

To determine whether or not this execution was restarted, LOCAL_PREINIT (part of initialization, but customized routine by routine) has checked the parameter string for the presence of the token "RESTART":

   sw.0Restarted = SWITCH("RESTART")

That's it.  That's all there is to it.  When you fire this routine from READY, it discovers that it is not in ISPF, restarts itself (via ISPSTART).  The restarted invocation ends non-zero, and returns to its calling point in the original, whereupon the original exits back to the operating system.

Tuesday, November 8, 2011

RXVSAM — VSAM-enabling REXX routines

RXVSAM is a package originated by Mark Winges.  Mark is out of the programming game these days, preferring to devote his time to music, but he left behind a very useful piece of software that you can have for free.  It can be found on file 268 of the CBT Tape.  You will have to assemble the RXVSAM load module from its supplied source, but it doesn't have to be located in an authorized library, so anyone can have a VSAM-enabled REXX routine, and let me assure you that is something very useful to have on occasion.  What follows will be a very 'surface' treatment of RXVSAM just to illustrate how powerful it is.  It will not be a recitation of the RXVSAM manual. 

Once you have RXVSAM available to you as a callable load module, you have to make it available to your REXX routine:

   "LIBDEF ISPLLIB DATASET ID(...the loadlib with RXVSAM...) STACK"

and you have to allocate and open the VSAM dataset(s) you are going to work with:

   "ALLOC FI($VS) DA("ocompds") SHR REU"
   rxv_rc = RXVSAM("OPENINPUT","$VS","KSDS")

A word of warning:  RXVSAM is very fussy about its parameters.  They must be exactly the right length.  Quoted literals are best whenever possible.  With the dataset allocated and open, VSAM I/O operations may commence:

   rxv_rc = RXVSAM('READ','$VS',component,'AD0104')
   if rxv_rc > 0 then do               /* ...oops                    */
      sw.0error_found = 1
      end
   else do
      parse var ad0104  appl 9 component 19 currver 29 bits 30
      ...
      end

Here, a READ command is issued against a KSDS dataset previously opened as INPUT ($VS) using key 'component'.  The result of the READ is returned in variable AD0104.  Note carefully what is quoted and what is not quoted.

When the VSAM dataset is no longer needed, it must be closed by RXVSAM and FREEd:

   rxv_rc = RXVSAM("CLOSE","$VS")
   "FREE  FI($VS)"

and, of course, you must (eventually) release the LIBDEF for the RXVSAM library:

   "LIBDEF ISPLLIB"

Virtually anything you can do to or with a VSAM dataset in COBOL or PL/I you can now do in REXX.  If, for instance, an ISPF table has grown beyond the limits of practicality for tables, the data can be configured as a VSAM KSDS.  To service an inquiry, read-with-key in the VSAM KSDS, and load a selection of the dataset's records to a temporary ISPF table.  All the facilities for handling ISPF tables will then be available for use on that subset of the VSAM file — the best of both worlds.
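
By way of illustration, loading such a subset using only the verbs shown above might go like this.  Strictly a sketch: the key list, the table name, and its columns are all invented, and the table services assume 'address ISPEXEC'.

   "TBCREATE WORKTBL NAMES(APPL COMPONENT CURRVER) NOWRITE REPLACE"
   do Words(keylist)                   /* keys gathered earlier      */
      parse var keylist  component  keylist
      rxv_rc = RXVSAM('READ','$VS',component,'AD0104')
      if rxv_rc > 0 then iterate       /* not found; skip it         */
      parse var ad0104  appl 9 component 19 currver 29 .
      "TBADD WORKTBL"                  /* one row per VSAM record    */
   end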

I have a large number of examples of operations using RXVSAM.  If you have questions, feel free to drop me a line.

Monday, November 7, 2011

Adding a row to a stem array

Les Koehler taught me this trick that I have put to wide use since the mid-90s.  Yes, adding a row to a stem array is a fairly simple process:

   zz     = log.0 + 1
   log.zz = msgtext
   log.0  = zz

That is an example of using "a bigger hammer" even though many REXX programmers will look at it and exclaim "There's nothing wrong with that!"  Indeed, there's nothing wrong with it...  except that it's slow.  If you're doing it several thousand times, you'll probably want something a little quicker.  In fact, after you've found that 'something quicker', you may well decide to always use the quicker method.  Those who don't follow REXXpertise will cock their heads to one side as if to ask "What in the world are you doing?"  You'll get a chance then to explain the process to them ;-)

   parse value  log.0+1  msgtext     with,
                zz      log.zz    1  log.0   .

Here we first construct a value-string composed of "log.0 + 1" and "msgtext".  This is parsed as "zz" and "log.zz".  Since "zz" is set first (from the value of "log.0+1"), "log.zz" now points at the next available slot.  The location pointer is reset to "1" and the parse continues, loading "log.0" with the incremented value.  The remainder of the line is discarded.  Once you understand the protocol, it makes perfect sense.

Sunday, November 6, 2011

(TSO) ALTLIB = (ISPF) LIBDEF

Back on November 2, I did a short introduction to ALTLIB, the TSO command that allows you to insert one or more datasets into a preferential position so that (in that case) REXX routines could be found for execution.  ISPF has an equivalent command that, if you write ISPF dialogs, and especially if you write them in REXX, can come in very handy.  Let's quickly review how ALTLIB works:

"ALTLIB   ACT APPLICATION(EXEC)  DA('........') "
if rc > 4 then do
   say "ALTLIB failed, RC="rc
   exit
   end

.... call one or more routines stored there ....

"ALTLIB DEACT APPLICATION(EXEC)"

This obviously takes place in 'address TSO'.  ALTLIB ACT names a dataset to be searched ahead of the normal search order for (in this case) SYSEXEC.  A non-zero return code indicates the ALTLIB didn't happen, and the process ends with either an 'exit' or a 'return'.  If the ALTLIB happened, the process can use the code stored there, and when the task is complete, an ALTLIB DEACT undoes the work of the earlier ALTLIB ACT.

ISPF has an almost-exactly-equal process called LIBDEF.

   "LIBDEF  ISPTLIB  DATASET  ID("isptlib")  STACK"
 .... use the contents of the ISPTLIB ....
   "LIBDEF  ISPTLIB"

This operation must take place in 'address ISPEXEC'.  The LIBDEF parms are (1) the DDName to be altered, (2) 'DATASET' or 'LIBRARY', and (3) the identity of that dataset or library.  There are other parameters which can be passed including, as shown here, 'STACK'.  Note that when using 'LIBRARY' the ID-portion is the DDName under which the library or libraries have been allocated;  that implies a prior ALLOC for those assets.  When using 'DATASET', no prior ALLOC need have been done.  Programmers who complain that some ISPF asset is tied up and can't be released are often referring to a 'LIBDEF ... LIBRARY' which has gotten tangled somehow.  Avoid entanglements;  use 'LIBDEF ... DATASET'.

When the asset is no longer needed, it is good practice (you should consider that required) to release the LIBDEF by specifying the active DDName alone (as shown).

Between the two LIBDEF operations, the asset is as much a part of the active DDName as if it had been part of the original DD during LOGON processing, and this is true whether the active DDName is ISPPLIB, ISPTLIB, ISPTABL, ISPMLIB, ISPSLIB, or any other.  Every ISPEXEC command will search the appropriate ISPxLIB allocations in reverse order, original allocations last, until it finds the asset it's looking for.

Now...  STACK.  If the original LIBDEF did not specify STACK, the following LIBDEF, the undoing LIBDEF, not only releases this LIBDEF, but all prior LIBDEFs whether done with STACK or not.  In other words, if process "A" LIBDEFs an ISPTLIB into place and then calls process "B" which LIBDEFs a further ISPTLIB into place before calling process "C" which LIBDEFs a third ISPTLIB into place, it is possible for process "C" to eliminate all three LIBDEFs before processes "A" and "B" can use them.  Processes "A" and "B" will likely fail because the proper asset is not present, although something worse could happen: they might find their data in the wrong asset libraries and use those, thus appearing to have operated correctly when, in fact, they did not.

Saturday, November 5, 2011

Masking and Matching

When you have to find things in a list that match things in another list, that's fairly easy.  You simply spin through one list (the shorter one) and for each element there, you see if "WordPos" delivers a match in the other list.  WORDPOS' processing is probably faster by orders of magnitude than anything you can code in REXX using the Bigger Hammer method.

What do you do when you have to find elements in one list that are merely like elements in another list?  Say you need to find all the membernames in a PDS that look like 'BRT**4'.  That's a different problem.  Yes, you could just go get a bigger hammer, or you can reach for the jeweler's screwdrivers.

REXX has two built-in functions, BITOR and BITAND, that can double-team that problem.  BITAND operates on two strings by ANDing them at the bit-level.  BITOR operates on two strings by ORing them at the bit-level.

BITAND returns a 1-bit in every bit position for which both strings have a 1-bit:  BITAND( '73'x , '27'x ) yields '23'x.  '0111 0011' & '0010 0111' = '0010 0011'.

BITOR returns a 1-bit in every bit position for which either string has a 1-bit:  BITOR( '15'x , '24'x ) yields '35'x.  '0001 0101' | '0010 0100' = '0011 0101'.

If there are characters for which it doesn't matter whether they match, those characters can be BITANDed with 'FF'x and BITORed with '00'x.  The result in each case is that the "I don't care" characters are returned intact by the BITAND/BITOR.  For the other characters, the ones for which it does matter whether they match, only an exact match will deliver the proper bit-pattern when they are both BITANDed and BITORed.  Like this:

      memmask = Strip(memmask,"T","*")
      maskl   = Length(memmask)
      lomask  = Translate(memmask, '00'x , "*")
      himask  = Translate(memmask, 'FF'x , "*")
      do Words(mbrlist)                /* each membername            */
         parse var mbrlist mbr mbrlist
         if BitAnd(himask,Left(mbr,maskl)) = ,
             BitOr(lomask,Left(mbr,maskl)) then,
            mbrlist = mbrlist mbr
      end                              /* words                      */

In this case, we have been given 'memmask' which may look like 'BRT**4'.  MASKL is '6'.  LOMASK gets '00'x characters in place of the asterisks.  HIMASK gets 'FF'x characters in place of the asterisks.  We spin through the list of membernames stored in MBRLIST.  For each word ("MBR") in that list, we BITAND it against the HIMASK and BITOR it against the LOMASK.  When the comparison is exactly equal, we have a match.  In this case, we merely attach the matched membername at the back of MBRLIST.  When we have iterated this loop "Words(mbrlist)" times (once for each original word), MBRLIST will contain only those membernames matching "BRT**4".
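
Wrapped in a few lines of scaffolding, you can watch it work.  The membernames here are invented:

      /* REXX -- demonstrate mask-matching via BITAND/BITOR          */
      memmask = "BRT**4"
      mbrlist = "BRTAB4 BRTZZ4 BRTZZ9 XYZQQ4 BRTQQ4"
      memmask = Strip(memmask,"T","*")
      maskl   = Length(memmask)
      lomask  = Translate(memmask, '00'x , "*")
      himask  = Translate(memmask, 'FF'x , "*")
      do Words(mbrlist)                /* each membername            */
         parse var mbrlist mbr mbrlist
         if BitAnd(himask,Left(mbr,maskl)) = ,
             BitOr(lomask,Left(mbr,maskl)) then,
            mbrlist = mbrlist mbr
      end                              /* words                      */
      say mbrlist                      /* BRTAB4 BRTZZ4 BRTQQ4       */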

As with all 'tips and tricks', this is merely one way to 'skin the cat'.

Friday, November 4, 2011

Boolean Switch-setting

'Switches' are an important logic element in many languages.  They allow the programmer to quickly and easily direct the flow of logic.  The classic IF-THEN-ELSE is even represented on flowcharts by the diamond-shaped 'switch box'.  The result is either true or false, '1' or '0'.  In fact, that classic IF-THEN-ELSE is very often used to set switches for later use when the values may have been changed by subsequent processing.  Switches often get set like this:

if A = B then sw.0equal = 1
         else sw.0equal = 0

There is another way, I think it's a better way, and it certainly cuts down on the keystroking, but it's not immediately obvious to newbies what's happening:

sw.0equal = A = B

Got it?  When A and B are equal — when 'A = B' is TRUE — sw.0equal gets set to '1', TRUE.  sw.0equal takes on the truth-value of the proposition 'A = B'.

My old friend, Ramon Faulk, taught me that trick back in the 1970s.  You have to be writing in a terse language like REXX or PL/I to be able to use it.  Good thing we're working in REXX, huh? ;-)

Note also that the content-addressable array "sw." is here suffixed with the glyph "0equal".  "0equal" is invalid as a variable name, so no maintainer can come along behind you, use "equal" as a variable, and dynamically redefine the meaning of "sw.equal".  I learned that one from Les Koehler, the first American to write a REXX program.  Unless you intend that content-addressable array suffix to be changeable, start it with any character that will prevent its use as a variable.

Thursday, November 3, 2011

Cylindrical arrays

I use the term "cylindrical array" for any collection of things you have to process as "18, 19, 1, 2 ..." or "3, 2, 1, 22, 21, ...";  that is, when you get to either the upper end or the lower end, you continue processing at the other end as if the array were wrapped around a cylinder.  There aren't many cases in ordinary data processing where this technique is called for, but when it is, it's always an opportunity to "use a bigger hammer".  (Think of pounding a square peg into a round hole: if it doesn't fit, you get a bigger hammer.)

Luckily, it is dead-bang-easy to implement something like this, and you'll see it wherever (for example) an application calls for being able to shift a panel right or left.  Now, ISPF facilities do not allow the shifting of panels right and left except for some very special IBM-originated panels as are found in Browse and Edit.  A good friend, Chris Lewis, introduced me to a technique for simulating the effect so that it looks like the panel is shifting sideways.  The logic was a little bit "bigger-hammerish" so I smoothed it down.

You start with the knowledge of how many items there are in the list (the item-count).  To advance to the next item in the array (or list, if you prefer) you MOD the current item-number with the item-count and add one:

next_item = (item // item_count) + 1

To retreat to the prior item, sum the item-number and item-count and subtract two, then MOD the result with the item-count and add one:

previous_item = ( (item + item_count - 2) // item_count ) + 1

Here's how it breaks down for a list-of-five:

   To go to the next style:  (style//stylect) + 1:
          1   2   3   4   5
          1   2   3   4   0    (mod stylect)
          2   3   4   5   1    (add one)
   To go to the prior style:  (style+stylect-2)//stylect + 1
          1   2   3   4   5
          6   7   8   9  10    (+stylect)
          4   5   6   7   8    ( -2 )
          4   0   1   2   3    ( //stylect)
          5   1   2   3   4    ( +1 )
and this even works when the number of items is one.

At some time in the future, I'll blog about ISPF tables and table-displays and this topic will certainly come up again.  If you'd like to see a preview of how it's used, pop on over to my REXX page and take a look at PKGREQS.  It may not make much sense at first, but that's just because this blog isn't long enough... yet.

Wednesday, November 2, 2011

FB80 v. VB255, and what to do about it.

Very many installations have their SYSPROC and SYSEXEC libraries set up as FB/80 datasets, and I've seen too many with blocksizes of 800.  Why in the world would any sysprog do such a thing?  The answer is that that's the way IBM delivers SYSMODs, and it's easy to simply load them up whichever way IBM delivered them, never mind how much disk space such a thing eats.  You did remember that IBM was in the hardware business, didn't you?

So, here you are, a lowly application programmer, and your opinion as to how stupid it is to waste all that space doesn't carry an awful lot of weight.  If you want to get your CLIST or REXX code into a 'supported' dataset, it had better be amenable to living in a fixed-block-80-byte world.  Or does it?

What do you do with all those REXX execs that you wrote 124 characters wide because you happened to be using a model 5 terminal emulator?  Rework them so there aren't any lines longer than 80 bytes?  Not me.  I'm not putting all that effort into anything so unnecessary.  I load up a driver and put the real code into a VB/255 library where it belongs.  Interested?  Yeah, I'll bet you are...

/* REXX    Driver
*/ arg argline
address TSO
parse source . . exec_name .

"ALTLIB   ACT APPLICATION(EXEC)  DA('........') "
if rc > 4 then do
   say "ALTLIB failed, RC="rc
   exit
   end

(exec_name)  argline

"ALTLIB DEACT APPLICATION(EXEC)"

exit 

Here's how it works:  The 'arg' statement takes a snapshot of all the original parms for later use.  The 'parse source' identifies the name of this exec.  Issue an ALTLIB for the dataset that holds the real code, the VB/255 copy.  If you get a non-zero return, the ALTLIB didn't work for some reason.  If it did, you can now call that copy, and you do so with the original parms.  When it finishes, it will return right to this spot and a second ALTLIB releases the VB/255 dataset.

Whatever is the name of the working code in the VB/255 dataset, plant this driver in the 'supported' FB/80 dataset with the exact same member name.  Now, when someone invokes that routine, they get the driver, the driver snags the list of parms, ALTLIBs the dataset containing the working code into a preferential position, and re-issues the original call.  That re-issuance catches the working code, passes the parms to it, it executes (or not, as the case may be), and finally returns to the driver for clean-up.

There's another secondary benefit to this, too.  That 'supported' dataset doesn't hold anything except a 'stub' pointing to the real code.  If your sysprogs keep it locked, and your real code is in that locked library, fixing it in an expeditious manner may not be very easy.  Remember, they're not being measured on how fast you shoot your bugs.  If what's locked up is just a pointer to the real code... well, you don't want to fix the stub, do you?

For environments that have multiple levels — development, testing, production, and possibly others — this technique is easily adjusted to locate the proper environment and execute from it, excluding the others.  In that case, called subroutines will be located from the same environment, if they exist there, so that all processes use elements from the same stage of the software development life cycle.  Neat, huh?

Tuesday, November 1, 2011

The Zenith of Tracing

Zenith used to make TVs, and they may still; I haven't shopped for a TV in years.  Back in The Good Old Days (tm), Zenith had an advertising slogan:  "The quality is built in, not bolted on".  The implication, for those who could read between the lines, is that quality can't be either forgotten at the factory or removed at a later date.  I have the same attitude when it comes to tracing and debugging.

How many times have you seen this in a REXX program:

/* REXX   --  a program to do some stuff.   */
/*  Trace("R")   */

How many times have you written something like that?

This is an invitation to maintainers to de-comment that line when they need to trace the flow of the program.  This is "bolting quality on", not "building it in", and there are a host of problems that it generates.

First, de-commenting that line constitutes "changing the program".  Yes, it's a small change and (I admit) unlikely to have any noticeable effects beyond the fact that the program now runs in trace-mode.  The (philosophical) point here is that you are not really running the program that failed, you're running a program that was changed after it failed.

Then there are other, less philosophical and much more serious, problems:  are you changing the production copy or something else?  If you're changing the production copy, will everyone running this "in production" suddenly find themselves accidentally (and surprisingly) producing reams of output?  If it's not the production copy, are you sure it was an exact duplicate of the version that failed?  Where is your test copy stored?  Is the concatenation sequence the same for your test copy as for the failing version?  Are you sure?  What about called subroutines?  Will your version pick up true copies of all necessary subroutines?  After all, the failure might have been in a subroutine and caused by bad data being passed back to the caller.

So many problems.  One solution.  Build the trace capability in, don't bolt it on.  As with oh-so-many-things, there is more than one way to skin this cat.  All that's necessary is to make it clear, and to do it consistently, and to design it so that the program doesn't have to be changed in order to start a TRACE.  Did I just say "parm"?  Yes, I believe I did.

/* REXX    -- a program to do some stuff
*/ arg argline
address TSO
arg parms "((" opts            /* 'opts' are rightward of a double-open-paren */
parse var opts "TRACE"   tv  . /* looking for "TRACE ?r" or similar           */
parse value tv "O" with tv .   /* if tv is empty, it becomes 'O'              */
rc     = trace(tv)             /* set whatever trace-value was loaded         */

You now can turn on TRACE with any form of TRACE without any change to the program.  You will be running the program that failed, not a copy of it.  Instead of calling it as

TSO BLAH43M for Monday using custfile4 crossfoot subtotals

you issue instead

TSO BLAH43M for Monday using custfile4 crossfoot subtotals (( trace ?r

You can, of course, get much fancier than this, but it's not strictly necessary unless you have an articulable reason.  I do get fancier than this (see REXXSKEL on my REXX page) because much of the parsing code I use is pre-packaged and thus doesn't have to be coded for every routine I write.  Whether you choose 'simple' or 'elaborate', it really doesn't matter.  Just no more de-commenting TRACE commands, okay?  Please?