A great many installations have their SYSPROC and SYSEXEC libraries set up as FB/80 datasets, and I've seen too many with blocksizes of 800. Why in the world would any sysprog do such a thing? The answer is that this is the way IBM delivers SYSMODs, and it's easy to simply load them up however IBM delivered them, never mind how much disk space that eats. You did remember that IBM was in the hardware business, didn't you?
So, here you are, a lowly application programmer, and your opinion as to how stupid it is to waste all that space doesn't carry an awful lot of weight. If you want to get your CLIST or REXX code into a 'supported' dataset, it had better be amenable to living in a fixed-block-80-byte world. Or does it?
What do you do with all those REXX execs that you wrote 124 characters wide because you happened to be using a model 5 terminal emulator? Rework them so there aren't any lines longer than 80 bytes? Not me. I'm not putting all that effort into anything so unnecessary. I load up a driver and put the real code into a VB/255 library where it belongs. Interested? Yeah, I'll bet you are...
/* REXX Driver */
arg argline                          /* snapshot the original parms  */
address TSO
parse source . . exec_name .         /* this exec's own member name  */
"ALTLIB ACT APPLICATION(EXEC) DA('........')"
if rc > 4 then do
   say "ALTLIB failed, RC="rc
   exit
end
(exec_name) argline                  /* re-issue the original call   */
"ALTLIB DEACT APPLICATION(EXEC)"
exit
Here's how it works: the 'arg' statement takes a snapshot of all the original parms for later use. The 'parse source' identifies the name of this exec. Then an ALTLIB is issued for the dataset that holds the real code, the VB/255 copy. If you get a return code above 4, the ALTLIB didn't work for some reason. If it did, you can now call that copy, and you do so with the original parms. When it finishes, control comes back right to this spot, and a second ALTLIB releases the VB/255 dataset.
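If you've never looked at what 'parse source' hands back under TSO/E, here's a tiny illustration; the member and dataset names below are made up, but the shape of the string is roughly what you can expect when an exec is run as a TSO command:

/* REXX - illustration only; WIDEDEMO and SYS1.SYSEXEC are made up   */
/* Run as a TSO command from member WIDEDEMO, PARSE SOURCE returns   */
/* something along the lines of:                                     */
/*   TSO COMMAND WIDEDEMO SYSEXEC SYS1.SYSEXEC WIDEDEMO TSO ...      */
parse source . . exec_name .    /* third word, so exec_name = 'WIDEDEMO' */
say "I am" exec_name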
Whatever the name of the working code is in the VB/255 dataset, plant this driver in the 'supported' FB/80 dataset under the exact same member name. Now, when someone invokes that routine, they get the driver; the driver snags the list of parms, ALTLIBs the dataset containing the working code into a preferential position, and re-issues the original call. That re-issuance catches the working code and passes the parms to it; it executes (or not, as the case may be) and finally returns to the driver for clean-up.
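To make that concrete, here's a rough sketch of the layout and of one reasonable way to carve out the wide library. Every dataset and member name here is invented; substitute your shop's naming standards, and let the system pick the blocksize if your disk geometry differs:

/* REXX - one-time setup sketch; all names here are made up          */
/* Layout after setup:                                               */
/*   SYS1.SYSEXEC   (FB/80, 'supported')  member REPORTX = driver    */
/*   HLQ.WIDE.EXEC  (VB/255, yours)       member REPORTX = real code */
address TSO
"ALLOCATE DATASET('HLQ.WIDE.EXEC') NEW CATALOG DSORG(PO) DIR(30)" ,
  "RECFM(V B) LRECL(255) BLKSIZE(27998) SPACE(15,15) TRACKS"
if rc = 0 then say "Wide library allocated; copy the real exec into it"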
There's a secondary benefit to this, too. That 'supported' dataset doesn't hold anything except a 'stub' pointing to the real code. If your sysprogs keep it locked, and your real code is in that locked library, fixing it in an expeditious manner may not be very easy. Remember, they're not being measured on how fast you shoot your bugs. If what's locked up is just a pointer to the real code... well, it's never the stub that needs fixing, is it?
For environments that have multiple levels — development, testing, production, and possibly others — this technique is easily adjusted to locate the proper environment and execute from it, excluding the others. In that case, called subroutines will be located from the same environment, if they exist there, so that all processes use elements from the same stage of the software development life cycle. Neat, huh?
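One simple-minded way to do that is to search a priority list of libraries and activate the first one that exists; real shops usually key the choice off something smarter, like the user's logon proc or an ISPF variable, but the skeleton is the same. The dataset names below are made up:

/* REXX - environment-aware driver sketch; dataset names are made up */
arg argline
address TSO
parse source . . exec_name .
libs = "'PROD.WIDE.EXEC' 'TEST.WIDE.EXEC' 'DEV.WIDE.EXEC'"
do i = 1 to words(libs)            /* take the first library found   */
   lib = word(libs, i)
   if sysdsn(lib) = 'OK' then leave
   lib = ''
end
if lib = '' then do
   say "No execution library found"
   exit
end
"ALTLIB ACT APPLICATION(EXEC) DA("lib")"
if rc > 4 then do
   say "ALTLIB failed, RC="rc
   exit
end
(exec_name) argline                /* the chosen environment answers */
"ALTLIB DEACT APPLICATION(EXEC)"
exit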