Let's face it: programs fail, and if they're like my programs, they'll fail at the most inopportune time, generally around 2 in the morning.
A thought just occurred to me: if you live or work in an environment where you are dependent on a compiler — a PL/I, COBOL, or Fortran environment, for instance — you need to be saving your compiler listings. "OMG! You can't be serious! That would take up mountains of disk space!" Yes, I'm serious. The mountains of disk space are peanuts compared to the time your programmers will spend at 2am recreating — if it's even possible — a compiler listing from before the last change to the compiler.
When some piece of compiled code FOOPs in the wee hours, the last thing you need is to have to locate the source and recompile it. The listing you get from doing that may not, in fact, be an accurate representation of the load module that failed. Comforting thought, no?
The only plausible solution is to save the compiler listing for the module you will eventually roll into production. Whenever you last produce a load module that is destined for production, that compiler listing must be saved to a place where, when you need it at 2am, you can find it quickly and easily. How you pull this off is likely dependent on the system programmers who are in charge of your infrastructure and the processes you use to get software into the production libraries. The greatest challenge here is to get those process-oriented folk to understand that the process isn't just one step, and that actions taken in the past have serious consequences for actions that will take place in the future. Apologies in advance to sysprogs this doesn't apply to, but my experience is that most of them are fundamentally incapable of thinking three moves ahead. The compiler listing must be saved now in case it is needed in the future. So...
If your protocol is that you always compile at 'acceptance test' time and move the resultant load module to production when it's accepted, you must capture the a/t compiler listing. If you re-compile before the move to production, that compiler listing is the one you need to preserve. You need a large compiler-listing dataset whose members will match one-to-one with the production load library. You will also want to capture the linkage editor's output, since that forms a critical mass of information about the load module, and therein lies a problem: the DCB of the LKED listing is potentially different from that of the compiler output. I got yer solution right here:
Go to the REXX Language Assn's free code repository and get COMBINE. In the JCL for doing the compile/link, insert an IKJEFT01 step after the LKED has run and execute COMBINE there specifying the DDs for the compiler output and the LKED output. COMBINE will combine (naturally) those two files and produce a single file that can be saved to your listing dataset. See the HELP text for COMBINE for the calling sequence, but it will generally be something like:
COMBINE COMPILE LKED

with //COMPILE DD referencing the compiler output and //LKED DD referencing the LKED output. Your combined output will be on the //$PRINT DD.
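The tail end of such a compile/link job might look something like this sketch. The DD names COMPILE, LKED, and $PRINT come from COMBINE's calling sequence; everything else — the dataset names, the listing PDS, and the REXX library where COMBINE lives — is an assumption you'll adjust for your shop's standards:

```jcl
//* ----- existing steps: compile listing and LKED listing are
//* ----- written to temporary datasets instead of SYSOUT
//COB     EXEC PGM=IGYCRCTL,...            COBOL compiler, for example
//SYSPRINT DD  DSN=&&CLIST,DISP=(,PASS),...
//LKED    EXEC PGM=IEWL,...
//SYSPRINT DD  DSN=&&LLIST,DISP=(,PASS),...
//* ----- new step: run COMBINE under TSO-in-batch (IKJEFT01)
//MERGE   EXEC PGM=IKJEFT01,COND=(4,LT)
//SYSEXEC  DD  DISP=SHR,DSN=YOUR.REXX.EXECS        <- where COMBINE lives
//COMPILE  DD  DISP=(OLD,DELETE),DSN=&&CLIST       compiler output
//LKED     DD  DISP=(OLD,DELETE),DSN=&&LLIST       LKED output
//$PRINT   DD  DISP=SHR,DSN=PROD.LISTINGS(MYPGM)   combined listing, one
//*                                                member per load module
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  %COMBINE COMPILE LKED
/*
```

The point of directing $PRINT straight at a member of the listing PDS is that the save happens as a side effect of the build itself — nobody has to remember to do it, which is the whole battle.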
Start today. Nothing moved to production before you start doing this will be included, so the sooner you start, the better.