Although the logic table codes and some of the features above have changed, most of the above I learned during the years I lived in Sacramento. By the end of that time, I had a thorough understanding of how to apply SAFR to problems. I had completed the pharmaceutical data warehouse, the cookie manufacturer, and the computer chip manufacturer projects, along with a year of work at the insurance company. I had been on the testing, technical support, project, sales, and documentation teams, and was starting to prove that I could contribute to the architecture team.
Yet the constant travel, the demands of the computer chip manufacturing job even though it was in town, three small children at home, and questions about the real prospects of the tool becoming more widely known all combined to make me question whether I was in the right job. After the chip manufacturing project was over, I returned to the insurance company, where I had the chance to work with Doug in the same location on a daily basis. I shared these questions with him quite openly on a two-hour ride to the airport in March 1998.
The response I received from Doug that day was consistent with the message I would receive over the next few years whenever I went to him for advice. Whenever I asked something like, “What do you think I should do?” I almost always received the answer, “It depends on what you want.” I know I wasn’t the only one who had this experience.
It wasn’t that he left me feeling like I hadn’t received good instruction by the end of the conversation. It was simply that he was very careful not to steer me in any one direction. In a sense, he often helped me uncover what I wanted simply by asking me questions and reacting to my responses. In this conversation, and in another two months later, he perceived my fatigue and was sympathetic to its causes. He didn’t have the answers for how to solve it, but he expressed confidence that I would find the right answers, whatever those answers turned out to be. I always left feeling that his friendship wasn’t dependent in any way on what I chose to do.1
That summer I also had a chance to sit down with Rick for a couple of hours and talk, and I expressed similar feelings to him. Rick, too, was very understanding and supportive of whatever decision I wanted to make. I was very fortunate to have two mentors with such concern for me as a person; it allowed me to navigate some difficult personal waters over the next couple of years.
Looking back at what happened over the next few months, though, I have wondered if Doug, with Rick’s support, wasn’t a bit less passive in his response than I perceived at the time. As I have noted, SAFR at the time had a character-based user interface that was completely functional, but not at all technologically hip. I think Doug perceived that to keep the interest of the younger team members and attract more customers, it was time to invest in the tool to change that. Two months later I found myself in Dallas for two days to work with Sherwood Daniels, a PricewaterhouseCoopers (PwC) consultant Doug had met on that project, to mock up a different kind of user interface.
A few weeks later, working at home, I decided to simply use Excel macros to mock up some of my own ideas. I did so over about two days and shared them with Doug. I was a bit surprised at how positive his reaction was. I don’t believe it was because he wanted me to feel good; rather, he found them very interesting and was impressed by how quickly something could be put together. I think that experience might have been misleading about how much effort building such a user interface would actually involve. He told me years later that another consultant had warned him it was a massive effort, and it did turn out to take years to complete.
A few weeks later, he sent me out to Philadelphia to meet Troy Deck, owner of Wingspan Technologies. One of the team members in Dallas had suggested Troy to Doug as someone who was in the business of developing the kind of user interface we were thinking of. I enjoyed the creativity.
This work went on to become part of a large investment bank SAFR project that created the UNIX version of SAFR, with the new user interface. Having put down this foundation, I concentrated on innovating with the scan engine.
In the fall of that year, I felt I finally had to make a decision about which way to go with my work. I was looking for some sort of external validation of the approach to take when one morning I met with Lloyd Jackson, an IT employee with a deep history at the insurance company, with whom I had been working for months. After the meeting Lloyd told me, “Kip, every project has its formal leadership and its informal leadership. You need to be the project leader on this, and we will continue to make progress. We need your leadership to get this done.” His words hit me right between the eyes. It felt like exactly what I should be doing.
My wife agreed to move again so I could eliminate the eight-hour one-way commute, which meant I could better balance the demands of the project with being with my family. For the next few years, I helped create SAFR processes that did what we thought were pretty incredible things, stretching the machine and software in some pretty amazing ways.
A couple of months after Doug created the SAFR feature called piping, I remember testing for the first time SAFR’s ability to run a massive set of views simultaneously for the allocation process. The team had constructed programs to generate SAFR views, and the way the rules were defined and interpreted resulted in 26 thousand views. I don’t remember for sure, but I suspect the record for views run in a single execution of SAFR up to that point was in the hundreds, perhaps the low thousands. I set up the JCL for the GVBMR95 execution while Doug stood behind me and watched. We ran the process. SAFR read the logic table and generated the machine code. It then attempted to execute the machine code, but we received an operating system error. Each view called a user exit multiple times to perform a special allocation function that SAFR didn’t do natively. The operating system would only allow 32,767 instances of a single program in a single address space; somewhere in the operating system a binary halfword held a counter of the number of instances of the program, and we had exceeded what that counter could hold. We were trying to create over 67,000 instances of the program.
To get around this, I made two additional copies of the user exit load module, giving each a unique name. I then edited the logic table so that fewer than 30,000 calls went to each of the three load modules. I kicked off the extract program again, and it ran to completion. Because the event file only had 5,000 records in it, it took only a moment to complete. I remember Doug commenting, “I can’t believe it worked.” To think that the arrays, the length of pointers, and the other elements of the program had allowed that many views to be generated amazed him.
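The arithmetic behind that limit and the workaround can be sketched in a few lines. This is purely an illustration I am adding (in Python, far from the mainframe); the 67,000 figure comes from my recollection above, and the per-copy split is derived from it:

```python
# The operating system's instance counter was a signed binary halfword:
# 16 bits, one of them the sign, so it tops out at 2**15 - 1 = 32,767.
HALFWORD_MAX = 2**15 - 1

instances_needed = 67_000  # user exit instances the run required

# A single load module's counter would have overflowed:
assert instances_needed > HALFWORD_MAX

# Workaround: copy the load module under unique names and spread the
# calls across the copies so each copy's counter stays within range.
copies = -(-instances_needed // HALFWORD_MAX)    # ceiling division
calls_per_copy = -(-instances_needed // copies)  # ceiling division

print(copies, calls_per_copy)  # 3 copies, roughly 22,334 calls each
assert calls_per_copy <= HALFWORD_MAX
```

Two copies would not have been enough (2 × 32,767 = 65,534 < 67,000), which is why three load modules, each taking fewer than 30,000 calls, did the job.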
I looked at the GVBMR95 control report and noted that the logic table of 3 million rows had caused the program to generate 3 megabytes of machine code. I remember going to the insurance company’s production load module library, which held the compiled versions of all the programs written over the years for all the mainframes in the company. The total size of all those programs was less than 3 megabytes. We had generated a lot of machine code.2
On another occasion, I don’t remember the specific set of views, but Doug and I were doing some performance tests. I was responsible for setting up the tests after the views had been created; Doug was perhaps finishing some feature or a SAFR exit, or in other cases just observing. I would adjust GVBMR95 parameters like READBUF and WRITEBUF, which specify how much memory is allocated to reading and writing. Sometimes in these performance tests I would go so far as to make sure the input files and output files were on separate disks and controllers so that we wouldn’t get I/O contention.3
I had set up the test, and for some reason Doug and I were back in our individual hotel rooms late in the evening when it was time to run it. That might have been because machine utilization was very low at that hour, so we wouldn’t be impacted by other processes. This was before cell phones, and we may have been in different hotels, so we dialed into the system over the hotel phone lines and talked to each other in 115-character messages using the TSO “send” command, a little like instant messaging. Doug could watch my job run at the same time I did. The system showed us things like how much CPU time the job was taking, how many I/Os it had requested, and so on. I remember the CPU utilization was shown as a row of asterisks, each asterisk representing something like 10% of the machine’s capacity in use. The machine was fairly powerful for its time, with 8 or 10 processors.
I kicked off the job and went to watch it process. I suspect I sent Doug a message saying, “Job nnnn is now running.” GVBMR95 starts pretty slowly as it loads the logic table, the VDP, and the reference files into memory and generates the machine code. I remember Doug once watching with me and telling me when it was generating the machine code, because all the I/O stops for a few seconds as it makes multiple passes through the logic table in memory. It then began parallel processing, reading multiple event files at the same time, which means multiple CPUs can be used as well.
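As a purely illustrative sketch of that idea (SAFR itself generated mainframe machine code; the Python below is only an analogy I am adding, with invented data), each event file is handed to its own worker process, so separate CPUs can be busy scanning at the same time:

```python
from concurrent.futures import ProcessPoolExecutor

def scan_event_file(records):
    """Stand-in for running the generated select/extract logic
    over the records of one event file."""
    return sum(1 for r in records if r % 2 == 0)  # pretend selection criterion

if __name__ == "__main__":
    # Three pretend "event files"; each goes to its own worker process,
    # so each can run on a separate CPU at the same time.
    event_files = [range(0, 10_000), range(10_000, 20_000), range(20_000, 30_000)]
    with ProcessPoolExecutor() as pool:
        counts = list(pool.map(scan_event_file, event_files))
    print(counts)  # one extract count per event file
```

The payoff, then as now, is that a single job is no longer limited to the throughput of one processor: with enough separate input files, the whole machine can be kept busy.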
Parallel processing was not a very common thing at the time. A typical non-parallel program, such as a COBOL batch job, might use 5 to 10% of the CPU at any one time. As GVBMR95 kicked in, the percentage of the CPU started to climb: 30, 60, 80%. These weren’t single spikes in processing, but sustained rates over a number of minutes. As it bounced between 80 and 90% and periodically spiked at nearly 100%, I got a message from Doug saying, “Look at it go!!!!!!!!!!!” For all of his experience, I am not sure Doug had ever seen a mainframe utilized like that by one process. GVBMR95 was consuming nearly an entire mainframe for perhaps 10 to 15 minutes.
When I was living in Sacramento, a couple of times during lunch I went over to the California State Railroad Museum near the office. The last exhibit is one of the largest steam engines ever built: cab-forward Southern Pacific engine No. 4294, built in 1944. It was built to haul trains over the incredibly steep Sierra Nevada mountains on the cross-continent journey. The cab-forward design was created because the route passed through very long wooden snow sheds, more like tunnels, built to keep the snow off the tracks. With the cab at the back of the engine, the smoke and ash from the boiler would fill the cab and nearly choke the engineers. The solution was to put the cab on the front of the engine, which must have given the engineers very dramatic views through the mountains.
It is a massive engine, and for its time was the height of complexity and power. As I sat and looked at it, I would think of those engineers, with their hands on the controls and the wind rushing through their hair as they controlled this combination of thousands of parts all working in harmony in rapid succession hour after hour after hour.
Through those years, at times I was privileged to be the engineer at the controls of this incredibly powerful data processing engine Doug had constructed. As I have contemplated that night and thought of the incredible harmony of billions of machine instructions like those engine parts, independent parallel processes utilizing an entire mainframe over a sustained period of time like the drive wheels of the engine, performing millions and billions of joins similar to pistons firing, and mile after mile of event data in front and extracted records in the rear, with a hint of virtual wind as they passed by, I am grateful to Doug, Rick, the team and my family for allowing me to experience it.