After three years of in-depth study and application of accounting and financial reporting, I was immersed back into the world of computers. My training picked up where my college training had ended. It began in Chicago for three weeks and then continued for nearly three months in Tampa, Florida.
Mainframes and Batch Processes
My study in Chicago began by reviewing the concepts we discussed back in Computers. Remember, we likened a computer to a meeting: the data written in a binder is like the data written on the hard disk; the whiteboard is like the computer’s memory; and the people in the meeting are the CPUs, or processors, using the data on the whiteboard. We suggested that the meeting’s language and procedures determined what had to be done and how fast it got done, and that there is a consistent pace at which the meeting works. In my training, I reviewed all these basic concepts by reading manuals.
We didn’t touch a computer in any way while in Chicago. Reading about large computers called mainframes in Chicago didn’t make them any more real to me than when I had read about them in college. Unlike personal computers, I couldn’t touch, see, or feel one. Perhaps if a PC is a “personal computer,” then anything else must be an “impersonal computer.” Although said in jest, a person could work on these machines for years and never actually see one. Over the years, they have become much less impressive to see. Whereas they used to fill up rooms, they now look more like a six-foot-tall, black, double-door, modern-style refrigerator. In some cases, they’re half that size.
As I learned about these machines, I gained a sense of connection with those heroic stories about the advent of computing from my first computer class. I gained a sense of why certain programs are still written in 80-character records: because in 1928, IBM patented 80-character punch cards.1 It isn’t too much to imagine these same-sized records still being used 100 years after their invention. Thinking of each journal entry line as being written on an individual card so that a computer can read it might help to envision the earliest computer programs.
Data input for the earliest business systems was not done using a keyboard attached to a terminal; it was more like a typewriter that punched holes in cards. The first programs written read these cards in “batches”; thus the programs were called batch programs.
Because there are few such processes on a personal computer, most people today have little experience with batch programs. An antivirus scan, malware scan, or hard disk defragmentation is perhaps the closest thing. When an antivirus scan starts, it reads the first file from the hard disk and checks the file for computer viruses. It then reads the next file and repeats the check until all the files have been read.
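The scan loop described above can be sketched in a few lines of Python. This is a minimal illustration, not real antivirus logic; the byte "signature" being searched for and the directory layout are invented for the example.

```python
import os

def scan_directory(path, signature=b"BADBYTES"):
    """Read each file in turn and check it, until every file has been read.

    The signature is a placeholder byte pattern standing in for a real
    virus definition.
    """
    flagged = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            full_path = os.path.join(root, name)
            with open(full_path, "rb") as f:
                if signature in f.read():  # the per-file check
                    flagged.append(full_path)
    return flagged  # the scan ends only after all files were examined
```

Like the batch programs it resembles, the work this loop does grows with the number of files on the disk, not with anything the user is doing at the keyboard.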
The early batch programs read all the journal entries for a day in one stack of cards and all the ledger balances produced yesterday in another stack, and they created a new stack of cards with today’s updated ledger balances. Although the stacks of cards were replaced long ago by files on disk, the basic processing paradigm survives in tens of thousands, if not more, processes in businesses around the world. Mainframe computers were designed from the ground up to support these types of processes and have a great many tools for them. Batch programs, by and large, consume computing resources based upon the number of records read and written.
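The card-stack paradigm described above can be sketched as a short Python program: one input "stack" of yesterday's balances, one of today's journal entries, and one output of updated balances. The account names and amounts are made up purely for illustration.

```python
def run_batch(yesterday_balances, todays_journal_entries):
    """Apply one day's journal entries to the prior day's ledger balances.

    yesterday_balances: dict of account name -> balance (the old stack)
    todays_journal_entries: list of (account, amount) pairs (the card stack)
    Returns the new balances (today's output stack).
    """
    balances = dict(yesterday_balances)          # copy yesterday's stack
    for account, amount in todays_journal_entries:  # read one "card" at a time
        balances[account] = balances.get(account, 0) + amount
    return balances

updated = run_batch(
    {"Cash": 1000, "Sales": -1000},
    [("Cash", 250), ("Sales", -250)],
)
print(updated)  # {'Cash': 1250, 'Sales': -1250}
```

Note that the work done, and so the resources consumed, scales with the number of journal-entry "cards" read, exactly as the text says.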
Servers and Online Processes
Mainframe computers2 didn’t adapt very quickly to the advent of the graphical user interfaces we have on personal computers. This allowed another type of computer, called a server, to develop and ultimately to dominate the Internet. Servers often run an operating system called UNIX.
These computers were developed to serve on-line computing. On-line programs are very different in structure from batch programs. In my training class I coded programs in CICS, another sixties-era technology, but the structure still holds true for today’s Internet-enabled programs. Every time the program is invoked, it is executed from top to bottom. In other words, there is very little looping or repeating of steps in this type of program. Each time the user hits the enter key or clicks the mouse, the program starts at the top, determines where the user clicked, and then does those things the user asked for. It can potentially use all the data entered on the screen, and it performs all its functions against that data. It might store the data in a file if it is valid, or send it back to the screen with an error message if it is not. If in error, the program then stops and waits for the user to press enter or click the mouse again. If the input was OK, the data is stored to disk somewhere, and a new program—and thus a new screen—is likely invoked and shown to the user.
So although the program could do quite a bit of work each time a user hits enter, it is unlikely to run as long as a batch program. Resource consumption—in other words, computer usage—for this type of computing is driven more by the number of users hitting enter or clicking a mouse than by the number of records.
Online systems are more about data entry and creating journals; batch systems are more about processing those journals and posting them to the ledger. But what of reporting? Financial systems need to produce reports. How has this changed over time? To understand this, we need to attend one of Rick’s lectures on business system IT architecture.