In the fall of 1993, after I had been on the test team for a number of months and had learned the system, Rick asked if I would go up to Oregon to help a customer understand how the upgraded software could be applied to their existing application. This was one of my first solo consulting assignments, although I remember Jay, who lived nearby, sitting in on my sessions. Jay was the fatherly figure who helped make sure I didn’t drive the car off the road on my first trip.
I remember their reaction when they learned that in addition to creating reports, the latest version had the ability to write out files as well. This turned the product from a reporting tool to a potential ETL or processing tool.
To this point in the SAFR method, we have assumed that all business events needed for producing reports have been generated by other systems and presented to the reporting environment. As noted in Operational Versus Informational, that may not be the case. Certain SAFR projects have had exactly those characteristics; the Cookie Manufacturer project, which resulted in the steps outlined in the prior three chapters, did. All data needed for reporting was generated inside the ERP environment, and the extracted business events were used purely in a reporting process managed by SAFR.
Other projects require that additional business events be generated. Most often these business events are generated through the use of rules or parameters; with the exception of the adjustments discussed below, they do not require a person to capture the business event in any way. The work we have done thus far helps to determine what events need to be generated.
This work really doesn’t have to wait until all the prior steps have been completed. In fact, the first deficiencies in the business events available for reporting likely showed up when we attempted to find more detailed events. Turning detailed transactions into journal entries often includes processes that happen after the transactions from the operational systems have been summarized. Thus, in attempting to balance the detailed events to the journal entries, we would have found that only portions of the file could be balanced: business events for some accounts would likely be completely missing.
So the work of generating additional business events begins by determining what events trigger additional events. In the case of the detailed operational file missing particular offsetting entries, the triggering event is probably one of the transactions in the source system.
SAFR could be used to generate these events by defining a file format output view, filtering for the specific transactions, and then using column filtering and column constants or lookups to form a new output record. When these new records are combined with the input records into one file, fully formed journal entries might be created.
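As a rough illustration of that pattern, here is a small Python sketch, not SAFR syntax; the transaction codes, field names, and the offset-account lookup table are all invented for the example:

```python
OFFSET_ACCOUNT = {           # lookup: transaction code -> offset account
    "LOAN_ORIG": "2100",     # hypothetical cash disbursement account
    "FEE": "4500",           # hypothetical fee income account
}

def generate_offsets(transactions):
    """Yield an offsetting entry for each selected transaction."""
    for txn in transactions:
        if txn["code"] not in OFFSET_ACCOUNT:        # record filtering
            continue
        yield {
            "account": OFFSET_ACCOUNT[txn["code"]],  # lookup
            "amount": -txn["amount"],                # reverse the sign
            "code": txn["code"],
            "source": "GENERATED",                   # column constant
        }

txns = [{"code": "LOAN_ORIG", "account": "1400", "amount": 1000.00}]
# Combining the generated offsets with the input records yields
# fully formed journal entries that net to zero.
journal = txns + list(generate_offsets(txns))
assert sum(r["amount"] for r in journal) == 0.0
```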
Balance Based Processes
Remember that some processes, such as currency revaluation, depend upon events that may happen so frequently that using an arbitrary cutoff for generating the events may be more appropriate. These types of processes require greater attention to issues of volume and scale because they demand a balance as of a point in time, and that requires accessing history. This is in contrast to generating offsets for journal entries, which only requires reading the set of dependent business events.
Producing balances as of a point in time depends upon the same decisions and analysis we performed in assessing reporting needs, estimating the data basis, and defining summary structures. Instead of reporting needs, we concentrate on processing needs. All the same rules apply, plus a few additional ones. In addition to analyzing transaction volumes, data structure complexity, and the number of reports and level of detail, we have to be cognizant of (1) the dependencies between processes and (2) the outputs from those processes.
The order of operations must be understood for processes to work effectively. These processes generate new business events, which may be the inputs or triggers for other business events. For example, currency translation, the process of converting a balance or transaction into another currency for reporting, must be done before that balance can be revalued. In other words, going back to our example in Operational Versus Informational where we had a UK bank account that needed to be included in our US dollar denominated balance sheet, we have to convert the pounds into dollars before we can include the account in our balance sheet. Reflecting the changes in exchange rates on the income statement happens when we produce the next balance sheet. So in analyzing processes which need to generate new business events, determine what the triggering event, often called the driver file in batch programming, is for each.
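A tiny numeric sketch of that ordering may help; the exchange rates and amounts are invented:

```python
# Hypothetical rates: translation happens first, then the change in
# rates since the last balance sheet flows to the income statement.
gbp_balance = 1000.00    # UK account balance in pounds
prior_rate = 1.50        # rate used on the last balance sheet
current_rate = 1.60      # rate as of this balance sheet date

translated = gbp_balance * current_rate              # USD balance sheet amount
revaluation = translated - gbp_balance * prior_rate  # income statement impact

assert abs(translated - 1600.00) < 1e-9
assert abs(revaluation - 100.00) < 1e-9   # gain from the rate movement
```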
Remember that all balances are composed of individual movements; this can open up processing opportunities. For example, suppose early in the morning the daily process starts with the summary file of balances from the prior day, but those balances are incomplete: they have not yet been updated with yesterday’s transactions. There may be little reason to summarize all of the transactions from the source system into the summary file before generating the new events based upon those balances. Mathematically, generating two business events, one based upon the balance as of yesterday and another based upon only that portion of the balance that was updated yesterday (yesterday’s transactions, or movement), produces the same result. This means all of the business events, both those received from the source system and those generated in the finance system processes, can be applied or posted to the summary structure at the same time. We’ll show how this is done in Common Key Data Buffering.
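The equivalence is simple linearity. A small Python sketch with invented numbers, using a hypothetical daily interest accrual to stand in for the balance-based event:

```python
def accrue_interest(amount, daily_rate=0.0001):
    """Hypothetical balance-based event: one day's interest."""
    return amount * daily_rate

balance_file = 10_000.00     # summary balance, not yet updated
yesterday_movement = 500.00  # yesterday's transactions, not yet posted

# Option 1: post yesterday's transactions first, then generate one
# event from the fully updated balance.
event_after_posting = accrue_interest(balance_file + yesterday_movement)

# Option 2: generate two events without waiting for the post.
two_events = accrue_interest(balance_file) + accrue_interest(yesterday_movement)

# Identical up to float rounding, so posting can be deferred and all
# events applied to the summary structure at the same time.
assert abs(event_after_posting - two_events) < 1e-9
```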
The standard SAFR View process puts out one record for each record selected in record filtering. That is adequate for producing reports from fully formed business events in an event repository, but it may not be adequate for business event creation processes: at times a single input record may need to generate multiple output records.
For example, originating a loan may result in a single transaction from the loan origination system. This record may reflect simply the increase in bank assets from making the loan. However, to make a fully formed journal entry, an offset may need to be created to reflect the disbursement of funds for the loan, typically recorded in a different system.
The original entry, the first row above, came from the source system. The second and third records are offsets; these are the amounts required by IFRS standards. These three rows are then reversed for the USGAAP view of the data, and the USGAAP entries are made. At report time, if someone wants an IFRS view of the data, they select only the IFRS rows. If they want a USGAAP view, they select both the IFRS and USGAAP rows.
Accomplishing this with SAFR could be done by making a view for each type of output record that needs to be created. In other words, there could be a view to reformat the original input transaction from the source system into the IFRS Asset 1 entry, another reading that same event file and creating the IFRS Contra Asset 1 record, another for the IFRS Asset 2 entry, and so on. In total there would be eight views; thus one record into the extract engine becomes eight records out.
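A hedged Python sketch of the one-view-per-output approach follows; the account names echo the example above, but the field names, signs, and row layouts are invented:

```python
def make_view(ledger, account, sign):
    """Build one 'view': reformat the input into a single journal row."""
    def view(txn):
        return {
            "ledger": ledger,                       # IFRS or USGAAP
            "account": account,                     # column constant
            "amount": sign * txn["amount"],
            "business_unit": txn["business_unit"],  # this logic is
            "cost_center": txn["cost_center"],      # repeated per view
        }
    return view

VIEWS = [
    make_view("IFRS", "Asset 1", +1),
    make_view("IFRS", "Contra Asset 1", -1),
    make_view("IFRS", "Asset 2", +1),
    make_view("IFRS", "Contra Asset 2", -1),
    make_view("USGAAP", "Asset 1", -1),
    make_view("USGAAP", "Contra Asset 1", +1),
    make_view("USGAAP", "Asset 2", -1),
    make_view("USGAAP", "Contra Asset 2", +1),
]

txn = {"amount": 1000.00, "business_unit": "BU1", "cost_center": "CC9"}
output = [view(txn) for view in VIEWS]   # one record in, eight out
assert len(output) == 8
```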
The downside of this approach is that a great deal of logic may be replicated in each view. For example, the logic to populate the business unit and cost center in each of the views would likely be the same. This becomes a maintenance problem.
To overcome this, SAFR has a special logic text construct called the WRITE verb described in Piping, Tokens, and the Write Verb. This instructs the Scan Engine to write the record formed so far by the view to a specified file. Each view can contain many write statements. Thus with this verb, a single input record to a view can become multiple output records.
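Conceptually, the WRITE verb lets a single view compute its shared columns once and still emit several records. A Python analogue, with invented file names and layouts, might look like this:

```python
def journal_view(txn, write):
    """One 'view' issuing several writes per input record."""
    common = {                      # shared column logic lives in one place
        "business_unit": txn["business_unit"],
        "cost_center": txn["cost_center"],
    }
    for ledger, sign in (("IFRS", +1), ("USGAAP", -1)):
        write("asset_file", {**common, "ledger": ledger,
                             "amount": sign * txn["amount"]})
        write("contra_file", {**common, "ledger": ledger,
                              "amount": -sign * txn["amount"]})

files = {}
def write(name, record):            # stand-in for writing to a named file
    files.setdefault(name, []).append(record)

journal_view({"amount": 250.00, "business_unit": "BU1",
              "cost_center": "CC9"}, write)
assert sum(len(v) for v in files.values()) == 4  # one record in, four out
```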
SAFR has been used in other ways to solve these types of problems. To understand those approaches, we should first discuss the structure of the rules.
The word rule in this sense means “a determinate method for performing a mathematical operation and obtaining a certain result.”1 In my mind, it is difficult to distinguish between any program logic and the rules or parameters those programs use; they are all determinate methods of obtaining a certain result. However, some generalizations can be made about what are called processing rules.
Rules tend to change more frequently than programs do. They also tend to be maintained by non-IT people. In a sense, they are those parts of programs that the end users want to control without having the entire burden of being programmers.
As SAFR has been used as a processing engine, various means have been constructed to allow end users to control their rules. For simple processes users can change values in the views themselves. However, for more complex processes where the views become something closer to programs, users want control without all the programming responsibilities.
Using reference files updated by users is a very common way of accomplishing this. Another, more sophisticated approach is to build screens for the user to define their rules, and then create a custom SAFR process which generates SAFR views. For example, if a screen were created that allowed the user to maintain the list of journal entries that must be generated, as in the example above, the eight required views could be created by a program that runs prior to the SAFR Scan Engine. The Scan Engine Select Phase API accepts XML in a SAFR defined schema, allowing custom workbenches to create the eight required views.
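A sketch of that idea, with an invented rule layout and generated structure; the real Select Phase API expects XML in a SAFR-defined schema, which is not reproduced here:

```python
# Rules as users might maintain them through a screen or reference file.
rules = [
    {"ledger": "IFRS", "account": "Asset 1", "sign": +1},
    {"ledger": "IFRS", "account": "Contra Asset 1", "sign": -1},
]

def generate_views(rules):
    """Pre-process: turn each user rule into one view definition."""
    views = []
    for i, rule in enumerate(rules, start=1):
        views.append({
            "view_id": f"GEN{i:04d}",              # generated identifier
            "filter": "code == 'LOAN_ORIG'",       # hypothetical record filter
            "columns": {"ledger": rule["ledger"],
                        "account": rule["account"],
                        "amount": f"{rule['sign']:+d} * amount"},
        })
    return views

views = generate_views(rules)
assert len(views) == len(rules)   # one generated view per rule
```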
Errors and Adjustments
Reporting systems that require the highest degree of accuracy must allow errors to be corrected and adjustments to be made. Adjustments might be made to correct errors, but also to capture business events which are not automated. This portion of the system is a transaction processing application, and typically requires all that implies. Its outputs, however, should be presented to the architecture as if they were simply another type of source system; the business events should be captured and processed as if they came from a completely independent system.
Insurance Company Allocation Engine Project Results
The following is a sample of a process created using SAFR as the processing engine to emulate ERP financial cost allocations. I was on a call with the ERP vendor when the client asked whether their system was capable of generating 10 million output transactions in the space of a couple of hours. There were quite a few caveats attached to the answer. The client projected that their volumes could actually be 10 times that size, so they agreed to build a SAFR process that produced the results in a much shorter time.
The team created custom programs which generate over 6,000 SAFR views based upon over 7,000 allocation rules maintained in the ERP package. SAFR executes these views to scan the Financial ODS, selecting records eligible for allocation. It then allocates these costs through four allocation layers, such as products and geographical units.
The following chart depicts the steps of this process.
- Standard ERP rules define all allocation processes
- Basis for high- and low-level allocations provided by SAFR Statistical ODSs
- Standard ERP allocations retrieve data from and return results to journal tables
- High-level results are extracted to SAFR Financial ODS
- ERP allocation rules are used to generate SAFR processes
- Low-level allocations allocate high-level results and return detailed low-level results
- Summarized results are returned to ERP journal tables
During the first year of implementation, the year end allocation process read over 50 million records, selecting nearly 3 million that were eligible for allocation. These 3 million records were exploded into 186 million allocation results. The process ran in 7½ hours of wall clock time and 28 hours of CPU time. SAFR generated approximately 210,000 records per minute, compared to 13,500 per minute for the ERP package. The solution satisfied every business requirement in an acceptable timeframe, and the business users were able to have their data represented exactly as they wanted.2
I have outlined only the basic considerations for defining processes. Performing these steps requires a significant amount of time. However, care should be taken that this prospective work, looking ahead to what the new system should do, does not overshadow the more daunting, difficult, but critical steps of finding business events and building reference data. If those steps continue apace, the definition of the processes will be better informed.
Similarly, data modeling of the repository, as opposed to the summary structures, is necessary. That is the next step in our process, beginning with a consideration of the joins that might be involved.