On August 3rd, 2006, Jeff Wolfers, a new project executive, sat down with me, said I seemed to be central to a lot of what was going on with the project, and asked how I was doing. In response, I asked him where he had been four months earlier, in the midst of the discussions about the accounting rules which had really taken a toll on me. He laughed. I noted that his question and the subsequent discussion did lift a certain amount of weight off my shoulders, giving me a feeling that something on the project really could change.

This temporary relief quickly wore off a week later when Rick e-mailed me that Jeff had decided to put me at the center of fixing the project. Jeff told me I could do whatever needed to be done to make the system work, but I had to make it happen quickly. As a postscript to his e-mail, Rick added “God help you.” I sensed I would need that help.

This began a significant transformation on the project, with a very serious focus on getting it done. I was overwhelmed by the amount of work there was to do. The team was reorganized and attempted to construct every part of the system in parallel.

Surrounding the mainline posting process is a set of support functions, including reference data, adjustment, and reconciliation processes. Working on these areas was very frustrating at times for the teams involved because we were knowingly breaking one of Rick’s fundamental principles: do things in the right order.

Error Handling

For example, a team was deployed to build a system for error handling. The mainline flow of data through the system wasn’t yet determined. The team responsible for error handling would schedule meetings with the mainline flow team and open every meeting with the same question: what errors do we need to handle? The mainline team would respond by saying they didn’t know yet.

The number and types of errors that can be imagined are nearly unlimited. However, the number of errors that actually occur is much smaller; certain data conditions simply do not happen. Thus establishing mainline processes before attacking support processes is critical. Learning from preliminary testing which error conditions should be handled is the most efficient way of determining requirements for error handling.

In the end, the system employed a few simple principles for handling errors:

  1. The vast majority of error conditions that can be handled in an automated way occur in the front end of the system, in either the Technical Transform Layer or the Accounting Rules Engine (ARE). At the end of the ARE, data must be in a state that can be posted into the AL and GL. Detecting and preventing errors before posting is much easier than after posting. Once data is posted into the AL and GL, clean-up processes are much more involved.
  2. Large numbers of errors typically indicate the wrong file was processed. The most effective means of handling this condition is to obtain the right source system file and reprocess it through the ARE.
  3. Individual record errors typically are caused by accounting rules that have not been updated for new values received from the source system. In these cases, preserving visibility into the values from the source system is critical to updating the rules and rerunning the data. A reject handler system was used to store the records from the source and the rejects out of the ARE; these were recycled on a daily basis (see the sketch after this list).
  4. The end result of almost all automated error handling is to reprocess data through the front end of the platform.
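
To make the reject-and-recycle approach in point 3 concrete, here is a minimal sketch, assuming a simple in-memory handler; names such as RejectHandler, RejectedRecord, and the translate callback are illustrative assumptions, not the production design. It shows rejected records being stored with their original source values and rerun through the rules translation once the missing rules have been added.

    # Minimal sketch of a reject-and-recycle handler (illustrative names only).
    # Records whose source values have no matching accounting rule are stored
    # with their original values, so the rules can be updated and the records
    # recycled through the translation on the next daily cycle.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class RejectedRecord:
        source_system: str
        source_values: dict       # original source attributes, preserved as received
        reject_reason: str
        business_date: date

    @dataclass
    class RejectHandler:
        rejects: list = field(default_factory=list)

        def reject(self, source_system, source_values, reason, business_date):
            # Preserve full visibility to the source values for rule maintenance.
            self.rejects.append(
                RejectedRecord(source_system, source_values, reason, business_date))

        def recycle(self, translate):
            # `translate` is the accounting rules lookup: it returns posting-ready
            # platform values, or None if the needed rule is still missing.
            still_rejected, posted = [], []
            for rec in self.rejects:
                result = translate(rec.source_system, rec.source_values)
                if result is None:
                    still_rejected.append(rec)   # rule not yet updated; keep for next cycle
                else:
                    posted.append(result)        # now translatable; send on to posting
            self.rejects = still_rejected
            return posted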

Another set of errors has nothing to do with the source data itself. These errors are caused by incorrect reference data values. When these errors occurred, they resulted in wholesale reruns of the platform. Over time, the team constructed more and more validity checks on the reference data, applied even before the sources were processed through the engines.
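
The following is a minimal sketch of that kind of pre-run validity check, assuming a handful of representative parameters (the processing date, exchange rates, and open accounting periods); the specific checks and names are illustrative, not the platform's actual rules.

    # Sketch of pre-run reference data validation (illustrative checks only).
    # The goal is to catch bad parameter values before any source file is
    # processed, rather than discovering them through a wholesale rerun.

    from datetime import date

    def validate_reference_data(processing_date, exchange_rates, open_periods):
        errors = []

        # The platform processing date must fall inside an open accounting period.
        if not any(start <= processing_date <= end for start, end in open_periods):
            errors.append(f"processing date {processing_date} is not in an open period")

        # Every required currency must carry a positive rate for the processing date.
        for currency, rate in exchange_rates.items():
            if rate is None or rate <= 0:
                errors.append(f"missing or invalid exchange rate for {currency}")

        return errors

    # Run the checks before releasing the day's sources to the engines; any
    # problems hold the run until the reference data is corrected.
    problems = validate_reference_data(
        processing_date=date(2006, 8, 31),
        exchange_rates={"USD": 1.0, "GBP": 1.88, "JPY": None},
        open_periods=[(date(2006, 8, 1), date(2006, 8, 31))],
    )
    if problems:
        print("\n".join(problems))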

Reference Data

Trying to define all the reference data as an independent exercise was equally frustrating. I use the term reference data to encompass any parameter that affects the way the system works and is not maintained in IT script files. The problem with gathering reference data requirements is that they are completely dependent upon every other component of the system. The functionality must be defined in all the other components, and that functionality broken down into what is configurable and what is not, before the total sum of reference data can be known. This is all true for reference data that is maintained on the platform; reference data maintained external to the platform is handled like a source system, which is interfaced to the platform.

That is not to say there are no major components to be built as infrastructure to manage reference data. In the end, the solution had the following major characteristics:

  1. All rules related to the Ledgers (GL and AL) were stored in a set of tables called the Reference Data Maintenance Facility (RDMF). This allowed the users to define the domain values for specific attributes in the system, such as the business unit, cost center, and nominal accounts. It also allowed them to specify various financial processing rules, such as revaluation and translation, by organizing the accounts into trees. Other relevant information for ledger processing, such as the platform processing date, exchange rates, and accounting period control tables, was also stored within the RDMF.
  2. All rules related to the accounting rules engine were grouped into a reference data component called Rules Table Maintenance (RTM). This was basically a repository that handled the table constructs (spreadsheet-like structures) required to translate the source system attributes to the finance platform attributes. The repository let finance maintain these rules in the form of ‘keys’ and ‘answers’, allowing them to enter the rules that formed the cornerstone of creating journal entries out of the business events.
  3. All rules related to performing extract (outbound) and inbound functions on the SAL records were stored in a special facility called the Rules Maintenance Facility (RMF). This was again a user-facing repository that allowed the users to specify the input structures they would like to operate on and the rules that needed to be applied to the platform structures in order to create the output structures required by downstream applications.
  4. A publish/subscribe approach was used to introduce reference data changes into the production environment. As noted above, the parameters can fundamentally affect how data is processed in the environment, and thus need to be introduced in a controlled manner.
  5. Particular care must be taken in designing the control of the platform processing date, a parameter used for consistency across all processes in the platform. Although a reference data component, it is not typically updated manually. Rather, its update cycle is carefully tied to the overall schedule of processes.
  6. A hierarchy of maintenance facilities was created, in which higher-level facilities published to lower-level facilities that ultimately published to individual processing instances. This allows maintenance functions for certain tables to be managed at higher levels (regional, corporate, or global). The individual processing instances were tied to source system cutoff schedules, which allowed the system to run worldwide, rolling with the sun as periods were cut off.
  7. Accounting and extract rules also have special characteristics because they connect values external to the system (both sending and receiving system code values) to platform values. Therefore the platform values must be validated against platform listings (in the RDMF), but source system codes must be validated against external system listings. These rules also typically have a decidedly “local” flavor to them because they must be connected to local data sources and uses.
  8. Changes to reference data must be date effective. The ability to recreate a view of the financial data as of a point in time requires retaining history and the ability to do date-effective joins (a minimal sketch of a date-effective lookup follows this list). Certain reference data can only be logically deleted, because it is typically integral to the structure of the repository itself.
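
As mentioned in point 8, date-effective joins are what allow a prior view of the data to be recreated. Here is a minimal sketch, assuming a simple versioned structure; the key, attributes, and dates are made up for the example and are not the RDMF design.

    # Minimal sketch of a date-effective lookup (illustrative structures only).
    # Each reference data change creates a new dated version rather than
    # overwriting the old one, so any point-in-time view can be recreated.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ReferenceVersion:
        key: str                  # e.g. a cost center code
        attributes: dict          # the values effective for this version
        effective_from: date
        effective_to: date        # date(9999, 12, 31) for the current open-ended version
        logically_deleted: bool = False

    def as_of(versions, key, business_date):
        # Return the reference attributes effective for `key` on `business_date`.
        for v in versions:
            if (v.key == key and not v.logically_deleted
                    and v.effective_from <= business_date <= v.effective_to):
                return v.attributes
        return None   # no effective version; treat as a reference data error

    # A cost center that moved to a new business unit on 1 July keeps both
    # versions, so a rerun of a June report still joins to the old business unit.
    history = [
        ReferenceVersion("CC100", {"business_unit": "EMEA"}, date(2005, 1, 1), date(2006, 6, 30)),
        ReferenceVersion("CC100", {"business_unit": "UK"}, date(2006, 7, 1), date(9999, 12, 31)),
    ]
    assert as_of(history, "CC100", date(2006, 6, 15)) == {"business_unit": "EMEA"}
    assert as_of(history, "CC100", date(2006, 8, 15)) == {"business_unit": "UK"}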

Adjustments

As noted in Errors and Adjustments, some would like to believe that all data should be captured correctly by source systems, and that data in reporting applications or data warehouses should never be adjusted. That idea ignores reality and, if adhered to as a principle, means users will live with inaccurate and uncorrectable data. In a very simple sense, adjustments may simply be thought of as another source system; in fact, certain business events are not automated because they happen infrequently or are of low value. The results of these acts of commerce may only be reflected in the reporting repository through manual entry – “additions,” in fact, rather than adjustments.

The adjustment facility becomes more useful if it has what might be termed a very simple reporting aspect to it, in that it allows users to select existing records from the repository as templates. The resulting process should not perform an update, but rather create new records: one reversing the sign on the amount to back it out of the existing location, and another posting the amount to the correct set of attributes. Thus the output from the adjustment process is a new set of records, capturing the “business event” of adjusting the repository.
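
A minimal sketch of that reverse-and-repost pattern follows, assuming a flat record layout; the fields and the build_adjustment name are illustrative only.

    # Sketch of an adjustment built as new records rather than an update
    # (illustrative record layout). The selected template record is never
    # modified; a reversing record backs the amount out of the old location,
    # and a second record posts it to the corrected attributes.

    from copy import deepcopy

    def build_adjustment(template, corrected_attributes, adjustment_id):
        # Return the pair of new records that capture the adjustment event.
        reversal = deepcopy(template)
        reversal["amount"] = -template["amount"]   # back out of the existing location
        reversal["adjustment_id"] = adjustment_id

        repost = deepcopy(template)
        repost.update(corrected_attributes)        # post to the correct attributes
        repost["adjustment_id"] = adjustment_id

        return [reversal, repost]

    # Move a balance that was posted to the wrong cost center. The repository
    # gains two records; the original record is untouched, the net effect on
    # the account is zero, and the balance by cost center is corrected.
    template = {"account": "4000", "cost_center": "CC100", "amount": 250.00}
    records = build_adjustment(template, {"cost_center": "CC200"}, "ADJ-0001")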

Although the outputs from the adjustment facility should simply be another set of records – more journal entries – that make marginal corrections to existing data, these records do not go through all the steps of the accounting rules. Remember that the accounting rules translate from source system values into platform values. Adjustments are typically made in platform values; no translation is required. Special care must be taken in determining how these records relate to the accounting rules triggers for reclassification.

The facility should enable adjustment of any attribute in the repository. If constructed in a very forward-thinking manner, a similar facility can be used for both reference data maintenance and adjustment. When I state any attribute should be adjustable, I include Standard Arrangement Layout (SAL) records as well, creating new date-effective versions to reflect the appropriate customer/contract attributes.

Having recognized the need for adjustment does not mean adjustments should be enabled just anywhere in the architecture. The key to consistency is consistent inclusion of adjustments in any output that might be affected by them. Thus making adjustments in the AL, and allowing them to flow to the GL and any report or output necessary, means the business event of adjustment is made once and reflected correctly everywhere.

This does not mean that every adjustment must be made at the lowest level of detail. The AL is capable of holding the summary-level GL data, and adjustments can be managed in the same way. Certain attributes on such adjustments are simply null, left undefined. There will always be adjustments which cannot be attributed to a particular customer/contract. For example, a portfolio of arrangements may be known, in the aggregate, to be worth less than the value of the individual instruments. Making an aggregated adjustment is completely acceptable. When including the entire portfolio in a report, this adjustment record should be included as well, even though it may only be shown under the attribute of “other” because it has no specific arrangement attribution.
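
As a small illustration of how such an aggregated adjustment might surface in reporting (the record layout is an assumption for the example), the portfolio-level record carries no arrangement attribution, falls under “other,” and is still included in the portfolio total:

    # Sketch of a report including an aggregated adjustment (illustrative fields).
    # The portfolio-level adjustment has no arrangement id, so it appears under
    # "other" while still being counted in the portfolio total.

    from collections import defaultdict

    records = [
        {"arrangement_id": "A-1", "amount": 600.0},
        {"arrangement_id": "A-2", "amount": 450.0},
        {"arrangement_id": None,  "amount": -50.0},   # aggregated portfolio adjustment
    ]

    by_arrangement = defaultdict(float)
    for rec in records:
        by_arrangement[rec["arrangement_id"] or "other"] += rec["amount"]

    assert by_arrangement["other"] == -50.0
    assert sum(by_arrangement.values()) == 1000.0     # total includes the adjustment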

Creation of an adjustment facility is creation of a limited use on-line transaction processing system. It should be approached as such.

Reconciliation

The last major support process is that of reconciliation. Reconciliation is necessary when two master files exist, two sources of the same data. If we had unlimited computing capacity and the ability to manage unlimited processing complexity, we would do all computer work in one system. We have neither of those, so we create multiple systems. We are then required to reconcile the systems to ensure data is not missing or inaccurate.

The first reconciliation that must occur is between the source system and the AL. As discussed under the accounting rules, if the source system provides end-of-day balances (or periodic cut-off balances if the cycle is not daily), these balances can be processed through the ARE and thus translated from source system values into platform values. These records can then be compared to the corresponding balances in the AL. Differences indicate a processing error of some kind.

Another reconciliation point is between the AL and the GL. In this instance, the level of detail maintained in each is not the same, so the AL balances must be summarized to compare to the GL balances. If differences are found for any one record in the GL, all the corresponding detailed records from the AL are displayed. In this manner, evaluation of the causes of the out-of-balance condition can begin immediately. This same set of principles is used in multiple areas of the platform.
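
A minimal sketch of that summarize-and-compare step follows, assuming the GL grain is business unit and nominal account and that amounts are simple floats; the field names are illustrative, not the platform schema.

    # Sketch of AL-to-GL reconciliation (illustrative field names). AL detail
    # is summarized to the GL grain; any difference is reported together with
    # the corresponding AL detail so investigation can begin immediately.

    from collections import defaultdict

    GL_GRAIN = ("business_unit", "nominal_account")   # assumed GL level of detail

    def reconcile(al_records, gl_balances, tolerance=0.005):
        summarized = defaultdict(float)
        detail_by_key = defaultdict(list)
        for rec in al_records:
            key = tuple(rec[k] for k in GL_GRAIN)
            summarized[key] += rec["amount"]
            detail_by_key[key].append(rec)

        breaks = []
        for key in set(summarized) | set(gl_balances):
            difference = summarized.get(key, 0.0) - gl_balances.get(key, 0.0)
            if abs(difference) > tolerance:
                # Out of balance: report the break with the AL detail behind it.
                breaks.append({"key": key,
                               "difference": difference,
                               "al_detail": detail_by_key.get(key, [])})
        return breaks

The same compare-and-explain pattern applies to the source-to-AL reconciliation described above, with the translated source balances taking the place of the GL.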

A couple of years after the system went live, I heard Dave Willis say that one of the smartest things done on the platform was building the automated reconciliation processes. Managing the detail involved in the platform is challenging, and not using the power of the machine to spot where and when differences occurred would make it nearly impossible. Until our ability to scale to the even higher volumes of data needed for analytical processes increases, reconciliation will continue to be an important function in maintaining the quality of the reporting environment.