Two weeks ago we talked about how Reconciliation drives financial system costs. Last week we discussed Data Fragmentation's impact. This week we consider Data Integration and Data Curation, the functions we build in response to both of these, and another major cost driver for the finance department.

At times I describe much of finance work to get to transparency (answering why the results are what they are) as manually running the computer backwards.

Our summary reporting systems (like the General Ledger) give us an audited result, something we trust, but they do not tell us why we ended up with that result. To understand why, we most frequently need more detail.

Many people in finance spend a great deal of time going back to the originating systems, the “source systems” or transaction processing systems. They take dumps of the originating transactions, put them in a spreadsheet, and begin adding and deleting rows to recreate the summary result they already knew. Only once they have recreated the result can the analysis finally begin to answer why that was the result.
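The manual exercise described above amounts to re-adding detail transactions until they tie to the summary balance. A minimal sketch, with hypothetical account names and amounts standing in for a real source-system dump:

```python
# Sketch of "running the computer backwards" by hand:
# re-add detail transactions from a source-system dump until they
# tie to the summary balance already trusted in the ledger.
# All account names and amounts here are made up for illustration.

detail_dump = [
    {"account": "4000-Revenue", "amount": 125.00},
    {"account": "4000-Revenue", "amount": -25.00},
    {"account": "5000-Expense", "amount": 40.00},
]

# The audited summary result we already know from the General Ledger.
gl_summary = {"4000-Revenue": 100.00, "5000-Expense": 40.00}

# Recreate the summary from the details.
recreated = {}
for txn in detail_dump:
    recreated[txn["account"]] = recreated.get(txn["account"], 0.0) + txn["amount"]

# Check that each recreated balance ties to the ledger.
for account, balance in gl_summary.items():
    tied = abs(recreated.get(account, 0.0) - balance) < 0.01
    print(account, "ties" if tied else "does not tie")
```

In a spreadsheet this same loop is done by eye, row by row, which is why it consumes so much analyst time.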

The funny thing (or not so funny thing) is that our systems had those details available when they created the summary in the first place. But because our ledgers were automated when computing costs were so high, we threw the details away as fast as we could. Thus we are left with running the computer backwards, manually.

Our fragmented ledgers and data impede our finance processes today. Using our increased computing capacity to create an Instrument Ledger, which maintains the needed details (note this is NOT a data warehouse), has proven effective at a number of organizations.
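The core idea of keeping the detail can be sketched simply: retain the instrument-level rows, derive the summary from them on demand, and answer "why" by drill-down rather than reconstruction. Field names and data below are hypothetical illustrations, not a real schema:

```python
# Sketch of the Instrument Ledger idea: the detail rows that produced
# the summary are retained, so the summary is derivable and the "why"
# is answerable without going back to source systems.
# Instruments, accounts, and amounts are hypothetical.

instrument_ledger = [
    {"instrument": "Loan-001", "account": "Interest Income", "amount": 60.00},
    {"instrument": "Loan-002", "account": "Interest Income", "amount": 40.00},
]

def summarize(rows):
    """Derive the summary balance per account from the retained detail."""
    summary = {}
    for row in rows:
        summary[row["account"]] = summary.get(row["account"], 0.0) + row["amount"]
    return summary

def drill_down(rows, account):
    """Answer 'why is this balance what it is?' directly from the detail."""
    return [r for r in rows if r["account"] == account]

print(summarize(instrument_ledger))
print(drill_down(instrument_ledger, "Interest Income"))
```

Contrast this with the summary-only ledger, where `summarize` was run once at posting time and its inputs discarded, leaving `drill_down` impossible.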

This is Episode 130 of Conversations with Kip, the best financial system vlog there is.