On one large project, the CFO had heard that some of the project's troubles stemmed from using a "fully normalized data model," and asked me what that meant and why it created problems. I had an idea of how to explain it in a simple way that would make sense to anyone who has worked with multiple tabs of a spreadsheet.

Scope of the Financial System

The financial system for an organization has a very broad scope. It effectively accumulates the impact of every business event in the organization into a set of standardized reports. And it is supposed to do this consistently over time.

So business events as different as paying an employee's wages, receiving inventory at a manufacturing site, and estimating how many customers won't pay their bills all have to be stated in a set of similar characteristics that can be categorized. Not a small task.

Data Structure Affinity

Yet all these business events are captured in transactional systems, each of which has its own important elements and attributes. As financial reporting has grown to require greater and greater transparency, the desire to have more and more detail available in the financial system has grown as well.

So what do most projects do?

Well, they begin by assigning a team to each of the source systems to extract the transactional data for use in the financial system. That makes sense; the project gains some parallelism by having these teams work on their systems at the same time.

But what target structure do they load the data into?

Here’s where the trouble usually begins. Typically, the reporting needs are not well developed early in the project, and so the ultimate uses of the data are not known. For the teams to make progress, it is suggested that all the data they can find be pulled into the new reporting system; that way, when the reporting needs become clear, the data will be available to support whatever might be required.

This means the foundational data in the new system typically looks much more like the source system than the consolidated financial reporting it is meant to provide.

Excel as an Example

So, to make this a bit simpler to understand, imagine each team is given one tab of a spreadsheet. Having complete control over that tab, the team gets to define the column headings and the values in each row. There might be some attempt to standardize values across the tabs, but for the most part each tab tends to look much more like its source system than the ultimate financial reports that will be required.
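To make the analogy concrete, here is a minimal sketch, in Python rather than Excel, of what two such tabs might look like. The payroll and inventory names, fields, and values are purely hypothetical, invented only to show how little the tabs have in common:

```python
# Hypothetical "tabs", each shaped like its own source system
# rather than like the financial report it must eventually feed.

payroll_tab = [
    {"emp_id": "E100", "pay_date": "2023-03-31",
     "gross_pay": 5200.00, "cost_center": "CC-14"},
]

inventory_tab = [
    {"po_number": "PO-7788", "receipt_date": "2023-03-30",
     "qty": 120, "unit_cost": 3.75, "plant": "TX01"},
]

# Nothing lines up: different column names, different grains,
# different coding schemes for the "same" organizational concepts.
```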

Now, once the data is loaded into the tabs for each of the source systems, it’s time to begin developing those reports. So, one adds a new tab to the spreadsheet for the report, and puts on it the columns of the report to be developed.

And then the fun begins: one now has to put a formula into the first cell that picks out and combines the right data from each of those source system tabs. If there are more than two or three sources, imagine the complexity of the IF statement in each of those cells.
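Restated in code, each report cell's formula amounts to something like the sketch below, continuing the hypothetical payroll and inventory tabs from above; every source needs its own branch of conditional logic just to produce a single report column:

```python
def report_amount(row: dict, source: str) -> float:
    # The spreadsheet IF statement, restated in Python: one branch per source,
    # and a similar cascade would be needed again for account, entity, period...
    if source == "payroll":
        return row["gross_pay"]
    elif source == "inventory":
        return row["qty"] * row["unit_cost"]
    elif source == "bad_debt_estimate":
        return -row["estimated_uncollectible"]
    # ...another branch for every additional source system
    else:
        raise ValueError(f"Unknown source: {source}")
```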

All the complexity of gathering the data is embedded in those formulas.

The system will never work.

Conclusion

Data structures matter, particularly in financial reporting. But it isn’t about modeling all the diversity of the source data. No, it is about finding the commonality required across all those sources. That’s one key to making enterprise financial reporting systems work.
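One way to picture that commonality, again as a hedged sketch with invented field names: define a single standard row shape up front and require every source to translate into it when the data is loaded, so the reporting layer never needs to know where a row came from.

```python
from dataclasses import dataclass
from datetime import date

# A hypothetical common structure: every business event, from any source,
# is restated in this shape before it reaches the reporting layer.
@dataclass
class JournalRow:
    posting_date: date
    entity: str
    account: str
    amount: float

def payroll_to_journal(row: dict) -> JournalRow:
    # The mapping complexity lives here, once per source at load time,
    # instead of being repeated inside every report formula.
    return JournalRow(
        posting_date=date.fromisoformat(row["pay_date"]),
        entity=row["cost_center"],
        account="6000-Wages",
        amount=row["gross_pay"],
    )

# Reports then simply filter and sum JournalRow records,
# regardless of which source system produced them.
```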

This is Episode 223 of Conversations with Kip, the best financial system vlog there is. Literally learn more about ledgers and financial systems at LedgerLearning.com.

Watch all episodes in order at the Conversations with Kip Playlist