“I wouldn’t give a fig for the simplicity on this side of complexity; I would give my right arm for the simplicity on the far side of complexity.” – Oliver Wendell Holmes.

On May 9, 1996, I called Eric Denna, a friend who had been my college professor nearly ten years before. I was working as a consultant implementing large financial reporting systems. As a professor, Eric had done consulting work, but he had recently accepted a position as the CIO of a publishing company. We caught up for a few moments, and then I asked him, since he no longer did part-time consulting, what advice he would give a consultant. He pondered for a moment and then said, “Study history. Every day I am in some meeting where a decision is made, and I wonder why it ended up that way. As I dig a little deeper, I learn of reasons going back multiple years that influence things today. Study history.”

This book is in part a history book, describing the principles behind financial reporting. But it is a history book with an eye toward the future. In the fall of 2005, I was having a steak dinner in Buffalo with Rick Roth, my consulting partner at the time and another principal character in this book. As we discussed our current project and some of the problems we were facing, he made this observation about our role on it: “The true test of an expert is the ability to predict.” Rick, who is also a licensed pilot, understood that predicting requires first accurately fixing one’s position, and second, understanding trajectory and speed by fixing a second position. Knowing where we have been and where we are helps predict where we are going. “Study history.”

Principles are very powerful things. They are neatly packaged nuggets of truth that can be applied to a multitude of situations. Laws founded on truth do nothing more than predict outcomes. Early in my education, Eric exposed me to a theory called the REA accounting model. Although the terms are not completely accurate or precise, I refer to this theory as business events-based accounting or business events-based insight.1 I have found that this theory expresses some very powerful principles for financial reporting, grounded in natural laws of computing and information systems. The principles are not limited to financial reporting; they apply to many types of business reporting and to the computer systems that support it. The theory might be considered an alternative to the traditional accounting model, which goes back hundreds of years. Our computer systems have automated the traditional approach without considering the implications of alternative approaches.

The principles involved in the theory have been discovered and applied in other areas of computing, but they are not often articulated as principles, or if they are, they are not well-known. Perhaps they are not well-known because accounting and computer theory are both mind-numbingly boring. To help the reader endure such tedium, I have chosen to describe these principles through the story of my career. The lessons come from four primary teachers: Eric L. Denna the Professor, Richard K. Roth the Partner, the project team headed by Jay R. Poulos, and Douglas F. Kunkel the Programmer.

If Jay Poulos is anything, he is practical. At times, I have found him frustratingly practical. But that quality, shared by Rick and Doug, has kept my experiences grounded in what is really possible. So although this book is about a theory, it is not theoretical; it is not academic. This book describes real implementations of a significant portion of the theory over numerous years for well-known companies. It is about the grinding and refining of the theoretical as it meets practical problems, exposing new insights.

“Study history,” Eric said. It is good advice because principles are usually additive: it is important to understand the basics of computing before understanding large-scale system implementations, and being clear about bookkeeping is important to understanding the accounting profession. I have determined to write this book so that anyone who has taken a basic accounting course and created a spreadsheet can understand the concepts and implications of these ideas.2 Of course, the reader will determine whether I have been successful.

I am aware that, because of this goal, the book may please no one. Accountants may find my review of accounting tedious and simplistic at times. Computer scientists will likely make the same complaint about my review of information systems. I doubt anyone will complain that it is not comprehensive enough, even though it cannot cover every computer process or accounting procedure. Some scholars now argue that post-reductionist analysis provides greater insight into the implications of new innovations.3 Rick has always held that the most effective people are those who are experts in more than one discipline. Fully understanding the intersection of accounting and computers will yield many more fruits than learning more and more about less and less of either field. People with deep capabilities in multiple disciplines produce the most creative results. This book attempts to expose new principles.

If the basics begin to bog down or bore, it is possible to skip selected sections. For example, the REAL analysis method describes how to build a business event-based system from the ground up, as well as some aspects of IT methodology for creating systems. This idea sits at the far end of the theory’s potential implications; over the course of my career, I have not seen the theory applied at this extreme. Even so, I think it important to understand, particularly if we are attempting to chart where we might be headed. It is likewise not necessary to be able to read a computer dump, as explained in Abends, but I do believe it is important to appreciate how it is done. Care should be taken, though, not to treat too much of the material as optional. Many computing textbooks use made-up computer languages to demonstrate principles; the SAFR details in Part 5 might profitably be approached in the same way.

At a dinner with Jay, Doug, and Rick in Oregon in the summer of 2008, I said I was exasperated by the constant bickering between the business users of the systems being built and the IT department in charge of building them. Jay said, “Kip, if I were in Finance, I would be angry at the service from IT. Years ago, Finance was consistently served by IT people who understood accounting. But then IT decided it was responsible for a lot of other things that had no direct benefit to the business organization, and disbanded these knowledgeable groups of people. That was a tremendous disservice to the business.” Applying the technology effectively requires understanding the problems being solved.

Doug, as the programmer rather than the professor, is not a man given to thinking much about theory. I was surprised one day to hear him comment, with tremendous conviction, that the only hope for truly flexible business system architectures lies in adopting the principles of business event-based systems. Those principles are quite wide-ranging in their implications. They find connections to data warehousing, business intelligence, ERP systems, parallel processing, code generation, legacy systems conversion, subsystem architecture, and even systems development methodologies.

What was it that Doug saw that was so convincing after years of building systems? Why should anyone invest the time to understand this theory? Let me state briefly what I hope you will understand at the end of this book.

Part 1 – The Pearl introduces the story line and summarizes the implications of the theory by recapping the basic problem and solution. If you want a short summary of the problem and the solution, read The Problem and The Solution. Anyone interested enough to read any of this book should read this part.

Part 2 – The Professor describes my time in college: introductory courses in computers, using the analogy of a business meeting, and basic bookkeeping and accounting, using a personal financial system example. Through Eric Denna’s explanations, it shows that what we consider a very straightforward way of keeping track of money and information actually has a great deal of variability and flexibility to it. Building upon the personal financial system, it shows that multiple choices are possible about how to record the same transactions and produce the same outputs. It demonstrates that a very different approach from debits and credits can be used. It then presents the methodology Eric taught for designing a computer system, which could go even further in redefining the accounting system. People interested in really understanding the heart of the solution need to understand this part.
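The idea that the same transactions can yield the same outputs under very different recording schemes can be hinted at with a small sketch. This is purely illustrative and not from the book; the record layout and activity names are my own assumptions. The point is that if the raw business events are stored once, both a direct balance view and a classic debit/credit journal can be derived from them:

```python
from collections import defaultdict

# Hypothetical business events for a personal financial system:
# each event records what happened, not how it should be posted.
events = [
    {"resource": "cash", "amount": -500, "activity": "pay rent"},
    {"resource": "cash", "amount": 2000, "activity": "receive salary"},
    {"resource": "cash", "amount": -120, "activity": "buy groceries"},
]

# View 1: balances by resource, computed directly from the events,
# with no debits or credits involved.
balances = defaultdict(int)
for e in events:
    balances[e["resource"]] += e["amount"]

# View 2: a traditional debit/credit journal, derived from the
# very same event records.
journal = []
for e in events:
    if e["amount"] >= 0:
        journal.append(("DR", e["resource"], e["amount"]))
    else:
        journal.append(("CR", e["resource"], -e["amount"]))

print(balances["cash"])  # 1380 from either view
```

Either view is a projection of the events; neither is the "real" data, which is the heart of the flexibility Part 2 explores.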

Part 3 – The Partner introduces Rick Roth and describes the reasons the theory is not implemented more widely. The reasons include confusing the means for the ends, as traditional methods of record keeping are mistaken for accounting standards; neglect of a long-known processing paradigm, the batch program as opposed to on-line processing, and of the tools that support its long-forgotten aspects; adoption of the traditional accounting system or subsystem as the basic architecture for business systems, at the expense of scale; and the way reduction of processing patterns, and the industry tool hegemony that follows, creates barriers to alternative approaches. It lays the foundation for an alternative system that would balance the demands of today’s accounting requirements while allowing the theory to be implemented much more fully, with wide-ranging implications. It shows the consequences of ignoring, or being ignorant of, these principles. People interested in understanding why traditional approaches to the problem won’t work need to understand this part.

Part 4 – The Projects introduces Jay Poulos and the broader SAFR team and describes the general pattern of how the system has been implemented over the years. It introduces the Scalable Architecture for Financial Reporting, or SAFR, not because the tool is necessary to implement the theory, but because its development was guided by the theory; understanding it thus exposes elements of the practical application. This part also discusses what might be termed the SAFR method. This includes practical ways to identify business events in legacy systems and the importance of focusing on the rows of data, not the columns; a practical approach to data cleansing and to creating new reference data that supports new insights from the events; and data capture and management of rules and reference data for loading and extracting data, as well as other steps. Anyone trying to apply the theory to a particular problem needs to understand the steps outlined in this part.

Part 5 – The Programmer describes the solution programmed at its core by Doug Kunkel over 25 years. It begins by outlining the fundamental patterns and functions of a reporting system: select, sort, summarize, and format. It then discusses how to address performance, including single-pass architecture, parallelism, code generation, and process piping. It covers customization, common-key data buffering, and information generation. Anyone interested in architecting a SAFR solution needs to understand this part.
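The four functions named above can be sketched in miniature. This is not SAFR code, only an illustration of the pattern; the field names and report layout are my own assumptions:

```python
# A minimal sketch of the four reporting functions: select, sort,
# summarize, and format, applied to a list of event records.
events = [
    {"unit": "east", "account": "sales", "amount": 300},
    {"unit": "west", "account": "sales", "amount": 200},
    {"unit": "east", "account": "sales", "amount": 100},
    {"unit": "east", "account": "rent", "amount": -50},
]

# Select: keep only the rows the report needs.
selected = [e for e in events if e["account"] == "sales"]

# Sort: order the rows by the report's key.
selected.sort(key=lambda e: e["unit"])

# Summarize: collapse detail rows into totals by key.
summary = {}
for e in selected:
    summary[e["unit"]] = summary.get(e["unit"], 0) + e["amount"]

# Format: lay the totals out for presentation.
for unit, total in summary.items():
    print(f"{unit:<8}{total:>8}")
```

Performance techniques such as single-pass architecture amount to doing this work for many reports at once, in one read of the events, rather than once per report.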

Part 6 – The Platform describes a working system, for one of the world’s largest financial institutions, built by the next generation of men and women involved in exploring, building, and implementing these concepts. The system includes an accounting rules engine, which brings in data from source systems and allows Finance to create the additional data it specifically needs; an arrangement ledger, with balance-based and allocation-type processes; general ledger processes, providing control at a summary level; and arrangement and general ledger extract processes feeding data to an on-line version of SAFR. This part also describes ancillary and supporting functions, including reference data maintenance, platform adjustment and process monitoring, error handling, time zones, and scalability factors.
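In highly simplified form, an accounting rules engine of the kind mentioned above maps attributes of incoming source-system events to ledger postings. The rule shapes, account names, and event fields below are my own assumptions, a sketch of the idea rather than the platform's actual design:

```python
# Hypothetical rule table: a predicate on the source event paired
# with the ledger account that events matching it should post to.
rules = [
    (lambda e: e["product"] == "loan" and e["amount"] > 0,
     "loan-interest-income"),
    (lambda e: e["product"] == "deposit",
     "deposit-interest-expense"),
]

def post(event):
    """Return (account, amount) postings for one source-system event."""
    return [(account, event["amount"])
            for predicate, account in rules
            if predicate(event)]

postings = post({"product": "loan", "amount": 125})
```

Because the rules live in data rather than in source-system code, Finance can add the classifications it needs without changing the systems that generate the events.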

Part 7 – The Plan closes this book with a few words about where all this might be headed. The true test of an expert is one’s ability to predict.

Obviously I have been tutored by many people. Yet if this book contains errors, I alone am responsible for them.


1 Although various other terms are used throughout this book to describe McCarthy’s theory, this book is about the Resources-Events-Agents (REA) accounting model. As McCarthy noted in the paper’s abstract: “Researchers often equate database accounting models in general and the Resources-Events-Agents (REA) accounting model in particular with events accounting as proposed by Sorter (1969). In fact, REA accounting, database accounting, and events accounting are very different. Because REA accounting has become a popular topic in AIS research, it is important to agree on exactly what is meant by certain ideas, both in concept and in historical origin. This article clarifies the intellectual heritage of the REA accounting model and highlights the differences between the terms events accounting, database accounting, semantically-modeled accounting, and REA accounting. It also discusses potentially productive directions for AIS research.” Cheryl L. Dunn and William E. McCarthy, “The REA Accounting Model: Intellectual Heritage and Prospects for Progress,” The Journal of Information Systems (Spring 1997), 31-51 (accessed May 2009).
2 David McCullough, writing of the self-educated American Revolutionary War general Nathanael Greene, said, “It was a day and age that saw no reason why one could not learn whatever was required–learn virtually anything–by the close study of books…” David McCullough, 1776 (Simon & Schuster, 2005), 23.
3 James Burke, Twin Tracks: The Unexpected Origins of the Modern World (Simon & Schuster, 2003).