After performing some tests to determine run times for reports given a potential hardware environment (as described last week in Step 3), the next step is to evaluate the results.

Remember, I’ve suggested that the ultimate in report flexibility comes from doing all reporting from transactional detail: if we capture an attribute on the transaction, we can use it in a report whenever we want. But doing so can mean very large data volumes must be scanned to produce the reports, so ultimate flexibility comes at the price of computing capacity.

What is that price? That’s what Step 4 in the estimating process is about.

Note, though, that there are two variables: one is compute capacity (and its cost), the other is elapsed time. Although the trade-off breaks down at the extremes, one can generally be exchanged for the other: a small compute capacity can be stretched over a long production time, and a large capacity can shorten it.
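To make that trade concrete, here is a minimal back-of-the-envelope sketch. The row counts, row width, and per-core scan rate are invented for illustration; they are not figures from the tests described here.

```python
# Back-of-the-envelope estimate: elapsed time = data scanned / effective scan rate.
# All figures below are illustrative assumptions, not measured results.

transactions = 20_000_000_000         # rows of transactional detail to scan (assumed)
bytes_per_row = 200                   # average row width in bytes (assumed)
scan_rate_per_core = 100 * 1024**2    # bytes/second one core can scan (assumed)

def elapsed_hours(cores: int) -> float:
    """Elapsed time to scan the full detail with a given number of cores."""
    total_bytes = transactions * bytes_per_row
    seconds = total_bytes / (scan_rate_per_core * cores)
    return seconds / 3600

for cores in (4, 16, 64):
    print(f"{cores:>3} cores -> {elapsed_hours(cores):5.2f} hours per full scan")
```

Quadrupling the capacity cuts the elapsed time to a quarter, which is exactly the exchange described above.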

In the video I give examples of report processes that trade these two variables. One example is a daily reporting process I witnessed where, after the new ERP system was installed, the initial test of one of the critical daily reports ran for four days and was then canceled without ever producing the report. Obviously that is not adequate for a daily process, and it leaves no room for the other dozen critical daily reports that are required as well.

If the reports can be produced within the given hardware environment and time window, then the system can be built from the parameters gathered, reporting directly from transactional data. If not, next week we’ll talk about how to evaluate which posting processes to add so the reports can be produced from aggregated data.
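As a purely hypothetical illustration of that decision, extending the sketch above: compare each critical report’s estimated run time from the Step 3 tests against the available batch window, and flag anything that doesn’t fit as a candidate for aggregation. The report names, run times, and six-hour window are invented for the example.

```python
# Hypothetical decision sketch: which reports fit the nightly batch window
# when produced from transactional detail?  All figures are invented.

batch_window_hours = 6.0

estimated_run_hours = {           # per-report run-time estimates from testing
    "Trial balance": 1.5,
    "Cost center detail": 4.0,
    "Regulatory summary": 9.0,
}

for report, hours in estimated_run_hours.items():
    verdict = "fits window" if hours <= batch_window_hours else "candidate for aggregation"
    print(f"{report:<20} {hours:4.1f} h -> {verdict}")

total = sum(estimated_run_hours.values())
print(f"\nAll reports combined: {total:.1f} h vs. {batch_window_hours:.1f} h window -> "
      f"{'OK' if total <= batch_window_hours else 'aggregation (or more capacity) needed'}")
```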

This is Episode 137 of Conversations with Kip, the best financial system vlog there is.