Performance / Storage Comparisons: MONEY vs. DECIMAL
As you may already know, I am not a big fan of the MONEY data type, because of its inflexibility, its accuracy problems, and the expectations the name of the type evokes in new users. If I had my way, MONEY would become a synonym for DECIMAL in SQL Server 2008 (allowing for specific precision and scale), and be removed in the following version. Of course there are people out there who either don't feel as strongly as I do, or feel the opposite: that MONEY should be here to stay.
After a recent discussion about the pros and cons of using MONEY vs DECIMAL for storing currency (and even non-currency) data, curiosity got the better of me. One of the arguments for the MONEY data type was performance. No supporting data was provided, of course. So I decided to conduct some tests myself. I wanted to measure how MONEY compared to DECIMAL data types both in their original implementations and using new technologies available in SQL Server 2005 (VARDECIMAL storage format) and SQL Server 2008 (page and row compression).
The person arguing for MONEY showed the space used by MONEY compared to the same information stored in a DECIMAL(20,4) column. Not all that surprisingly, the latter was slightly larger. But is that the whole story? No, for two reasons. One is that the *performance* of these choices was not compared, and the other is that DECIMAL(20,4) is not a very realistic requirement for storing currency data (unless you are storing the pricing information for luxury yachts or aircraft carriers, in which case you can probably drop the decimal places altogether and use INT or BIGINT). For the rest of us, a better choice would be DECIMAL(8,2) or DECIMAL(10,2).
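If you want to see the storage difference for yourself, a quick check with DATALENGTH illustrates it (the literal value here is arbitrary; the byte counts follow the documented DECIMAL storage sizes):

```sql
-- MONEY is always 8 bytes; DECIMAL storage depends on precision:
--   precision 1-9 = 5 bytes, 10-19 = 9 bytes, 20-28 = 13 bytes
SELECT
    DATALENGTH(CONVERT(MONEY,         12345.67)) AS money_bytes,        -- 8
    DATALENGTH(CONVERT(DECIMAL(8,2),  12345.67)) AS decimal_8_2_bytes,  -- 5
    DATALENGTH(CONVERT(DECIMAL(10,2), 12345.67)) AS decimal_10_2_bytes, -- 9
    DATALENGTH(CONVERT(DECIMAL(20,4), 12345.67)) AS decimal_20_4_bytes; -- 13
```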
I created 11 databases, each with a single table containing a single column:
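I won't reproduce all eleven setup scripts here, but each one followed roughly this pattern; the database, table, and column names are placeholders of my own, and the compression and VARDECIMAL options were of course only applied in the relevant variations:

```sql
-- One database per variation; names here are placeholders
CREATE DATABASE MoneyTest;
GO
USE MoneyTest;
GO
CREATE TABLE dbo.Price
(
    Price MONEY NOT NULL   -- or DECIMAL(8,2), DECIMAL(10,2), DECIMAL(20,4), etc.
);
GO

-- For the SQL Server 2008 variations, compression was applied to the table:
-- ALTER TABLE dbo.Price REBUILD WITH (DATA_COMPRESSION = ROW);   -- or PAGE

-- For the SQL Server 2005 VARDECIMAL variations:
-- EXEC sys.sp_db_vardecimal_storage_format N'MoneyTest', N'ON';
-- EXEC sys.sp_tableoption N'dbo.Price', 'vardecimal storage format', 1;
```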
Keeping these tables in separate databases allowed for isolation of several factors and measurements, including database level settings, log growth, data file size and even backup time.
Next, I populated the table in each database with approximately 390,000 rows of varying length decimal data (based on calculations against object_id from a triple cross join of sys.objects on itself), and measured the insert times and storage requirements. Here is how they stacked up:
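For reference, the population step was along these lines; this is only a sketch, and the exact expression against object_id (as well as the table and column names) is an assumption on my part:

```sql
-- A sketch only; the real expression against object_id was more involved
INSERT dbo.Price (Price)
SELECT CONVERT(DECIMAL(20,4),
         ABS(o1.[object_id] % 100000)
       + ABS(o2.[object_id] % 100) / 100.0)
FROM sys.objects AS o1
CROSS JOIN sys.objects AS o2
CROSS JOIN sys.objects AS o3;   -- ~73^3 rows, roughly 390,000 in a fresh database
```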
Then I performed an update that affected all rows, making sure that roughly 20% of them would see a substantial change in the number of significant digits (e.g. by adding 1,000,000). Here is the performance comparison, as well as how the data and log were affected:
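The update was shaped roughly like the sketch below; the CASE predicate is just an assumed way to isolate about 20% of the rows, not the exact logic I used:

```sql
-- A sketch of the update; roughly 1 in 5 rows gets a big jump in magnitude
-- (for the DECIMAL(8,2) variation a smaller bump is needed to avoid overflow)
UPDATE dbo.Price
SET Price = Price + CASE
                      WHEN CONVERT(INT, Price) % 5 = 0 THEN 1000000  -- significant change
                      ELSE 1                                         -- minor change
                    END;
```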
Next I compared the time and cost of performing a SELECT COUNT(*) with a WHERE clause against the column:
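The query itself was nothing more elaborate than this (the predicate value is a placeholder):

```sql
-- A sketch of the COUNT test against each column type
SELECT COUNT(*)
FROM dbo.Price
WHERE Price > 500;
```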
And finally, I performed native and compressed backups of each database, comparing execution time and output size:
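The backup commands were plain BACKUP DATABASE statements, one native and one using SQL Server 2008 backup compression; the paths and database name below are placeholders:

```sql
-- Native backup
BACKUP DATABASE MoneyTest
    TO DISK = N'E:\Backups\MoneyTest.bak'
    WITH INIT;

-- Compressed backup (SQL Server 2008 backup compression)
BACKUP DATABASE MoneyTest
    TO DISK = N'E:\Backups\MoneyTest_compressed.bak'
    WITH INIT, COMPRESSION;
```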
The following chart summarizes everything performance-wise. An orange cell with a dot means that database performed the best; an x on a red background means it performed the worst.
And this chart summarizes all things size-wise:
Of course, there is nothing overly definitive here. DECIMAL(10,2) with row compression enabled earned the most "first place" metrics, while uncompressed MONEY and the VARDECIMAL variations never finished near the top of the class. But you can judge the results for yourself, and make decisions based on your own priorities.
Alex asked for some metrics on more complex operations like SUM(). I ran some tests using both SUM() and AVG(). The logical reads of course are the same as all the others, and the scan costs remained unchanged as well. But as for the observed performance of both calculations (compute scalar cost was identical for both operations), see the following chart:
Again, this was an average over 10 tests. Note that I did not append these results to the summary charts I delivered above. And sorry about the slightly different-looking screen shot.
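For reference, the aggregate queries were essentially of this shape (same placeholder names as above):

```sql
-- A sketch of the aggregate tests
SELECT SUM(Price) FROM dbo.Price;
SELECT AVG(Price) FROM dbo.Price;
```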
Please take into account that these tests were performed on a dual-core laptop, with the database files created on external storage. There are many other tests I could have run to glean more performance and storage data, against a much larger data set and on production-class hardware, but for the scope of this post I just wanted to look at the most basic operations. I repeated the tests 10 times from start to finish, so each metric reported is an average of 10 runs (in many cases the numbers were identical every time).
This was a very tedious exercise to perform. If you would like to perform your own tests, with your own sample data, and on your own hardware, I will more than gladly share my scripts. I'd post them here right now, but they are scattered and not distribution-friendly at this point.