A fellow MVP mentioned today one of the few engine enhancements we'll be seeing in SQL Server 2008 R2: Unicode compression. You can read more about the topic in Books Online, but basically what is going to happen is that nchar and nvarchar columns (but not nvarchar(max) columns), in objects that are row- or page-compressed, can benefit from additional compression, where realistically you can cut your storage requirements in half, depending on the language / character sets in use.
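(If you're not sure whether a given table is already row- or page-compressed, one way to check is the data_compression_desc column in sys.partitions. A minimal sketch, assuming the dbo.test table created in the script below:)

SELECT o.name, p.index_id, p.data_compression_desc
  FROM sys.partitions AS p
  INNER JOIN sys.objects AS o
    ON p.[object_id] = o.[object_id]
  WHERE o.name = N'test'
    AND o.[schema_id] = SCHEMA_ID(N'dbo');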
But I'm the kind of guy who has to see it to believe it! So, I mocked up a quick test that stores some Danish/Norwegian characters (ø, å, æ), and ran it both on a SQL Server 2008 instance (SP1 + CU3, 10.0.2723) and on a SQL Server 2008 R2 instance (10.50.1092).
USE tempdb;
SET NOCOUNT ON;
GO

CREATE TABLE dbo.test
(
    foo nvarchar(2048)
)
WITH (DATA_COMPRESSION = ROW);
GO

INSERT dbo.test(foo)
    SELECT CONVERT(varchar(32), RAND()) + REPLICATE(N'øååøæ', 300);
GO 50000

EXEC sys.sp_spaceused N'dbo.test';
GO

DROP TABLE dbo.test;
If you compare those results on SQL Server 2008 vs. 2008 R2, the difference is astounding: a space savings of roughly 60%, FOR FREE. That's right, this is an enhancement you get just by upgrading… I did nothing differently in creating these tables other than continuing to use compression. (And note that I also performed this test with page compression, and the results were identical all around.) Keep in mind that if you upgrade at some point (you can't upgrade to R2 in its present form), you will need to rebuild your indexes in order to apply the new compression method across the entire table.
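For completeness, that rebuild is nothing exotic; a minimal sketch against the test table from above (your own table names and compression setting will differ, of course):

-- re-apply the existing setting so the engine rewrites every row
ALTER TABLE dbo.test REBUILD WITH (DATA_COMPRESSION = ROW);

-- or, to cover the nonclustered indexes as well
ALTER INDEX ALL ON dbo.test REBUILD WITH (DATA_COMPRESSION = ROW);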
And of course, by "free," I am not talking about the licenses. As with row and page compression, this feature is only available in the Developer, Enterprise, and Enterprise Evaluation editions, and that will be the case until – I predict – SQL Server 2016 Service Pack 1.
Now, I'll admit, the test is not super-realistic, and it is biased toward good compression (since the same pattern is repeated over and over again on every row and on every page), so it demonstrates something pretty close to the best-case scenario. But even the worst-case scenario is not exactly "bad" — you may not see any gain at all, but you can't lose anything either, because the compression algorithm in SQL Server 2008 is smart enough to know when it would actually *lose* space by implementing compression, and won't do it in that case.
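(And if you want a ballpark figure before committing to anything, sys.sp_estimate_data_compression_savings will sample the data and report estimated sizes with and without the requested setting. A quick sketch against the test table — I haven't verified how well the estimate accounts for Unicode compression specifically:)

EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'test',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = N'ROW';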
At some point I will test the performance of writing, reading, and seeking against a Unicode compressed table (and I will come up with more plausible test data at that point). Because nothing is ever really free, is it? Stay tuned to find out.
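A rough sketch of that kind of comparison (the query is just a placeholder that forces every value to be read; you'd run it against compressed and uncompressed copies of the same table and compare elapsed time and logical reads):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- forces every foo value to be read (and decompressed, where applicable)
SELECT MAX(LEN(foo)) FROM dbo.test;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;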
Agreed, I am definitely interested in seeing how this affects performance and if there is any noticeable impact over and above page/row compression. I think it all depends on the nature of the data and the decisions the engine has to make as it compresses/decompresses.
In the long run, we're I/O-bound, not CPU-bound, so we're probably better off getting any I/O gain we can, even if it costs us a little CPU. YMMV.
One issue is that compression brings a lot of CPU overhead; in my tests I've seen up to 2x the time required for retrieval of hot pages. I wonder if UTF-8 support would be just as beneficial without as much overhead?
Denis:
> unfortunately I use almost no Unicode at all at the moment
But that means you are saving space already 🙂
255 came at us in a few cases as a "requirement"… wherever 64 was not enough, they pushed for 255. Their magic number, not mine. 🙂 And the 64, well, that was legacy… it was there long before I ever got my hands on the schema.
<Sarcasm>Mmmm, that 255 number sounds familiar; did you upgrade this from Access with the upgrade wizard?</Sarcasm>
I do have some Unicode, but it is in lookup tables; for example: 道琼斯第一财经中国600指数
I use a lot of Unicode, since we have to support several foreign languages and symbols like Euro and pound. All of this data gets entered via a Web UI. Unfortunately, while about 1/3 of these columns are NVARCHAR(64 or 255), the rest are NVARCHAR(MAX), which won't benefit from this compression at all… even when the data is stored in-row.
That is a nice storage saving; unfortunately, I use almost no Unicode at all at the moment