Increased limits for columns / indexes / statistics
March 11th, 2008

I am still on the fence about whether this is a good thing or a bad thing.  However, in the next build of SQL Server 2008 you will be able to get your hands on, you will find that some element limits have changed, just a bit.  With the introduction of sparse columns, you can now jam 30,000 columns into a table.  No, that is not a typo; I really meant to type 30,000 columns (up from 1,024).  You can also create 30,000 statistics (up from 2,000), and 1,000 indexes (up from 250).

This extra room comes at a cost, though.  For tables with sparse columns, the maximum row length drops from 8,060 bytes to 8,018 bytes.
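
For those who haven't seen the syntax yet, here is a minimal sketch of what a table with sparse columns might look like (the table and column names are made up, and I'm going by the CTP documentation, so treat the details as subject to change):

-- hypothetical table: most of the "optional" attributes will be NULL
-- for most rows, which is the scenario sparse columns are aimed at
CREATE TABLE dbo.ProductAttributes
(
    ProductID   int           NOT NULL PRIMARY KEY,
    ProductName nvarchar(64)  NOT NULL,
    -- sparse columns take no space when NULL, a few extra bytes when populated:
    Color       varchar(16)   SPARSE NULL,
    WeightGrams int           SPARSE NULL,
    ExpiryDate  datetime      SPARSE NULL
);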

I question whether there is a practical application for this in the real world.  There probably is, but I must be in the wrong market segment to understand what it could possibly be.

In any case, I'm sure this news makes someone out there happy.

By: Aaron Bertrand

I am a passionate technologist with industry experience dating back to Classic ASP and SQL Server 6.5. I am a long-time Microsoft MVP, write at Simple Talk, SQLPerformance, and MSSQLTips, and have had the honor of speaking at more conferences than I can remember. In non-tech life, I am a husband, a father of two, a huge hockey and football fan, and my pronouns are he/him.

8 Responses

  1. Raj says:

    Hi Aaron,
    /*
    This room comes at a cost, though.  For tables with sparse columns, the maximum row length drops from 8,060 bytes to 8,018 bytes.
    */
    I guess it's not true. The total data space occupied by sparse columns should be less than 8,018 bytes. However, the table's row can be greater than 8,018 but less than 8,060 bytes, as usual. The script provided below will show the same.
    -- drop the test table if it already exists, then recreate it
    IF OBJECT_ID('dbo.sparse_col') IS NOT NULL
        DROP TABLE [dbo].[sparse_col]
    GO
    CREATE TABLE [dbo].[sparse_col](
    [dt] [datetime] NOT NULL,
    [value] [int] NULL,
    data char(520) NULL,
    sparse_data char(7500) SPARSE NULL
    )
    GO
    insert into [sparse_col] select GETDATE(), 0, 'char', 'sparse data'
    GO
    -- check the row sizes reported by the storage engine (current database)
    select * from sys.dm_db_index_physical_stats(DB_ID(), null, null, null, 'Detailed')
    where OBJECT_NAME(object_id) like '%sparse%'
    GO
    Regards,
    Raj

  2. Edward W. Stanley says:

    I think a lot of the changes in MS DBs are done to keep up with other DBs' feature sets, not because they're useful.  The above comment will probably keep me off
    their evangelical HR lists.
    I would have preferred larger row space.
    I hate the R.
    But 2K8 could be part of the effort to build DB features into OSes; the table columns would probably need to match the file 'aspects' options in Vista.
    If you look at the variables, there are quite a number of options, from what the favourite colour of the author was, to what the price of tea was when the file was saved.
    E

  3. Denis Gobo says:

    If you listen to
    SQL Down Under show 35 – Roger Doherty – SQL Server 2008 for Database Developers
    you will find out that it is all SharePoint's fault 🙂

  4. cinahcaM madA says:

    Get ready for SELECT * and XML manipulation…

  5. AaronBertrand says:

    But Paul, if you do store data there, are you never going to have to write a SQL statement that queries it?

  6. Paul Nielsen says:

    The intention of sparse columns is not for typical OLTP databases, but for applications that add thousands of user-defined (or occasional) columns – SharePoint docs, document management, etc. A sparse column should be empty for most rows – without having to store a null.
    Internally a sparse column is a lot like an entity-value pairs pattern, but handled by the storage engine instead of procs and functions. So it's not really a new feature for our apps, but rather a move of that capability deeper into the engine.
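    A minimal sketch of that idea (made-up names; the COLUMN_SET syntax below is how the CTP docs describe surfacing the populated sparse columns as XML, which is presumably what the earlier "XML manipulation" comment refers to):

    -- hypothetical "document" table: the user-defined attributes are sparse,
    -- and the column set returns whichever ones are populated as a single XML value
    CREATE TABLE dbo.Docs
    (
        DocID      int           NOT NULL PRIMARY KEY,
        Title      nvarchar(200) NOT NULL,
        Author     nvarchar(100) SPARSE NULL,
        ReviewDate datetime      SPARSE NULL,
        AllAttrs   xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
    );
    GO
    INSERT dbo.Docs (DocID, Title, Author) VALUES (1, N'Spec', N'Paul');
    GO
    -- SELECT * returns DocID, Title and AllAttrs; the sparse columns are
    -- folded into the XML, e.g. <Author>Paul</Author>
    SELECT * FROM dbo.Docs;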

  7. AaronBertrand says:

    I haven't seen anything explicit, but yes, it stands to reason that at least the INSERT limit would have to go up.  I don't think they would necessarily *have* to support naming all columns explicitly; they could support only a subset.  Though, unless there is a technical reason not to, they will likely cover that as well.  And since I could feasibly be performing a join between four tables that each have 30,000 columns, the limit should be much higher, of course…  🙂

  8. Denis Gobo says:

    I assume this went up too?
    Columns per SELECT statement 4,096
    Columns per INSERT statement 1,024
    How else would you insert?
    The only use I see is to store some monthly sales data transposed back to 2000 BC  🙂
    Imagine you need only half the columns of that table? Good luck with that SELECT statement