SQL Server v.Next (Denali) : Changes to system objects in CTP3

If you have access to both a CTP3 instance and a CTP1 instance, and the CTP3 instance can see the CTP1 instance, you can run all of these queries simply by creating a linked server on the CTP3 instance that references the CTP1 instance (and be sure to run in the context of master). I named this linked server "OlderCTP" so you will see that referenced in several of the queries in this post.
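
In case it helps, here is a minimal sketch of how that linked server can be created. The data source is a placeholder for wherever your CTP1 instance lives, and the login mapping assumes your current credentials are valid on both instances:

EXEC [master].dbo.sp_addlinkedserver 
    @server     = N'OlderCTP',        -- the name referenced throughout this post
    @srvproduct = N'',
    @provider   = N'SQLNCLI',         -- SQL Server Native Client OLE DB provider
    @datasrc    = N'SERVERNAME\CTP1'; -- placeholder: your CTP1 instance

-- map your login to the remote instance using your own credentials
EXEC [master].dbo.sp_addlinkedsrvlogin 
    @rmtsrvname = N'OlderCTP',
    @useself    = N'TRUE';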

New objects in CTP3 that weren't in CTP1

This is the simplest query in this post; it returns the list of schema_name/object_name combinations that exist in CTP3 but not in CTP1.

SELECT
    new_obj = QUOTENAME(SCHEMA_NAME(new.[schema_id]))
      + '.' + QUOTENAME(new.name),
    new.type_desc
  FROM 
    [master].sys.all_objects AS new
  LEFT OUTER JOIN
    OlderCTP.[master].sys.all_objects AS old
    ON  SCHEMA_NAME(old.[schema_id]) + '.' + old.name
      = SCHEMA_NAME(new.[schema_id]) + '.' + new.name
  WHERE old.name IS NULL
  ORDER BY new_obj;

Results:

I talked about a few of these objects in an earlier blog post (since some were also introduced to SQL Server 2008 SP2 and/or SQL Server 2008 R2 SP1). Some others have functionality that is pretty obvious by name alone (e.g. the new INFORMATION_SCHEMA view for Sequences). A few other notes:

  • sys.sp_db_increased_partitions is used to check the status of the increased partitions feature (which I talked about previously). The code is also written to turn the setting on and off, but turning it off will never work in Denali (an error is raised) – this functionality is only meant to work on previous versions. (See the sketch after this list for roughly how it is called.)
  • sys.fn_MSxe_read_event_stream looks like an interesting and maybe even tempting function to call within Management Studio, but it really is a stream that is not meant to be consumed as a singleton. Go ahead and run this query:
    SELECT * FROM sys.fn_MSxe_read_event_stream('system_health');

    When you're tired of waiting, click stop to understand a little bit about how this function works, or at least how it works inside the confines of a Management Studio query window. 🙂

  • If you've done any troubleshooting with Service Broker in the past, you may be surprised to see sys.dm_broker_forwarded_messages in this list, since it also existed in previous versions of SQL Server. It actually disappeared in CTP1 (and while this turned out to be intentional, I reported this as a bug on Connect), but it has now re-appeared in CTP3.
  • I noticed that they placed yet another new system table into the dbo schema (see the next section for details about a similar issue they just fixed).
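
For reference, here is roughly how sys.sp_db_increased_partitions is called. I'm going from the syntax documented for SQL Server 2008 SP2, so treat the parameter names (@dbname, @increased_partitions) as an assumption and double-check against the current documentation:

    -- check the current state of the setting for a database
    EXEC sys.sp_db_increased_partitions @dbname = N'YourDatabase';

    -- turning the setting off is what raises an error in Denali,
    -- since 15,000-partition support can no longer be disabled there
    EXEC sys.sp_db_increased_partitions 
        @dbname               = N'YourDatabase',
        @increased_partitions = 'OFF';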

Now, there are in fact two objects in this result set that I haven't shown above, because I'm going to treat them differently:

Objects that have changed schema between CTP1 and CTP3

Thanks to some persistence by fellow MVP Jonathan Kehayias, an unfortunate decision in CTP1 to store XEvent mapping information in the dbo schema has been reversed. So with this query…

SELECT
    old_obj = QUOTENAME(SCHEMA_NAME(old.[schema_id]))
      + '.' + QUOTENAME(old.name),
    chg = ' => ',
    new_obj = QUOTENAME(SCHEMA_NAME(new.[schema_id]))
      + '.' + QUOTENAME(new.name)
  FROM
    [master].sys.all_objects AS new
  INNER JOIN
    OlderCTP.[master].sys.all_objects AS old
    ON new.name = old.name
  WHERE old.[schema_id] <> new.[schema_id]
  ORDER BY new_obj;

…the following two tables show up (they are not catalog views, at least according to sys.all_objects.type_desc):

Many code samples already out on the web use the dbo. prefix, so if you run the following code (or have already written code against CTP1), you will now get errors in CTP3:

SELECT * FROM dbo.trace_xe_action_map;
GO
SELECT * FROM dbo.trace_xe_event_map;
Msg 208, Level 16, State 1
Invalid object name 'dbo.trace_xe_action_map'.
Msg 208, Level 16, State 1
Invalid object name 'dbo.trace_xe_event_map'.

So if you have any of this code, or you come across this error, you can fix it simply by changing the prefix on these objects from dbo to sys. If you have a ton of code already using the dbo prefix, you could create synonyms so that you don't have to change all of your code at once:

CREATE SYNONYM dbo.trace_xe_action_map FOR sys.trace_xe_action_map;
CREATE SYNONYM dbo.trace_xe_event_map  FOR sys.trace_xe_event_map;

Objects that existed in CTP1 but have been dropped as of CTP3

 SELECT
    old_obj = QUOTENAME(SCHEMA_NAME(old.[schema_id]))
      + '.' + QUOTENAME(old.name),
    old.type_desc
  FROM
    OlderCTP.[master].sys.all_objects AS old
  LEFT OUTER JOIN
    [master].sys.all_objects AS new
    ON  SCHEMA_NAME(old.[schema_id]) + '.' + old.name
      = SCHEMA_NAME(new.[schema_id]) + '.' + new.name
  WHERE
    new.name IS NULL
    -- filtering out objects that have changed schema
    AND old.name NOT LIKE 'trace_xe%'
  ORDER BY
    old_obj;

Results:

The stored procedure sp_dboption has been listed as a deprecated object going back to SQL Server 2005. sp_MSgetversion is an undocumented stored procedure, so it won't be documented as deprecated, nor will it show up in Upgrade Advisor or Best Practices Analyzer output – so if you're using that stored procedure today, it might be time to change that. I'm not sure about the hadr view that was dropped (its functionality is probably covered by all the new availability group DMVs), or why a new constraint was added to master..spt_values.
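
If you still have scripts that call sp_dboption, ALTER DATABASE is the documented replacement; and for version checks that rely on sp_MSgetversion, SERVERPROPERTY is one supported alternative. A quick sketch (the database name is just a placeholder):

-- instead of: EXEC sp_dboption N'YourDatabase', N'read only', N'true';
ALTER DATABASE [YourDatabase] SET READ_ONLY;

-- instead of the undocumented sp_MSgetversion:
SELECT [version] = CONVERT(nvarchar(128), SERVERPROPERTY('ProductVersion'));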

Columns that have been added between CTP1 and CTP3

To find columns that have changed between CTP1 and CTP3, we first need to eliminate all new and dropped objects, focusing only on those that exist in both CTP1 and CTP3. Then we can grab all of the columns for both the new and old versions of those objects, along with the type names from sys.types. We'll re-use this initial CTE structure (everything before the "-- re-use the above CTEs" comment) in the rest of the column-based queries in this post:

;WITH [common] AS
(
   SELECT 
     n = SCHEMA_NAME(old.[schema_id]) + '.' + old.name
   FROM
     OlderCTP.[master].sys.all_objects AS old
   INNER JOIN
     [master].sys.all_objects AS new
   ON 
      SCHEMA_NAME(old.[schema_id]) + '.' + old.name
    = SCHEMA_NAME(new.[schema_id]) + '.' + new.name
),
old_columns AS
(
   SELECT
     n = SCHEMA_NAME(o.[schema_id]) + '.' + o.name,
     [column] = c.name, c.system_type_id, 
     c.[precision], c.scale, c.max_length
   FROM
     OlderCTP.[master].sys.all_columns AS c
   INNER JOIN
     OlderCTP.[master].sys.all_objects AS o
     ON o.[object_id] = c.[object_id]
   INNER JOIN
     [common]
     ON SCHEMA_NAME(o.[schema_id]) + '.' + o.name = [common].n
),
new_columns AS
(
   SELECT 
     n = SCHEMA_NAME(o.[schema_id]) + '.' + o.name,
     [column] = c.name, c.system_type_id, 
     c.[precision], c.scale, c.max_length
   FROM 
     [master].sys.all_objects AS o
   INNER JOIN
     [master].sys.all_columns AS c
     ON o.[object_id] = c.[object_id]
   INNER JOIN
     [common] 
     ON SCHEMA_NAME(o.[schema_id]) + '.' + o.name = [common].n
),
[types] AS
(
   SELECT system_type_id, name
     FROM sys.types 
     -- sysname shares a system_type_id with nvarchar, so filter it out
     WHERE name <> N'sysname'
)
 
-- re-use the above CTEs
 
SELECT
   obj = n.n, n.[column], 
   [type] = t.name + CASE
       WHEN t.name LIKE '%char%' THEN
           '(' + RTRIM(n.max_length / CASE
           WHEN t.name LIKE 'n%' THEN 2 ELSE 1 END) + ')'
       WHEN t.name LIKE '%binary' THEN 
           '(' + RTRIM(n.max_length) + ')'
       ELSE '' END
       -- this CASE is not complete but I happened to know
       -- that there were no NUMERIC/DECIMAL/DATETIME2/TIME
       -- or other columns that require additional handling
   FROM 
     new_columns AS n
   LEFT OUTER JOIN
     old_columns AS o
     ON  o.n = n.n
     AND o.[column] = n.[column]
   INNER JOIN
     [types] AS t
     ON n.system_type_id = t.system_type_id
   WHERE 
     o.[column] IS NULL -- only columns that don't exist in CTP1
   ORDER BY 
     obj, n.[column];

Results:

Any of these new columns could cause the same breaking changes as I talked about previously, regarding DBCC LOGINFO; I recommend checking through your code to make sure you aren't making any assumptions about the output of these DMVs or functions (and if you haven't done this for CTP1 previously, it can't hurt to check out all the changed views I pointed out there, as well – though I haven't done an equivalent comparison between Denali and 2008 or 2005).
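
One way to catch this kind of drift early is to diff the column list your code expects against what the instance actually exposes. A minimal sketch, using sys.dm_os_sys_info only as an example target and a made-up "expected" list:

-- hypothetical check: flag columns added to (or removed from) a DMV
DECLARE @expected TABLE (name sysname PRIMARY KEY);

INSERT @expected (name)    -- the columns your existing code was written against
VALUES (N'cpu_count'), (N'hyperthread_ratio'), (N'scheduler_count');

;WITH actual AS
(
   SELECT name 
     FROM sys.all_columns
     WHERE [object_id] = OBJECT_ID(N'sys.dm_os_sys_info')
)
SELECT issue = CASE WHEN e.name IS NULL THEN 'column added' ELSE 'column missing' END,
       name  = COALESCE(e.name, a.name)
  FROM @expected AS e
  FULL OUTER JOIN actual AS a
    ON a.name = e.name
  WHERE e.name IS NULL OR a.name IS NULL;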

Columns dropped from system objects between CTP1 and CTP3

Please take the CTEs from the above section and insert them in place of the "re-use" comment here in order to run this code:

-- re-use CTE from above 
 
SELECT
   obj = o.n, 
   o.[column],
   [type] = t.name + CASE
       WHEN t.name LIKE '%char%' THEN
           '(' + RTRIM(o.max_length / CASE
           WHEN t.name LIKE 'n%' THEN 2 ELSE 1 END) + ')'
       WHEN t.name LIKE '%binary' THEN 
           '(' + RTRIM(o.max_length) + ')'
       ELSE '' END
 FROM 
     old_columns AS o
   LEFT OUTER JOIN
     new_columns AS n
     ON  o.n = n.n
     AND o.[column] = n.[column]
   INNER JOIN 
     [types] AS t
     ON o.system_type_id = t.system_type_id
   WHERE 
     n.[column] IS NULL -- the column no longer exists in CTP3
   ORDER BY 
     obj, o.[column];

Results:

Like added columns, dropped columns can break existing scripts as well… for example, an INSERT (…existing structure based on today's metadata…) SELECT * will fail because of a mismatched column count, and an explicit reference to a column that no longer exists will obviously cause a runtime error. This may not be noticed until after an upgrade unless you routinely recompile all of your procedures and refresh all of your views as part of the upgrade.
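
As a quick illustration of why SELECT * is fragile here, compare these two versions of a hypothetical archival script (dbo.SessionArchive and the column choices are placeholders):

-- fragile: breaks (or silently changes meaning) as soon as the source gains or loses a column
INSERT dbo.SessionArchive
SELECT * 
  FROM sys.dm_exec_sessions;

-- sturdier: name the columns on both sides, so a dropped column surfaces
-- as an obvious "Invalid column name" error, and new columns are simply not picked up
INSERT dbo.SessionArchive (session_id, login_time, [host_name], [program_name])
SELECT session_id, login_time, [host_name], [program_name]
  FROM sys.dm_exec_sessions;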

Columns with changed data types between CTP1 and CTP3

Again, please put the CTEs from a couple sections back in for the "re-use" comment before running this:

-- re-use CTE from above 
 
SELECT 
   obj = o.n, o.[column], 
   [old type] = ot.name + CASE
       WHEN ot.name LIKE '%char%' THEN
           '(' + RTRIM(o.max_length / CASE
           WHEN ot.name LIKE 'n%' THEN 2 ELSE 1 END) + ')'
       WHEN ot.name LIKE '%binary' THEN 
           '(' + RTRIM(o.max_length) + ')'
       ELSE '' END,
   chg = ' => ',
   [new type] = nt.name + CASE
       WHEN nt.name LIKE '%char%' THEN
           '(' + RTRIM(n.max_length / CASE
           WHEN nt.name LIKE 'n%' THEN 2 ELSE 1 END) + ')'
       WHEN nt.name LIKE '%binary' THEN 
           '(' + RTRIM(n.max_length) + ')'
       ELSE '' END
 FROM 
     old_columns AS o 
   INNER JOIN 
     new_columns AS n
     ON  o.n = n.n
     AND o.[column] = n.[column]
   INNER JOIN
     [types] AS ot 
     ON o.system_type_id = ot.system_type_id
   INNER JOIN 
     [types] AS nt 
     ON n.system_type_id = nt.system_type_id
   WHERE 
     o.system_type_id <> n.system_type_id 
     OR 
     (
       n.system_type_id = o.system_type_id 
       AND 
       (
         n.[precision] <> o.[precision]
         OR n.scale <> o.scale 
         OR n.max_length <> o.max_length
       )
     )
   ORDER BY 
     obj, o.[column];

Results:

And finally, yes, changes to data types can break existing scripts… if you create a #temp table, @table variable, or archive table based on today's metadata to store DMV data, for example, then upon upgrading to Denali the data in the DMV may no longer fit. The changes to database_id are very curious – one might speculate that they are gearing up for changes to tempdb or to the overall system, perhaps for Azure, where we could have more than 32,762 user databases on an instance (32,767 – (4 system databases + the resource database))? The change to sys.dm_exec_requests was likely due, at least in part, to Maciej's bug report: Connect #272296. I'll admit I'm a little baffled by the change to sys.dm_xe_packages – why do we have a GUID column that uses nvarchar(60)? I'm sure there's a reason, but at first glance it looks quite strange indeed.
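
To make that concrete, here is a minimal, hypothetical sketch: a table variable sized against the old metadata will start throwing arithmetic overflow errors as soon as the widened source column returns values outside the old range:

-- hypothetical: database_id sized per the old (narrower) definition
DECLARE @dbs TABLE 
(
    database_id smallint,   -- assumption for illustration; suppose the source is now int
    name        sysname
);

-- works only while every value still fits in a smallint;
-- otherwise you get Msg 8115 (arithmetic overflow) on insert
INSERT @dbs (database_id, name)
SELECT database_id, name
  FROM sys.databases;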

Big thanks to Jens Gyldenkærne Clausen (@jensgc) for helping out with a copy of the CTP1 metadata.

Aaron Bertrand

I am a passionate technologist with industry experience dating back to Classic ASP and SQL Server 6.5. I am a long-time Microsoft MVP, write at SQLPerformance and MSSQLTips, and have had the honor of speaking at more conferences than I can remember. In non-tech life, I am a father of two, a huge hockey and football fan, and my pronouns are he/him. If I've helped you out, consider thanking me with a coffee. :-)

1 Response

  1. Saravana Kumar says:

    Hi Aaron,
    We are migrating from SQL Server 2008 R2 to SQL Server 2012. In our application we have a set of databases that are created through scripts, using the OSQL utility. After the tables are created, we run EXEC sp_tables to get the list of tables, views, etc. in order to grant permissions. What I noticed in SQL Server 2012 is that the tables trace_xe_action_map and trace_xe_event_map are added to our databases by default, and we are not able to assign permissions on these tables. Can you kindly clarify why trace_xe_action_map and trace_xe_event_map are added to a database created in SQL Server 2012? It would also be great if you could explain the significance of these two tables.