One of our teams has been periodically running an aggregate query against a half dozen tables, of varying sizes, across four databases. One of the tables will not stay permanently; it is a small heap (300K rows, 10MB) with a LOB column (nvarchar(max)) being used temporarily to filter SKUs.
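For context, a simplified sketch of the shape of the query (the table and column names here are hypothetical stand-ins; the real query touches more tables across four databases):

```sql
-- Hypothetical shape: aggregate across databases, filtered by
-- joining to the temporary heap with the nvarchar(max) SKU column
SELECT p.SKU, SUM(s.Quantity) AS TotalQuantity
FROM Sales.dbo.OrderDetails AS s
INNER JOIN Staging.dbo.SKUFilter AS f   -- the temporary heap
    ON s.SKU = f.SKU
INNER JOIN Products.dbo.Products AS p
    ON s.SKU = p.SKU
GROUP BY p.SKU;
```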
Every once in a while, the query causes this obnoxious stack dump:
Process ID: 22984
Description: Should never happen
Msg 3624, Level 20, State 1, Line 5
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a Hotfix from Technical Support.
Msg 596, Level 21, State 1, Line 0
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
This is not good.
The message suggests two likely culprits, a software bug or data corruption, and, after CHECKDB shrugged at all relevant databases, I ruled out corruption. Plus, the issue is intermittent; if there were exception-causing corruption on a page that gets read by a query, the exception should happen every time the query runs (and should get noticed by logical checks).
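If you want to rule corruption out yourself, a basic integrity check looks like this (substitute your own database name):

```sql
-- Full logical and physical integrity check;
-- NO_INFOMSGS suppresses the informational chatter
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;
```

No output here means no corruption was detected.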
So, let's roll up our sleeves.
While the errorlog has some info about the dump, and the .mdmp file has all of the details, all you really need is SQLDump0nnn.txt. For deep analysis, you will probably want to load the .mdmp file into WinDbg. But WinDbg is tedious. Microsoft's Arvind Shyamsundar has been tirelessly maintaining SQLCallStackResolver, as well as PowerShell scripts to download the right symbol files (the most frustrating part of using WinDbg, IMHO).
The .mdmp and companion files are found in SQL Server's errorlog directory, which you can parse out of:
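One way to find that directory is to query the path of the current errorlog file; the dump files are written alongside it:

```sql
-- Returns the full path to the current ERRORLOG file;
-- the SQLDump*.txt and .mdmp files live in the same directory
SELECT SERVERPROPERTY(N'ErrorLogFileName');
```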
Boring, Aaron. To the dump, already!
I could have just forwarded everything to Microsoft support, and I still plan to do that properly this week, since it does need to be investigated by them to get an actual fix into the product. But it's definitely fun to spelunk on your own in the meantime. I downloaded the symbols for the relevant build of SQL Server 2017, and pasted the call stack from the text file into the application.
On the left is part of the raw dump, and on the right (well, unless your screen is small) is the symbolized call stack, matched line-for-line, with relevant highlighting courtesy me:
(And remember, the call stack is upside down, meaning newest event listed first.)
This tells me it has something to do with a spool and copying a LOB value into temporary workspace. I looked into three similar issues (including this example); they are either not solved, solved in a build we already have, or seem unrelated.
It's intermittent, so running the query again a minute later might work, or it might not. But telling the team to just twiddle their thumbs and try again later is not a solution.
I ran the query and, sure enough, when it doesn't produce a stack dump, it has these spools:
One important thing to note is that the nvarchar(max) column didn't need to be defined that way, since the longest string there is 8 characters (and it joins to an existing nvarchar(8) column).
Can we reproduce on a different table?
Unsure if the problem was the heap, the LOB column, or the combination, and still confident this wasn't corruption, I created copies of the table, populated with the same data, to try to isolate the issue:
- AB_HeapMax – a heap matching the original table structure.
- AB_CIXMax – a clustered index (but problematic column still nvarchar(max)).
- AB_HeapN8 – a heap, but the problematic column is now nvarchar(8) (matching the join column).
- AB_CIXN8 – a clustered index (problematic column is now nvarchar(8)).
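The four variants can be sketched roughly like this (the column names are hypothetical stand-ins; the real table has additional columns):

```sql
-- Heap, original LOB column (matches the problem table)
CREATE TABLE dbo.AB_HeapMax (SKU nvarchar(max) NOT NULL);

-- Clustered, but the column is still a LOB; a LOB can't be
-- a clustering key, so cluster on a surrogate instead
CREATE TABLE dbo.AB_CIXMax
(
    ID  int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    SKU nvarchar(max) NOT NULL
);

-- Heap, column right-sized to match the join column
CREATE TABLE dbo.AB_HeapN8 (SKU nvarchar(8) NOT NULL);

-- Clustered, right-sized column (can now be the key itself)
CREATE TABLE dbo.AB_CIXN8 (SKU nvarchar(8) NOT NULL PRIMARY KEY CLUSTERED);
```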
Then I ran the same query dozens of times, substituting each of these four table names in for the original heap. Immediate observations:
- AB_HeapMax still generated the stack dump (intermittently), about 20% of the time, further suggesting corruption isn't the issue. Interestingly, when my copy of the table caused the assertion, running the query against the original table immediately also caused the assertion.
- AB_CIXMax generated the assertion far less often – about 5% of the time, suggesting the heap alone is not the problem, just that it makes it more likely. The spools remained.
- Neither of the tables without the max column generated a single exception, and in those cases, the spools were gone.
Now, this can't possibly guarantee that the issue will never happen again. And while you might suggest the solution is as simple as changing the data type, I think the clustered index has additional benefits beyond making the issue less frequent. There are performance gains and, also, the smaller column can now be part of the formal key (useful for joins and sorting). Here is the breakdown of metrics from a typical run:
We see a minor gain in both cases when changing from a heap to a clustered index, but the really big win is when we move away from the heap and use the right data type. We eliminate the spools (and the writes involved with those), halve the reads, and gain benefits from parallelism; combined, this takes the original 80 second runtime down to 8 seconds, a 10X improvement.
This table should never have been a heap, and the column shouldn't have been a LOB. Those two factors alone should of course not lead to assertions, stack dumps, and misplaced warnings about corruption, but that is a bug we can pursue with Microsoft independent of making changes to the original table. In the meantime, we can work around it (and improve performance) by changing the table, without the users having to change their query, and without having to wait for a fix.
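As a sketch of that workaround (table and column names are placeholders; verify no values exceed the new length, and do this in a maintenance window):

```sql
-- Confirm nothing is longer than 8 characters before shrinking
SELECT MAX(LEN(SKU)) FROM dbo.SKUFilter;

-- Right-size the LOB column...
ALTER TABLE dbo.SKUFilter ALTER COLUMN SKU nvarchar(8) NOT NULL;

-- ...and turn the heap into a clustered index on the join column
CREATE UNIQUE CLUSTERED INDEX CIX_SKUFilter ON dbo.SKUFilter (SKU);
```

Because the users' query text doesn't change, this is transparent to them.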