Aaron Bertrand

Q & A from our Parameter Sniffing webinar series

Over the past two Wednesdays, we hosted a two-part webinar series addressing parameter sensitivity issues:

  • Stored Procedures, Parameters, Problems…
    Kimberly L. Tripp and Aaron Bertrand
    January 24th
    Missed it? Register to watch it now!

  • Tackling Parameter Sniffing Using SentryOne
    Aaron Bertrand, Kimberly L. Tripp, and Andy Mallon
    January 31st
    Missed it? Watch it now!

Some questions came up during both webinars, and I thought I would compile them and answer them here (some of the answers came from Andy during the webinar).

In an issue we've run into recently, plans are getting dropped out of the cache very quickly. We're not doing anything you described (DBCC FREEPROCCACHE, etc.); could memory pressure also cause this to happen?

Yes, memory pressure could be a factor (see this post), and I know there are some investigations into potential problems with SQL Server's memory management in this regard too.
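
If you want to keep an eye on this yourself, one simple diagnostic sketch (nothing SentryOne-specific, just the memory clerk DMV) is to sample how much memory the plan cache stores are holding; unexplained drops between samples can corroborate memory pressure:

SELECT [type], SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE [type] IN (N'CACHESTORE_SQLCP',   -- ad hoc / prepared plans
                 N'CACHESTORE_OBJCP')   -- procedure / trigger / function plans
GROUP BY [type]
ORDER BY size_mb DESC;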

From an attendee: "Not a question, but a comment to the user asking about the many times his plan cache is emptied. We had that too and, indeed, it was memory pressure. We had the max server memory wrongly configured, that was fixed using the formula mentioned here, and then we had the procedure from this article running every 10 minutes (we have tons of dynamic SQL, only used one time)."
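
For reference, the two mitigations described there boil down to something like the following sketch (the exact max server memory value comes from the formula in the linked post, and flushing the plan stores on a schedule is a heavy-handed workaround you should test first):

-- cache only a small stub for single-use ad hoc statements
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- if single-use plans still bloat the cache, flush just the plan stores
-- (this is what a scheduled cleanup job would call, e.g. every 10 minutes)
DBCC FREESYSTEMCACHE (N'SQL Plans');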

What if you use OR in the where clause instead of AND, would the issue persist?

Typically if you use OR in this type of pattern, you will get all rows every time, unless every single parameter is populated with values that filter rows. This changes the semantics of the query from "all of these things have to be true" to "any one of these things can be true." Either way, the plan compiled for the first set of parameters will still be cached and persisted for future executions, whether your clauses use AND or OR. A contrived illustration follows below.
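
Compare the two forms with only one parameter populated (the table and column names here are hypothetical):

DECLARE @FirstName nvarchar(64) = NULL, @LastName nvarchar(64) = N'Tripp';

-- AND: every supplied filter must match; the NULL parameter is simply skipped
SELECT CustomerID FROM dbo.Customers
WHERE (@FirstName IS NULL OR FirstName = @FirstName)
  AND (@LastName  IS NULL OR LastName  = @LastName);

-- OR: any satisfied branch qualifies the row; because @FirstName is NULL,
-- the first branch is true for every row, so you get the whole table back
SELECT CustomerID FROM dbo.Customers
WHERE (@FirstName IS NULL OR FirstName = @FirstName)
   OR (@LastName  IS NULL OR LastName  = @LastName);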

Is that 1=1 flag a good approach? Isn't it better to only add the parameters that are provided and therefore avoid the ugly 1=1?

The 1 = 1 is virtually ignored by SQL Server, but it allows all conditional clauses to be added with AND so that you don't have to treat the *first* one differently. Here is the alternative:

DECLARE @IncludedWhereClauseYet bit = 0; -- have we appended WHERE yet?
DECLARE @sql nvarchar(max) = N'SELECT cols FROM dbo.Table';

IF @param1 IS NOT NULL
BEGIN
  IF @IncludedWhereClauseYet = 0
  BEGIN
    SET @sql += N' WHERE ';
    SET @IncludedWhereClauseYet = 1;
  END
  ELSE
  BEGIN
    SET @sql += N' AND ';
  END
  SET @sql += N' @param1 = @param1'; -- in real code this would compare a column to @param1
END

IF @param2 IS NOT NULL
BEGIN
  IF @IncludedWhereClauseYet = 0
  ...
END
...

The 1=1 lets you simplify by always prefixing any clause with AND. The above code becomes:

DECLARE @sql nvarchar(max) = N'SELECT cols FROM dbo.Table WHERE 1 = 1';

IF @param1 IS NOT NULL
BEGIN
  SET @sql += N' AND @param1 = @param1';
END

IF @param2 IS NOT NULL
BEGIN
  SET @sql += N' AND @param2 = @param2';
END

You could perhaps use a different initial clause to avoid all the conditionals, like WHERE PrimaryKey > 0 or WHERE PrimaryKey IS NOT NULL, and then every subsequent clause could start with AND. But 1 = 1, while ugly, is harmless, and IMHO is no less ugly than adding a *real* but meaningless clause, except that a *real* clause could affect the plan.

Remember that when you are constructing T-SQL code with T-SQL, you have two aspects of "ugly" to think about – sometimes you'll be troubleshooting the code above, and sometimes you'll be troubleshooting the query that comes out of it. Be careful about sacrificing one for the sake of the other.
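
However you build the string, the execution side stays the same: pass the original parameters through to sys.sp_executesql so the dynamic statement remains parameterized (the parameter types here are illustrative):

EXEC sys.sp_executesql
  @sql,
  N'@param1 int, @param2 nvarchar(64)',
  @param1 = @param1,
  @param2 = @param2;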

WHAT?! I've totally missed that … WITH RECOMPILE. I thought that emptied the plan, but it leaves it alone for just this execution… that's very important to know!

Just make sure you are aware of the downsides as well. See this great post by Paul White.
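
If it helps, here is a minimal sketch of the difference (procedure and table names are illustrative): WITH RECOMPILE at execution time builds a throwaway plan for that call and leaves the cached plan in place, while WITH RECOMPILE in the procedure definition prevents any plan from being cached at all:

-- per-execution: fresh plan for this call only, not cached; the existing cached plan is untouched
EXEC dbo.GetCustomers @LastName = N'Tripp' WITH RECOMPILE;

-- per-procedure: no plan is ever cached for this module
CREATE PROCEDURE dbo.GetCustomersAlways
  @LastName nvarchar(64)
WITH RECOMPILE
AS
BEGIN
  SELECT CustomerID FROM dbo.Customers WHERE LastName = @LastName;
END
GO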

Is OPTION (OPTIMIZE FOR (@parametername UNKNOWN)) no longer preferred in more recent SQL Server versions?

I don't think it's any better or worse in modern versions than when it was first introduced in SQL Server 2008. As far as I know, even with all the changes to the optimizer and cardinality estimator, that bit still behaves the same way.
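
For reference, the hint looks like this (the query and parameter are illustrative); it tells the optimizer to use average density rather than sniffing the actual value passed in:

SELECT CustomerID
FROM dbo.Customers
WHERE LastName = @LastName
OPTION (OPTIMIZE FOR (@LastName UNKNOWN));

-- or apply it to every parameter in the statement:
-- OPTION (OPTIMIZE FOR UNKNOWN);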

Is there any load on the server if I enable capturing Procedure stats and Query stats in SentryOne?

The Procedure & Query stats collection should be on by default. All data collection comes with a cost, but SQL Sentry is quite careful about how much cost is incurred by the collection.

The seek on RS was not using it as a residual predicate; it was seeking on something else I couldn't see.

Thanks, I'll revisit that example, and blog about the demos separately, making sure to include any relevant details that weren't obvious from the plan diagram alone.

Isn't it true that adding some of the columns needed as INCLUDEs doesn't actually make the index any more effective because the key lookup won't be eliminated? I'm thinking that the percentage shouldn't change unless you actually eliminate the key lookup.

Strictly speaking, yes, that is true. The original query was a particularly bad example, using SELECT * and an index missing a hopeless number of columns. The point I was trying to make is that the Index Analysis tab encourages you to both (a) improve the query and (b) make the index covering. The score is there to entice you to do either or both – if you change the query so you need fewer columns, the index comes closer to covering the query too. If you're going to create a new, separate, covering index, you also have the information about which columns are required to cover this specific query. Technically, you're right: adding one include column but still requiring a lookup for four others isn't going to make this specific query perform any better, and isn't going to make the index better, but it does indicate that you're getting closer. The hope is that you don't stop at just adding one include column and ignore the rest. We don't know when you're going to stop, so I don't know that there's a perfect solution – we certainly don't want to discourage users from making their indexes better suited for their queries.
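
To make the trade-off concrete, here is a hypothetical sketch: if the query references four non-key columns, the key lookup only disappears once all four are in the INCLUDE list; adding just one of them leaves the lookup in place.

-- only when every referenced non-key column is included does the index cover
-- the query and the key lookup go away (names are illustrative)
CREATE NONCLUSTERED INDEX IX_Customers_LastName
ON dbo.Customers (LastName)
INCLUDE (FirstName, Email, City, CreateDate);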

Why are we seeing queries using the first name parameter and last name parameter summarized under a statement using only a last name parameter?

UPDATE: This is intentional. The grouping under Show Totals groups the same procedure called with all of the various parameter combinations. So you can use it first to determine which parameters tend to cause the worst performance, then within that, drill in to whether or not there is data skew. A parameter that leads to a search against an unindexed column, for example, will probably bubble up to the top pretty reliably, and you can see that in combination with other parameters that are passed and also compared against all the calls where that parameter wasn't passed.

Having said all that, we'll look to fine-tune this grouping behavior as we finish up changes currently in-flight for the Top SQL screen.

Is there any documentation on how to use a plan guide? I currently would have no idea how to do that.

This is something else I've been meaning to blog about, but Microsoft has some topics here in the meantime (and check out all the related links in the sidebar).
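
In the meantime, a minimal sketch of an object plan guide looks like this (all names are illustrative, and the @stmt text has to match the statement inside the module exactly):

EXEC sys.sp_create_plan_guide
  @name            = N'PG_GetCustomers_Recompile',
  @stmt            = N'SELECT CustomerID FROM dbo.Customers WHERE LastName = @LastName;',
  @type            = N'OBJECT',
  @module_or_batch = N'dbo.GetCustomers',
  @params          = NULL,
  @hints           = N'OPTION (RECOMPILE)';

-- review or remove existing guides:
-- SELECT * FROM sys.plan_guides;
-- EXEC sys.sp_control_plan_guide N'DROP', N'PG_GetCustomers_Recompile';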

Do I need to enable something to get the Query History chart?

No, this should be enabled on all modern versions of the SentryOne client application. If you're not seeing it, try Tools > Reset Layout; if that doesn't work, contact support@sentryone.com.

Are there cases where forcing the last known good plan using Query Store, when a plan regression is found, is a bad idea? Will that tend to hide issues that are better addressed by changing the statement, like you've shown?

Forcing a plan is often a last-resort kind of option, and I tend to reserve it for cases where you really, really, really can't fix the statement (or change the index). Forcing a plan can still lead to the wrong behavior, because it's still a human making that choice, and you could be making it based on bad info. The regression might be because of a plan change, but if you're judging it a regression because the runtime was longer, have you investigated other possible reasons? For example, let's say the system was rebooted or there was a failover, and the query got a new plan because the old one was evicted, and maybe stats have also changed in the meantime, but now the query runs longer not because the plan is worse but rather because the buffers were empty. So yeah, I certainly would not suggest forcing a plan on every regression.
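
If you do decide to force manually with Query Store, the moving parts look roughly like this (the ids and the text filter are illustrative):

-- find the query and its candidate plans
SELECT q.query_id, p.plan_id, p.is_forced_plan, qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p
  ON p.query_id = q.query_id
JOIN sys.query_store_query_text AS qt
  ON qt.query_text_id = q.query_text_id
WHERE qt.query_sql_text LIKE N'%LastName%';

-- force a specific plan, and unforce it once the underlying issue is fixed
EXEC sys.sp_query_store_force_plan   @query_id = 42, @plan_id = 113;
EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 113;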

SentryOne is not capturing data or parameters all the time, so I do not have enough information. How do I make sure SentryOne captures parameters and execution plans all the time?

You really can't because this is all down to how your queries are executed, how we capture them, and how quickly they run. Often your queries do not run long enough to get captured fully, and we must rely on SQL Server's aggregated query/procedure stats views, which do not collect parameter information. You can change collection settings for Top SQL Source to capture more and on a more frequent interval, but you need to balance the amount of data you collect with how much additional information it buys you.

Can I query for the information so I can automate and generate reports?

We don't have anything out of the box for you to do this, but let me take it back to the team and see what kind of options we can come up with. One thing I toyed with for this webinar was to build an Advisory Condition to catch the kinds of regressions we're looking for, but time became a factor.

How do we decide when to use OPTION (RECOMPILE), as every day we get different plans for different parameters?

I would say start with the queries that fluctuate most with parameter sensitivity. If I have a query that sometimes takes 2 seconds but sometimes takes 30, and another that ranges from 4 seconds to 6 seconds, I'm going to focus on the first one.
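
One rough way to surface those high-variance candidates from the aggregated stats is to sort by the swing between minimum and maximum duration (the thresholds here are arbitrary; tune them for your workload):

SELECT TOP (25)
  qs.execution_count,
  qs.min_elapsed_time / 1000.0 AS min_ms,
  qs.max_elapsed_time / 1000.0 AS max_ms,
  (qs.max_elapsed_time - qs.min_elapsed_time) / 1000.0 AS swing_ms,
  SUBSTRING(st.[text], qs.statement_start_offset / 2 + 1,
    (CASE qs.statement_end_offset
       WHEN -1 THEN DATALENGTH(st.[text])
       ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > 10       -- ignore one-off executions
ORDER BY swing_ms DESC;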

Which one is better to use, OPTION (RECOMPILE) or QUERYTRACEON, in the case of parameter sniffing?

I prefer OPTION (RECOMPILE) for two reasons. One, it's self-documenting; nobody reading the code will wonder what it's doing, but not everyone reading the code will have memorized TF numbers like 4136. Two, it doesn't require elevated permissions – try using QUERYTRACEON as a peon.
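
Side by side, the two options look like this on a statement (the query itself is illustrative); the trace flag version requires sysadmin-level permissions, while the RECOMPILE version does not:

-- self-documenting, no special permissions required
SELECT CustomerID FROM dbo.Customers
WHERE LastName = @LastName
OPTION (RECOMPILE);

-- disables parameter sniffing for this statement via trace flag 4136,
-- but QUERYTRACEON requires sysadmin membership
SELECT CustomerID FROM dbo.Customers
WHERE LastName = @LastName
OPTION (QUERYTRACEON 4136);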

Is it possible to alert or report on procedures taking longer than usual? I'm most interested in high-count procedures.

Absolutely, you could use an Advisory Condition, but it can get a little complicated because – for procedures that even sometimes run below the collection threshold – you would need to compare snapshots of the procedure stats DMV. I've added a reminder to blog about this one too, as it is something I've helped customers implement in the past.
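
The core of that approach is just differencing snapshots of sys.dm_exec_procedure_stats over time; a very rough sketch (the snapshot table, schedule, and threshold are all illustrative):

-- capture a snapshot on a schedule (e.g. every 5 minutes) into a permanent table
SELECT SYSUTCDATETIME() AS capture_time,
       ps.database_id, ps.object_id,
       ps.execution_count, ps.total_elapsed_time
INTO dbo.ProcStatsSnapshot          -- use INSERT INTO on subsequent runs
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID();

-- then compare consecutive snapshots: delta executions and delta elapsed time
-- per procedure, and alert when (delta elapsed / delta executions) exceeds,
-- say, 2x that procedure's historical average for high-count procedures.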

Microsoft is making Automatic Tuning the default for Azure SQL Database, including automatic plan correction. Does that seem like a good idea to you?

I'll reserve judgment until I (or some customers) have played around with it. Deciding how to tune is challenging enough for mortals; mortals writing software to tune for you seems at least as challenging, if not more so. When Andy saw this question, he mentioned to me that it reminded him of SQL Server 2000 – the marketing pitch then was that it was so self-tuning we wouldn't need DBAs anymore. That claim has not aged well.

Being able to select the two dots on the Query History chart and compare would be nice.

I agree. Stay tuned.