Connect Digest : 2009-09-04

I am heading to my hometown for a week; we rented a cottage on Lake Nipissing, and are going to relax.  No, really: no howling beagles, no computers, and no schedule.  Sounds like a week of bliss to me.

So, as you might imagine, there won't be a digest next week.  I think I have some interesting items to point out to you this week to hold you over.

Users won't stop shrinking, so take the option away

Even with boatloads of advice to the contrary, a LOT of people are still shrinking their databases as a whole (as opposed to per file), and are even doing so as part of a regularly scheduled maintenance plan.  While Microsoft has advised us that settings like AutoShrink will be removed in some future version, personally I just don't think that's enough.  People don't need to be tempted in so many ways to shoot themselves in the foot and destroy their database performance, never mind being able to set the system up so it shoots their feet off automatically.

#486614 : Deprecate all "shrink database" options
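In the meantime, it's worth checking whether any of your own databases still have this option enabled.  A quick sketch (assuming SQL Server 2005 or later, where sys.databases exposes the is_auto_shrink_on column; the database name is illustrative):

```sql
-- List databases that still have AUTO_SHRINK enabled:
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Turn the option off for an offending database:
ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;
```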

Make it harder for users to grow 600 GB log files

Steve Jones formalized a thought I alluded to in an earlier blog post: make the default recovery model simple.  The reason this would be better for users is that if you choose full by mistake, you don't realize it until your transaction log has consumed your whole disk, and then you're back into the need to shrink files again.  If you choose simple by mistake, you will realize your error the very first time you attempt to take a log backup, and at that point corrective action is simple (no pun intended) and immediate.  Do I think that simple recovery model should be how most databases end up?  Absolutely not.  But until other options are made mandatory (e.g. setting up a backup / log backup plan), I think default = full is more dangerous than default = simple.  And as I mentioned, correcting for a default of simple when you really wanted full is much more obvious and immediate.
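To illustrate why the mistake surfaces so much faster in one direction than the other: a log backup against a database in simple recovery fails immediately with error 4208, while a forgotten full-recovery database just quietly grows its log until the disk fills.  A sketch (database and backup path are illustrative):

```sql
-- Check the current recovery model:
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'YourDatabase';

-- In SIMPLE recovery, this fails right away (error 4208:
-- "The statement BACKUP LOG is not allowed while the recovery model is SIMPLE."):
BACKUP LOG YourDatabase TO DISK = N'C:\Backups\YourDatabase.trn';

-- Corrective action is immediate:
ALTER DATABASE YourDatabase SET RECOVERY FULL;
```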

Make the SQL Server Agent / Jobs node more manageable
I probably have 50+ Connect items on how to make the SQL Server Agent components in SSMS easier to manage.  But after setting up a Utility Control Point (see my most recent "under the hood" post), it became obvious how cluttered and ugly the Jobs node in Object Explorer can become.  And when you look at the screen shot, keep in mind: this is before I've created a single user-defined job.  Blecch.  I'd like to see, at the very least, folders for each category with at least one job (this way we could control where our jobs appear simply by changing their category), with uncategorized jobs appearing at the root.  But user-defined hierarchies (even if only one level deep) would be much better.
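Until SSMS groups jobs itself, you can at least see the category layout such folders would mirror by querying the msdb catalog.  A sketch (the job and category names are illustrative):

```sql
-- Jobs grouped by category; uncategorized jobs fall under
-- the built-in [Uncategorized (Local)] category:
SELECT c.name AS category, j.name AS job_name
FROM msdb.dbo.sysjobs AS j
INNER JOIN msdb.dbo.syscategories AS c
    ON j.category_id = c.category_id
ORDER BY c.name, j.name;

-- Move a job into a category (which the proposed folders could key on):
EXEC msdb.dbo.sp_update_job
    @job_name = N'My nightly job',
    @category_name = N'Database Maintenance';
```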

Prevent slipstream from failing silently
Earlier this week, I created a slipstream install: SQL Server 2008 with SP1 (2531) and SP1 CU3 (2723).  Little did I know that, when I grabbed the SP1 binaries off of our file server, I grabbed the pre-release CTP of SP1 (build 2520 vs. 2531).  Oops.
The slipstream process itself is handy but pretty manual, and a great deal of thanks is owed to Peter Saddow of Microsoft for his many informative blog posts on the subject.  But, as a manual process, it is certainly error-prone.  In my case, running a slipstreamed install yielded these results:
  1. Build 1600 (RTM) was installed.  Neither 2520 nor 2723 made it to the server.
  2. The log files did not mention any problem, AFAICT.
Both of these outcomes are undesirable, in my opinion.  If setup was not able to patch to the highest CU build I had attempted, the process should fail completely.  Failing that, it should patch to the highest build possible (in this case 2520), and then make it *VERY* obvious (I'm talking sirens, flashing lights and dancing girls here) that it did not complete the full process.
Only when I attempted to run the installers from the PCU and CU folders did I realize what had happened, since the log files were of no help.  I installed what I thought was SP1, and then when I tried to install CU3, it told me the minimum build # was 2531, and that I only had 2520.  Face palm time.
This sequence of events should not be allowed.  Since the CTP of SP1 was public, I doubt I am the first person who tried this, and I doubt I will be the last.  But the more dangerous problem is that a slipstream install can *look* like it was successful, but fail silently.  I noticed immediately that the server was still at 1600.  But what if I hadn't paid as much attention to these details?  If I were relying on an SP or CU to protect my server from an exploit, say via some security patch, I would run this setup and walk away from the server, thinking I am protected when I am not.
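Until setup does the checking for you, verifying the build number yourself right after a slipstreamed install is cheap insurance.  A sketch (the build-to-release mapping matches the builds discussed above):

```sql
-- For SQL Server 2008: 10.0.1600 = RTM, 10.0.2531 = SP1, 10.0.2723 = SP1 CU3
SELECT SERVERPROPERTY('ProductVersion') AS build,
       SERVERPROPERTY('ProductLevel')   AS level;  -- e.g. RTM, SP1
```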

Some polishing in R2 still required
I know it's pre-release material, but unhandled exceptions are never cool, and bugs I've submitted in the past were not always fixed in time for the initial release.  So I'd like to help keep these out of the product this time around.  When using the Utility Explorer for the first time, I came across a pretty serious and easily reproducible unhandled exception, as well as an instance of unintuitive behavior.
The unhandled exception one is really troubling.  There are several other unresolved Connect items where SSMS raises unhandled exceptions, and it really makes me wonder why TRY/CATCH use isn't more widespread.  I guess SSMS gets the same kind of error handling priority as SQL Server itself.  =)  Sorry, I couldn't resist.

Aaron Bertrand

I am a passionate technologist with industry experience dating back to Classic ASP and SQL Server 6.5. I am a long-time Microsoft MVP, write at Simple Talk, SQLPerformance, and MSSQLTips, and have had the honor of speaking at more conferences than I can remember. In non-tech life, I am a father of two, a huge hockey and football fan, and my pronouns are he/him. If I've helped you out, consider thanking me with a coffee. :-)

1 Response

  1. AllenMWhite says:

    But your beagles howl so nicely!  Have a great weekend.