I realize that there are some cases where, in an emergency, you need to shrink a log file because it has taken over the drive. In SQL Server 2005 and earlier, you could get out of the jam quickly by issuing BACKUP LOG WITH TRUNCATE_ONLY (followed by DBCC SHRINKFILE). In SQL Server 2008, this "feature" of BACKUP LOG is no longer available, and with good reason; however, there are other – shall we say, "clever" – ways to achieve the same result. But too many people pass this advice around casually, which gives readers the impression that it's okay to change their recovery model from full to simple and then back to full just so they can shrink their log file(s). This article, for example, is getting "high fives" from other sites, even though it is giving terrible advice (IMHO) without any context whatsoever:
How about explaining WHY the command has been deprecated? Too many people were completely shooting themselves in the foot by using this command and destroying their backup chain, possibly not even knowing they had done so, just so they could reclaim a few GB of disk space (in almost all cases, temporarily!).
How about explaining WHY they might need to use this code? SQL Server is very good at managing its own log file(s). If your log file expands and fills up your disk, you are doing at least one thing wrong:
- you are in the full recovery model, but you are not taking log backups often enough, or at all (see the sketch after this list);
- you are in the full recovery model, but should be in simple, because you do not need point-in-time recovery (usually true for Dev/Test/QA systems, but not production);
- you had a very bizarre and atypical log-expanding transaction; or,
- you did not plan correctly for your log and recovery needs.
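To make the first two bullets concrete, here is a minimal sketch of each fix; the database name, backup path, and frequency are hypothetical, so adjust them for your environment:

    -- full recovery model: take log backups frequently (typically via a scheduled job)
    BACKUP LOG MyDB TO DISK = N'X:\SQLBackups\MyDB_log.trn';

    -- or, if you truly do not need point-in-time recovery (e.g. Dev/Test/QA):
    ALTER DATABASE MyDB SET RECOVERY SIMPLE;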
How about showing them how to do disaster recovery right in the first place? Picking the right recovery model, determining tolerance for data loss and downtime, making sure to not just back up to the local disk and leave it at that, sizing the log files appropriately, testing recovery plans, etc. There is a lot more to it than just creating a database, and then reacting to a full disk by shrinking the log file.
I strongly recommend that before taking code samples like this and running off and "fixing" all of your disk space issues, you read these articles from like-minded SQL Server professionals:
- Tibor Karaszi : Why you want to be restrictive with shrink of database files
- Mike Walsh : Don’t Touch that Shrink Database Button!
- Mike Walsh : Shrinking is a Popular Topic
- Me : Why is disaster recovery an afterthought?
- Ted Krueger : Do not truncate your ldf files!
- Brent Ozar : Stop Shrinking Your Database Files. Seriously. Now.
I'd also recommend spending some time looking through the Recovery Models Overview in Books Online.
I have used shrink database. My database's allocated size is 80 GB. The process was taking 3 hours and still had not completed, so I had to stop it before it finished. I stopped it and checked the database… it is working properly…
I was wondering whether you could help me. I have a SQL 2008 database where the log file is 80 GB. The disaster recovery plan is to restore the previous night's full backup, not to restore to a required point in time (the client's choice, not mine). For some reason the database was set to the full recovery model and not simple; am I right in saying that, given the client's requirements, they wouldn't need the recovery model set to full?
Anyway, the client has run out of disk space and we need to shrink the log file; however, after running the shrinkdatabase command, the log file is still the same size (note: I did set the recovery model to simple before trying).
Greatly appreciate your help with this. Thanks
Aaron,
Been a while, but I wanted to make sure you were aware I did "publish" your comment. I wasn't holding it due to your feedback. I just hadn't approved it yet. I actually linked back to your article in my updated post.
Take care.
Here's a question about log backups interacting with whole disk cloning.
We have a method available to rapidly clone the whole disk structure of the sql server in an application consistent state.
Is there a simple way to persuade SQL that this event was a SQL full backup?
Reason being that there is a need for transaction log backups and also a recurring need to temporarily switch to simple recovery mode.
What I did to correct my bad log file situation was to create a new database and then restore the problem one into it.
I just changed the log growth settings on the new database so it would not become a problem ever again.
I have the logs growing by MB instead of percent.
So is it realistic, if you have a very bad log file situation that you are trying to correct (minimum size stuck at 30 GB, which it cannot shrink below, plus a second log file resulting from an attempt to migrate and remove the 30 GB log file – which doesn't work either, by the way), and you really do just want to start over with just the mdf and a reasonably defined ldf, to go through some series of steps that will not lead to the db crashing or a long recovery process like the one described?
Thank you for the insight.
I have removed the shrink/reorganize/reindex tasks out of the maintenance plan and I have added the Ola reorganize/reindex jobs instead.
With what frequency should I run the Ola jobs? Is it safe to run them every night before the full backup job takes place?
(1) change the auto-growth from % to a fixed MB size (I can't tell you the optimal value, as it may vary for different databases and/or different disks); a sketch follows this list. % is bad because each growth takes longer and longer as the database gets bigger. Ideally you plan for capacity, and the autogrow setting is just a safety net. If you find that you are growing a lot right now, then perhaps it is time to use a maintenance window to grow the file substantially, to allow for future growth. That way you control when a growth event occurs, because when an autogrow happens in the middle of a transaction, everyone waits.
(2) take the shrink nonsense out of your maintenance plan. A shrink operation should be a very rare event in response to something out of the ordinary or catastrophic. (That was the whole point of this post.)
(3) take the reorganize / rebuild index options out of your maintenance plan, and replace them with smarter logic. You can use either Ola's or Michelle's solution; I've used both and they are excellent:
http://ola.hallengren.com/
http://sqlfool.com/2010/04/index-defrag-script-v4-0/
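For (1), the change is something along these lines; the database name, logical file name, and sizes are hypothetical, so pick values appropriate for your workload:

    -- grow in fixed increments instead of a percentage
    ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 256MB);

    -- and, during a maintenance window, pre-grow the file so autogrow stays a safety net
    ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB);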
Great article and a real eye opener. Can you help me out with one thing?
I have several DB's on SQL 2005 and one of them concerns me.
What can I do to 'normalize' this DB so that its log file stops blowing up every night?
The log file for this DB keeps bulging every night by several gigs. Autogrowth for all DB's is on for 10% and unrestricted. None of the other DB's has a log file larger than a few megs.
This DB is set to the simple recovery model. I do nightly full backups and hourly differentials.
I have a maintenance plan that runs every night. Here are the actions it takes in the order at which they occur: check db integrity, shrink db, reorganize index, rebuild index, update statistics, clean up history, full backup and maintenance cleanup.
Skowronek, I did not "blast" your article. I merely pointed out the important parts that you left out of your initial draft (and the silly notion that log files are irrelevant). I would say in a development environment, where there is schema churn and there probably isn't a rigorous backup process in place (and probably not a rigorous source control process in place, let's be honest), log files are QUITE relevant.
But I can't review what I wrote anyway, because you chose not to publish my comment – which kind of defeats the purpose of having a comment form in the first place. You are fully within your rights to not post comments from your readers, but I suggest removing the comment form (and just having them send you e-mail) if you're going to pick and choose which comments you're actually going to let stand.
Aaron,
This is apparently a very touchy subject for you. Thanks for the feedback and recommendations on my article. Though I disagree with the intensity with which you deter the use of this method for clearing space (especially in certain development environment scenarios), I agree with alluding to the scenarios and effects of clearing the logs.
Take care.
Now for my shameless plug of the article (modified) you blasted:
http://www.skonet.com/Articles_Archive/How_To_Shrink_or_Clear_Out_Large_SQL_Server_Log_Files.aspx
It sounds like a challenge of provisioning the database initially. It's very easy to set up a new database for 'Full' recovery, but nothing forces you to circle back around and back up the logs on the database (all these know-nothing application developers trying to build a quick solution 😉).
And SQL Server is the best I've worked with for ease of provisioning, but perhaps what would be ideal is the ability to specify the recovery model and backup routine at the same time (i.e. Full Recovery – Daily Backup, Full Recovery – Hourly Backup, Simple Recovery – Daily Backup, etc.)
Deleting a log file = asking for a damaged and unusable database. You shouldn't be messing with the log file, especially not renaming and deleting.
I suppose you had no backups either. If not, get some set up.
Great, Venera, but I would definitely count myself lucky were I in your shoes. Please get in the habit of having proper disaster recovery procedures in place, and become familiar with SQL Server's files so that you don't accidentally rename / delete log files in the future. I think it's worth repeating that you were very lucky this process worked for you this time; next time, you may not be so lucky.
an update:
Gila's advice helped. I have now fully restored my database! You made my day!
Thanks a million!
Thank you both for responding so fast. Unfortunately, what Aaron suggested would only work if I had had just one LOG file… I will try what Gila recommended and see if it works…
Again, thank you for your support..
Aaron, if you don't mind me plugging my own blog here….
Venera, if you don't have a backup (and if not, why not), have a look at this: http://sqlinthewild.co.za/index.php/2009/06/09/deleting-the-transaction-log/
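The short version, if the mdf is intact and the database was shut down cleanly, is to attach the data file and let SQL Server build a new log. A rough sketch (names and paths hypothetical, and please read the post first for the caveats):

    CREATE DATABASE MyDB
        ON (FILENAME = N'C:\Data\MyDB.mdf')
        FOR ATTACH_REBUILD_LOG;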
Venera, let's play this like a "choose your own adventure" book:
step 1: try using sp_attach_single_file_db (a sketch follows these steps): http://msdn.microsoft.com/en-us/library/ms174385.aspx
if step 1 fails, go to step 2.
if step 1 succeeds, go to step 4.
step 2: restore from the most recent backup.
if step 2 fails (or you don't have a backup), go to step 3.
if step 2 succeeds, go to step 4.
step 3: update your resume.
step 4: read the links in the above post VERY THOROUGHLY. The log file is important. It can't just be deleted / renamed at the drop of a hat. Tread very lightly if you don't take the time to read and understand why the log is important and why you can't just mess with the files when SQL Server doesn't have them locked. It keeps them locked for a reason!
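For reference, step 1 looks something like this (the database name and file path are hypothetical):

    EXEC sp_attach_single_file_db
        @dbname   = N'MyDB',
        @physname = N'C:\Data\MyDB.mdf';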
I created a huge problem, which I don't know how to solve. Any help will be appreciated.
Here is my situation:
I had a database with two log files (don't ask, because I don't know why), one of which was in use while the other was just sitting there. I decided to take someone's advice and detach the database and rename the LOG file that was in use. But what I did next was to delete the second LOG file. Now I cannot attach the database without that second log file. What should I do? Thanks!
I don't think I have to say how much I agree with the viewpoint of this blog post… 🙂
As for backup to nul (and I'm sorry if I state the obvious here):
1. It isn't the same as setting the database to simple, issuing a checkpoint, and setting it back to full (see the sketch after this list). Backup to nul means that SQL Server reads the data and outputs it to the OS (and the OS discards it). I'm not 100% certain, I hasten to add, but I would be surprised if MS has code in SQL Server to handle a file named nul in some special manner.
2. The reason it is nul instead of null is OS history; it is defined by DOS/Windows. Going back to the very earliest versions of DOS, we had "files" such as NUL, PRN, and CON. I don't know if they had to be three letters, or whether that was some convention from those who wrote DOS, but that is the history of the NUL backup destination. Remember the pre-defined backup device "diskdump"? I recall somebody who did backups to it for a number of years and then had to do a restore… :-(
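(For point 1, the "simple and back" pattern I am referring to looks like the following; the database name is hypothetical, and doing this on a production database breaks the log backup chain:)

    ALTER DATABASE MyDB SET RECOVERY SIMPLE;
    CHECKPOINT;                               -- in simple, this truncates the inactive log
    ALTER DATABASE MyDB SET RECOVERY FULL;    -- the backup chain stays broken until a new full backup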
Wow. Let me re-state, as I thought I had already done so: of course there are emergencies. Once again, as I have already said multiple times, I am not opposed to explaining to people how to cope with these emergencies; what I am opposed to is blind blog posts, with absolutely no context, that say "here is how you can shrink your log file" without also saying "but please be careful, as it will destroy your backup chain, compromise your disaster recovery, affect log shipping, etc." What this leads to is that the next time you set up a system, you pay absolutely no attention to log file size and growth settings, because you have a kludge you can use to deal with it later if it becomes a problem (emergency or not).
What if you inherit a situation with a huge log file, or you have to run a one-time operation, like a huge data import, that leaves you with a massive log file?
>>I am trying to stop people from posting blog articles that give bad advice about what to do with the log files when they get too big
That is the key! I think this short and to-the-point write-up is excellent. All of us at one time have found ourselves starting to type the word "TRUNCA…". It's our job to pass along the message of what we've learned in our experience about the horrors of that statement, and of log maintenance altogether.
The bottom line is, if people are properly trained, then proper log maintenance, recovery model strategy design, and landscape structure prevent this from ever happening. There is no argument for "it was an emergency"; if it was, then it was wrong from the start, if you ask me. Not everyone has a seasoned DBA on hand, and I realize that. In the face of that, however, it's everyone's job to fully understand the system(s) they are designing, implementing, and maintaining.
SteveO, I am not trying to crap on the people who have no idea what the logs are for. I am trying to stop people from posting blog articles that give bad advice about what to do with the log files when they get too big. They are not helping the situation at all by saying things like, "just switch to simple then back to full again, then shrink your log file to 1 MB." This kind of advice does more harm than good, and prevents the user from learning why the log is there, why it gets big, and how they can manage it more proactively.
I read this with a mixed bag of feelings. Some people out there are clueless to what the logs are for. Others understand part of it but not the whole story.
My current contract did the truncate_only nightly just before a full backup. I am cool with that.
I ran a dbcc shrinkfile on the logs because they were about half the size of the data file, and now that I had LS going I didn't need 20 gigs for a log that was writing 15 meg every 15 min or thereabouts.
Saves a lot of backup time as well. 🙂
With the shrinkfile, if you allocate enough disk space, you are fine for the long run for that log file. If you just nuke it down to nothing, then you are really no better off, thanks to autogrow issues.
Part of the problem is that most SQL Server users (devs especially) have no real understanding of how SQL Server uses disk – so it's sadly not too surprising that they're always trying to rein in how much disk space is being used without understanding why. It's kind of like all of the forum posts out there with people complaining about SQL Server using so much memory…
So yeah, great post as this message just needs to be reiterated over and over.
I recall one site running nightly jobs that would bloat the log; they didn't want to change the recovery model just for the night/job, and wanted the space back.
Anyway, the shrink did not disrupt anything.
I'll give you 5 stars on this one. This has to be one of my biggest pet peeves on the MSDN forums. I'll never understand the short-term-gain mentality behind shrinking files every night, when they just grow again immediately.
Hey there – I just found a good section in the BOL that explains this better: ms-help://MS.SQLCC.v10/MS.SQLSVR.v10.en/s10rp_4deptrbl/html/5a9e4ddf-3cb1-4baf-94d6-b80acca24f64.htm.
Check out the section on Database Maintenance.
Cheers,
kt
Hey there Michael/Aaron – logging in the simple recovery model is the same as in BULK_LOGGED/FULL for almost every operation (except, of course, the minimally logged operations). If replication is enabled, transactions will (probably) be active in the log longer so that replication's log reader has a chance to pick them up. So, while replication doesn't really change the type of logging for most operations, it might make records stay in the log longer. And for operations that would have been minimally logged: if they are against tables that are replicated (i.e. part of publications), they will need to be fully logged so that replication can pick them up.
Typing relatively quickly 🙂 but I hope that makes sense!
And, Aaron – EXCELLENT to keep pounding this in. I'm always surprised at how often I hear this as "advice" without any reason or recommendation on how to avoid it in the future and why it has happened!!
Cheers,
kt
This is a very helpful read. Currently I have a script running bi-weekly that does this shrink… my predecessor told me to run the command every once in a while… not really his fault; the vendor for the Accounting DB software we use recommended it.
Fortunately, we do have the DBs backed up regularly using Backup Exec… but the logs offer a more specific point-in-time restore. I am going to revise how this is set up now. Thanks for the post!
I haven't checked, but I don't believe the logging mechanism looks at whether replication is enabled to determine how to handle logging. So my guess is that if you are in simple mode, and you do things that blow your log up unexpectedly, that would have happened whether or not you were using replication. Could replication contribute? Possibly, but I can't see that being a primary cause, nor do I see it as an excuse for using simple in the first place, or not planning log space correctly…
Aaron, Do you know if log files are maintained in simple mode when replication is enabled? My guess is that this fits as a scenario in addition to the four you mentioned. Thoughts?
And as for NUL, I bet the developer who worked on the feature saw NULL light up in the query editor's syntax highlighting, and thought he/she had better not use a keyword. 🙂
Dan, I didn't mean to imply that the switch was *necessary* … more just illustrating the bad advice some of these "helpful" articles are spreading around.
You don't need to change the database from full to simple and back to shrink the log.
Even though TRUNCATE_ONLY has been deprecated, you can still back up the log to a null device by issuing "BACKUP LOG DBName TO DISK='NUL'" and then issuing a shrinkfile. You may have to issue the BACKUP LOG command a few times to wrap the active portion of the log back to the front of the file.
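For example (the database name and logical log file name are hypothetical):

    BACKUP LOG MyDB TO DISK = N'NUL';
    -- repeat as needed until the active portion wraps to the front
    DBCC SHRINKFILE (MyDB_log, 1024);   -- target size in MB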
Now, why is the null device spelled "NUL" with one "L"? I have no idea.