One of my first major projects at SQL Sentry was framed shortly after I joined the company: to publish an analysis of the overhead that Performance Advisor and Event Manager place both on the server(s) being monitored and on the server(s) doing the monitoring. Every vendor wants to sell you on the "zero impact" and "no footprint" lines, but we all know that you cannot accurately measure performance on a server without causing at least some performance degradation in the process. So what does "zero impact" really mean in those cases? Most vendors won't tell you, and I can understand why: there are so many variables and "what ifs" involved that they couldn't possibly tell you – at least not with a straight face.
As I would advise you even from an impartial, outside perspective (regardless of the solution being researched), the only way to get the absolute truth about impact is to test it for yourself, in your own environment, using your own hardware, data, network, usage patterns, etc. On the other hand, we realize that you may not have such an environment yet, or may not have a suitable replica of production (we all know how most folks feel about testing in production). We know this because, even with trial versions available and frequently downloaded, two of the most common questions we get about our software are "how much overhead will this cause on the monitored instance?" and "what kind of server(s) will we need for the monitoring components?"
To help answer these questions without all of the effort that would otherwise be required on your part, the paper sets out to show, in our simulated environment, roughly how much additional load is placed on a single monitored server, and the total load placed on the monitoring servers when they are watching 1, 10, 25, 50, and 100 SQL Server instances. We tried to make the test environment as generic as possible, with separate servers for each monitoring component (to isolate and measure the resource impact of each), and a virtual environment of over 100 servers (we went virtual here to ease creation and re-creation of the environment, and also for the obvious budgetary benefits). Your environment may be quite different, so the numbers shown in the paper may not reflect your exact scenario. But they should be close enough to give you a general idea of what to expect from our software.
This is the first of many impact studies we will be conducting and making public. We are not afraid to show you our numbers, even in cases where they might not be the most flattering, because we can often learn as much from this process as you can. We are quite happy with the results of these tests, but we look forward to further tests where we measure the impact on a single monitored server with more variables, subject to heavier and more realistic workloads.
Download the paper (.doc, 2.1 MB)