Buck Woody (blog | twitter) just published a great post on session evaluations, and a lot of his points hit home for me. The premise is that evaluations aren't really meant for the attendee or the event organizers at all; they exist so the speaker can improve and make the next session better. In light of this, at least in my opinion, the existing evaluation forms (and the way attendees tend to fill them out) don't achieve this at all. It may be a little more work for events to produce a more useful evaluation form, and a little more work for attendees to fill it out accurately and honestly, but in the long run it can be so much more useful than it is today.
One of Buck's main points is that a numeric scale is useless, and I agree; qualitative feedback is infinitely more actionable than quantitative feedback. In fact, I'll take one sentence over 20 scores. A 7 to one person can be a 3 to someone else, so a 1-10 scale is about as useful as the distinction between a 400 and a 500 session. Personally, I think the level categories should simply be overview, deep dive, and advanced. Given the diversity at most events, you're never going to satisfy everyone in a room, and you have even less chance when you've labeled a session 400 and it turns out to be too advanced for someone who thinks they're at a 400-500 level, or not advanced enough for someone who thinks they're at 300-400. Or vice-versa. The level designation has many ways to simply go wrong.
Now, Buck does go off into territory that I think most events (and most attendees) will not venture into: that the evaluation is not anonymous, and that your evaluation will be made public. Buck postulates that people will be more honest when their name is attached, and I'm not so sure that's true. I think for a lot of people that holds when their comments are glowing, and less so when they have criticism. Personally, I'd rather receive anonymous criticism than feedback that is buttered up for fear of offending me.
Finally, while I love Buck's proposed model for future evaluation forms, I think some of what it achieves won't ultimately be helpful. For example, if you learn how to improve your session for the bunch of involuntary DBAs who attended this time, you can't guarantee that the next time you give the same session (with improvements aimed at those folks) you won't get zero involuntary DBAs. This will work really well for speakers who draw roughly the same demographics each time they give a session, but in my experience I get a pretty wide variance of backgrounds (or at least I infer as much from the types of questions attendees ask and their answers to my feeler questions).
In related news, both sessions I submitted for the 2011 PASS Summit have been officially selected (details here). I submitted one session on T-SQL: Bad Habits to Kick. This session is based on a series of blog posts I've written here, and is my chapter in the upcoming SQL Server MVP Deep Dives 2. The other session is on What's New in SQL Server Denali, which I have presented in various places, including SQL Rally in Orlando.
And speaking of SQL Rally, I got some fantastic feedback following my Denali session there. I understand and agree with the constructive comments I received: I need to be less monotone, use more realistic demos, be more entertaining, be better at repeating audience questions, and manage my time more adaptively. I'm not going to get into any more detail than that (I learned my lesson after reacting to my SQL Bits feedback), but I will say that in spite of my shortcomings, my session was still one of the top-rated in the dev track. 🙂
Congratulations to all the top speakers at SQL Rally, and congratulations also to all speakers selected for sessions and pre-cons at the PASS Summit. This should be a fantastic event!