25 October 2016
tl;dr The talk is given, and inevitably, some well-meaning soul asks you afterwards, "How did it go?" I won't tell you how to answer, but for me, the answer is always, "I have no idea; that's for them to judge, not me."
Quite honestly, part of the reason I say this lies in the simple realization that no matter how you answer, you're either wrong, arrogant, or falsely humble. If the audience thinks the talk went well but you think it was terrible, you seem either out of touch or affecting a false sense of humility. If you think it went well but the audience thinks it was terrible, you look like an idiot or a douchebag. If you and the audience both think it went well, you run the (smaller, perhaps) risk of seeming arrogant or "overly proud". And if you think it went horribly and the audience agrees with you, you seem out of touch, and you invite the obvious question: "If you knew it was bad, why didn't you fix it?"
But part of the problem is that sometimes the audience gets more out of the talk than you realize, so even if you didn't get what you wanted out of the talk (your message got lost in the noise of your demos, perhaps, or your demos didn't go as well as you would've liked, or any of dozens of other things), the audience may have an entirely different opinion. (Matter of fact, I've had that exact scenario happen to me: I gave a talk, every single demo I gave bombed, and still it came back as one of the highest-rated talks I've ever given, because---as one audience member told me later---they loved that I just kept rolling with it, didn't derail the talk trying to force the demos to work, and that "it was nice to see that even industry thought leaders can have a bad day at work!")
Evaluating the evaluations
Which means, then, that your best source of data on a talk's effectiveness lies in the evaluations that a conference will ask the attendees to fill out. Note that I said the best source, not an always-accurate source; lots of things can interfere with an honest appraisal of your talk:
- Attendees will blame you for conference issues. Too hot, too cold, seating issues, bad food, slides not available on a CD (or, for reals, one attendee once gave me the lowest rating possible because the slides weren't available on a floppy disk)---there's always something an attendee can find wrong with the conference, and they'll use your talk eval as a way to communicate their displeasure to the conference as a whole.
- Attendees often don't read your abstract, and grade you on the talk they wanted you to give, not the talk you gave. "Would have liked to see more on WebSockets," reads the evaluation, despite the fact that this talk was on AngularJS. Or, "Speaker didn't get into aspect-oriented programming" in your talk on .NET Generics. Or, my personal favorite, "Too much time spent on Swift" in a talk... on Swift!
- Attendees will contradict each other. "Speaker needed to spend more time on generic functions." "Speaker spent too much time on generic functions." Yup, thanks for that. Comments can cancel each other out, particularly if they're one-offs on each side.
- Attendees are not professional speakers. They can tell you what they liked, or what they didn't like, but they often can't tell you why, any more than you can tell a professional chef why the dish they made just didn't quite do it for you. (It's actually much harder than you might think.)
- Only about 10% of the attendees will turn in an evaluation. Hey, come on! There were only so many donuts at the break table, and if they'd taken the time to fill out the evaluation, they'd have missed a shot at one of the rainbow-sprinkled ones!
- Only the attendees with strong feelings will turn in an evaluation. If they hated you, they'll turn in an eval. If they loved you, they'll turn in an eval. If they were anywhere from "Blah" up through "Meh" to "Hmmm, interesting", they're more likely not to bother with an eval. Conference evaluations are not a scientific poll---far from it. They're a poll of "whoever bothered", which any political pollster will tell you is basically one step up from "purely randomly generated answers". (Think of it like this: how representative of the country's beliefs on gun control will a poll be if we only ask people walking out to their cars at the local gun range?)
- The longer they wait to turn in an evaluation, the lower your score goes. The more time they have to mull it over, the more things they can find that they didn't like about the talk. Or they're talking about it with somebody else, who asks a question that didn't get answered in the talk, and they think, "Oh, yeah, sure, that question should've been answered in there somehow", and grade you down a bit.
For all these reasons and more, you need to keep a couple of things in mind when you look at evaluations:
- Gymnastics scoring. In Olympic gymnastics, you always take the highest score and the lowest score and throw them out. (And you always ignore the scores from the East German judge---those people never like anything. But, then again, neither would you, if all you ate every day was boiled cabbage.) As a matter of fact, you should probably ignore the top 5% and the bottom 5% entirely and then just take an average of the rest (see the sketch after this list). But having said that....
- Avoid making a metric out of them. Remember, this is not a scientific poll, largely because no conference goes to the effort necessary to make it one. Thus, the quality of the feedback in an all-scored evaluation is already suspect from the beginning---trying to build some kind of accountability metric out of evaluations is a recipe for disaster. (Microsoft used to do this---in spades!---for their TechEd conferences, and it was anywhere from depressing to outright demeaning to see speakers clustering around the monitor in the speaker lounge with the dashboard statistics on each talk, trying to see who "won" the conference. Inevitably, tempers flared, sides were chosen, and it got all "West Side Story" in there really quickly.) Don't ignore the evals entirely, though, because....
- Comments trump scores. Rating me a 5 out of 5 doesn't tell me anything. A comment of "I can't imagine how to improve this session" tells me a lot more. A rating of 1 out of 5 with no explanation is worse than no information---I can either assume you thought a "1" was actually the top score instead of the bottom, or I can assume that you actually meant to give me the worst score possible because you didn't like the T-shirt I was wearing or... actually, I can imagine anything I want, and it will all be about as useful as imagining nothing at all. Scores are generally pretty worthless as anything other than a rough average; comments, on the other hand, can offer some detail about what should be different---and finding out what can or should change is exactly what you need when trying to improve the talk. But, having said that....
- Keep a mental heat map of the comments when you read them. Realistically, when pressed, anybody can come up with something they'd like to see different or improved in the talk. It's only when everybody says the same thing, however, that I start to pay attention. If one or two people in a room of a thousand thought the session wasn't technically deep enough, pffft. That's like 0.1% of the total, and that many people probably didn't like my choice of T-shirt that day, either. But if half the room (or, more accurately, half the people who bothered to hand in an eval) thought I wasn't deep enough, that makes me sit up and take notice. It doesn't mean that I was wrong---particularly if I kept to the abstract and delivered the talk that I said I was going to deliver---but it does mean that there was a gross miscommunication somewhere that needs correcting.
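To make the gymnastics-scoring idea concrete, here's a minimal sketch of that trimmed average (Python, with completely made-up scores, purely for illustration):

```python
def trimmed_mean(scores, trim=0.05):
    """Average scores after dropping the top and bottom 5%: the
    'gymnastics scoring' approach, where outliers at both ends get tossed."""
    ranked = sorted(scores)
    k = int(len(ranked) * trim)            # how many scores to drop per end
    kept = ranked[k:len(ranked) - k] if k else ranked
    return sum(kept) / len(kept)

# 20 hypothetical evals on a 1-to-5 scale: the one angry outlier (the 1)
# and one of the adoring 5s get thrown out before averaging
evals = [5, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 3, 4, 5, 4, 1, 5]
print(round(trimmed_mean(evals), 2))       # 4.11 trimmed, vs. 4.0 raw
```

The point isn't the arithmetic; it's that the lone vengeful 1 (or the lone adoring 5) shouldn't be allowed to drag your read of the room around.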
Evaluations are helpful, but they're definitely not the last word.
Self-evaluation
All that aside, though, it's nearly impossible for a speaker not to judge themselves. The desire to look in the mirror and ask, "How'd I do?" is just too strong, too deeply wired into the human psyche. I've been doing talks for twenty years, and I still do it, even though I don't always trust my own instincts. (I like to operate from Socrates' position of "All I know is that I know nothing", because it forces me to look for evidence to confirm or deny my intuitions, rather than letting my intuitions quietly select the evidence that supports them.)
So, in the spirit of trying to get a bit more objective about self-evaluation, here's a list of things to evaluate:
- Timing. Did you go long? Short? Were you at the halfway point of your talk when the halfway point of the slot came and went?
- Questions. Did you accurately predict what the audience questions would be?
- Demos. How smooth were the demos? Did you have to cover for yourself somehow while giving a demo (as in, were you talking to try and fill in dead air while you were typing something or waiting for the demo to compile or run)?
- Audience recognition. How many of the audience members do you recognize? (If you made good eye contact during the show, you'll recognize certain faces when they come up to you and/or when you run into them in other parts of the event.)
- Casualties. How many people bailed out on your talk? Not everybody leaving the room is a casualty, but if they're fleeing out the back in droves, chances are the people up front wanted to leave too, but couldn't, because it would've been too obvious. (People in the back somehow assume that they're harder to spot.)
- More questions. How many audience questions did you get? No questions is a bad sign: either you were too basic and nobody needed to ask any, or you were too complicated and nobody could follow what you were doing. (This is different for keynotes, by the way: the larger the room, the less likely you are to get questions during your talk. Regional quirks also need to be taken into account here---Europeans are much less chatty than Americans. Except for the people in Madison, WI; those folks are just dead inside.)
But even here, unless you're recording your talk, your interpretation may be somewhat suspect.
So what's the well-meaning speaker to do?
Brutally honest evaluation
Easy: ask people who know you well---even better if they, too, are speakers---to evaluate your talk. Ask them to be brutally honest: to tell you the things they liked and the things they didn't, in roughly equal amounts. (OK, let's be clear here: it's probably going to be more of a 3-to-1 ratio, heavily weighted towards things they didn't like, because it's a lot easier for us as humans to identify the things we don't like than the things we do. Personally, I'm OK with that, but new speakers may need more in the way of encouragement and support.)
Here's a working checklist for evaluating another speaker, by the way, if you don't have one of your own; you don't have to track all of these, but the savvy speaker will pick a dozen or so specifically on which the evaluator should focus:
- What was the central message of the talk? (Don't fill this in until the end.)
- Timing: When did it start? When did it end? Roughly what time felt like the halfway point?
- Outline: What did the outline of the talk look like to you? (As a crutch, how was the talk structured? Choose one of: Chronological | Spatial | Causal | Comparative | Problem-solution/Pain-promise)
- Organization: How well did the talk flow from one element to the next? Could you spot the transitions?
- Memorable: What's the one thing you remember from this talk?
- Humor wins/fails: Count the number of jokes that worked, and the jokes that bombed. (Make sure to compare this with the speaker's own idea of what was a joke and what wasn't---sometimes the best jokes were totally unintended, but by all means, note them and use them again!)
- Vocal Delivery: Write down all the volume levels the speaker used during this talk (Loud, Soft, Moderate, Shout, Whisper, any other adjectives that come to mind, as your mind defines them; the goal is to see the variance)
- Crutch counter: Count all the "ums", "uhs", "right" or other crutch words the speaker used during the talk.
- Physical Delivery: Write down all the gestures the speaker used during this talk (Pointing to slide, pointing to audience, holding up hands to represent something abstract during an explanation, whatever; again, the goal is to see the variance)
- Nervous Delivery: Write down all the gestures communicating "nervousness" or "anxiety" that the speaker used (jingling change or keys in pocket, for example)
- Introduction:
    - How long did it last?
    - Did it set the tone for the rest of the talk?
    - Was it clear?
- Body:
    - How long did it last?
    - What was the main vehicle for delivering the message? Statistics? Demos? Persuasion? Quotes from relevant figures? Pictures?
    - Can you (the evaluator) recite the main body points from memory?
- Content:
    - How many analogies did the speaker use to explain the content?
    - How many direct descriptions did the speaker use?
    - Was there any uncommon jargon used that wasn't explained/defined? ("Uncommon" here means it couldn't have appeared in an IT manager's magazine/e-zine like ZDNet or GeekWire without explanation attached.)
- Conclusion:
    - Did it tie the body back to the introduction?
    - Did the conclusion raise any new information?
    - Was the talk ended clearly? (Trailing off and mumbling is a horrible way to end a presentation; nobody knows if it's over or not.)
- Body Language:
    - Did the speaker's eyes move around the room?
    - Did the speaker move across the stage periodically, particularly towards the edges, to make a better connection with the "wings" of the audience?
    - How many times did the speaker speak to the slide? (As in, physically facing the slides while words were being said; if the speaker was quoting off the slide---and only part of the slide, mind you---for emphasis, that time doesn't count.)
    - How long did the speaker speak directly into the laptop? (As in, staring at the laptop screen while seated behind it. Code demos count half-time; you need to be able to see what you're typing, but you should also make an effort to check in with the crowd visually every few seconds or so, even while typing code.)
There's a ton more that could be added; anybody with a Toastmasters evaluation sheet could improve this list by a large margin, for example. And, if you as a speaker have a particular habit you're trying to break, have somebody keep an eye out for it and give you feedback on when you do it---or, from the back of the room, flash you a sign when you slip (I like having another speaker flip me off when I make a mistake I'm trying to correct; one female speaker took that as carte blanche, and lifted her shirt at me when I turned away from the audience too far).
In particular, though, groom some people close to you to be this brutally honest audience. Emphasize to them that the goal of their feedback isn't to make you feel good about your talk, but to help you improve---a good coach needs to be honest about what works and what doesn't. Ideally, this should be a person who will watch more than just one or two of your talks---you want them to get to know you and your style, and to learn to recognize your particular weaknesses or crutch issues.
My most recent talk (as of this writing) was the closing keynote at a conference in Warsaw, just last week. My wife came along with me for the show, and she agreed (as she usually does, bless that poor woman's heart) to sit in on the keynote. It went off pretty well---most of the scores (I'm told) were very high, and it was one of the top-three-rated talks of the show. That said, when we got into the taxi to head back to the hotel to drop off stuff and meet up with other speakers for dinner, she gave me a pretty thorough rundown, including "You seemed a little rusty on this one in parts", "I didn't exactly see how some of the ideas came together at the end" and "It felt like in places you were trying to force-fit a joke into place and it didn't come off quite right". This was valuable stuff, and I absolutely love the woman for that.
Groom your critic to give you brutally honest feedback, and you will advance in your speaking skills by leaps and bounds.
Tags:
speaking tips