Two ways to take a hard look at conference evaluations

Let’s take a hard look at conference evaluations. Seth Godin wrote a great blog post about survey questions, and applying two of his insights will improve any conference evaluation.

First, ask yourself the following about every question you ask:

Are you asking questions capable of making change happen?
After the survey is over, can you say to the bosses, “83% of our customer base agrees with answer A, which means we should change our policy on this issue”?

It feels like it’s cheap to add one more question, easy to make the question a bit banal, simple to cover one more issue. But, if the answers aren’t going to make a difference internally, what is the question for?
—Seth Godin

In other words, if any question you ask doesn’t have the potential to lead you to change anything, leave it out!

Second, think about Seth’s sobering experience when responding to “Any other comments?” style questions:

Here’s a simple test I do, something that has never once led to action: In the last question of a sloppy, census-style customer service survey, when they ask, “anything else?” I put my name and phone number and ask them to call me. They haven’t, never once, not in more than fifty brand experiences.

Gulp. Would your evaluation process fare any better? As Seth concludes:

If you’re not going to read the answers and take action, why are you asking?

Take a hard look at your conference evaluations. You may be surprised by what you find.

Do you review event evaluations like a Chinese censor?

A fascinating piece of research published in Science concludes that the Chinese government allows people to say whatever they like about the state, its leaders, or their policies—except for posts with collective action potential, which are far more likely to be censored. This reminds me of the meetings industry’s common response to event evaluations.

What do stakeholders do with event evaluations?

I often wonder whether organizers actually read event evaluations. And if they do, do they act on them? Judging by the same old event designs I see repeated year after year, most changes stemming from attendee feedback concentrate on logistical improvements or cosmetic restyling: changes that rarely address the core dissatisfactions experienced by many attendees.

This is why the bar for events is set so low. Most attendees assume that an event can’t be much better than what they routinely experience. Ask attendees what they consider a worthwhile meeting, and you’ll frequently hear that they’re satisfied if they “learn one useful thing or meet one useful person a day”. You may think that’s acceptable. I know we can do much better.

Use event evaluations to improve future meetings

Can we significantly improve our events through attendee feedback? Yes. Done right, event evaluations provide an incredible opportunity to build community around our events. Yet we invariably squander it. Evaluations—whether paper smile sheets or online surveys—get kidnapped, disappearing into the hands of the event organizers, never to see the light of day again.

When we routinely solicit comments about our meetings’ value and then ignore feedback that could fundamentally improve our events, we are acting like Chinese censors. The minority of attendees who have experienced participant-driven and participation-rich meeting designs—who know the value of small group work with peers and public evaluation during the event—expect something better from your meetings. If you don’t give it to them, they will stay away or go elsewhere.

A warning. They won’t be a minority forever.

Photo attribution: Flickr user charleshope

How to get great attendee evaluation response rates

I routinely get ~70% evaluation response rates for conferences I facilitate. Here are three reasons why this rate is so much higher than the typical 10-20% response rates that other conference organizers report.

1. Explain why evaluations are important

At the start of the event’s first session, I ask attendees to fill out the online evaluations and explain why we want them to do so. I:

  • Promise attendees that all their feedback will be carefully read;
  • Tell them that their evaluations are crucial for improving the conference the next time it is held; and
  • Tell attendees that we will share all the (anonymized) evaluations with them. (I don’t share evaluations on individual sessions with attendees, but I forward all comments and ratings to the session presenters. I do share overall ratings and all general comments about the conference.)

When you explain to attendees why you are asking them to spend time on evaluations, and they trust you to deliver what you’ve promised, they are much more open to providing feedback. And I suspect that when attendees know that other attendees will see their anonymized feedback, they may be more motivated to express their opinions.
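
If you handle feedback in a spreadsheet or survey-tool export, the routing policy above is easy to automate. Here is a minimal sketch, assuming a simple list-of-dicts export; the field names ("session", "rating", "comment") are hypothetical illustrations, not any particular survey tool’s format.

```python
# A sketch of the routing described above: per-session feedback goes to
# that session's presenter; general conference feedback is anonymized
# (identifying fields dropped) and shared with all attendees.

from collections import defaultdict

def route_feedback(responses):
    for_presenters = defaultdict(list)  # session title -> feedback entries
    for_everyone = []                   # anonymized general feedback

    for r in responses:
        # Copy only the rating and comment, dropping any identifying fields.
        entry = {"rating": r.get("rating"), "comment": r.get("comment")}
        if r.get("session"):
            for_presenters[r["session"]].append(entry)  # presenters only
        else:
            for_everyone.append(entry)  # shared with all attendees

    return for_presenters, for_everyone
```

Because each entry is rebuilt from only the rating and comment fields, anonymization happens by construction rather than by remembering to delete names later.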

2. Provide online evaluations early

I make the online evaluations available at the start of the event, so participants can complete them at any point. If the conference has a printed learning journal, I’ll include a printed version of the evaluation, which attendees can fill out during the event as an aide-mémoire.

3. Send follow-up reminders to improve response rates

After the conference, I send gentle email reminders to attendees who have not yet completed an evaluation. Each reminder includes a due date (normally 10-14 days after the end of the event) and a few sentences reiterating why we’d appreciate their response. I send up to three of these reminders before the due date.
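
If you track registrations and responses electronically, this schedule is simple to automate. The sketch below is illustrative only: the send_email callable and the data structures are assumptions for the example, not part of any particular survey or mailing tool.

```python
# A sketch of the reminder schedule described above, assuming a
# hypothetical send_email(to, subject, body) callable and simple
# collections of registered and responded email addresses.

from datetime import date, timedelta

MAX_REMINDERS = 3  # up to three gentle reminders before the due date

def send_reminders(registered, responded, reminders_sent,
                   event_end, send_email, days_until_due=14):
    """Run once per reminder round. `reminders_sent` maps an email
    address to how many reminders that attendee has received."""
    due_date = event_end + timedelta(days=days_until_due)
    if date.today() > due_date:
        return due_date  # due date has passed; stop reminding

    for email in sorted(set(registered) - set(responded)):
        if reminders_sent.get(email, 0) >= MAX_REMINDERS:
            continue  # this attendee has had the maximum reminders
        send_email(
            to=email,
            subject="A gentle reminder: your conference evaluation",
            body=(f"We read every evaluation carefully and use them to "
                  f"improve next year's conference. Please respond "
                  f"by {due_date:%B %d}."),
        )
        reminders_sent[email] = reminders_sent.get(email, 0) + 1
    return due_date
```

Passing send_email in as a parameter keeps the schedule logic independent of whichever mailing service you actually use.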

None of this is particularly onerous, and the result is a treasure trove of ideas and feedback from a majority of attendees that I can use to improve future conferences.

What are your typical evaluation response rates for your conferences? What do you do to encourage attendees to provide event feedback?

Photo attribution: Flickr user herzogbr