Four important truths about conference evaluations

Here are some thoughts after spending a couple of hours reviewing participant post-event evaluations of the annual edACCESS conference.

I don’t know any meeting planners who especially enjoy creating, soliciting, and analyzing conference evaluations. It’s tempting to see them as a necessary evil, a checklist item to be completed so you can show you’ve attempted to obtain attendee feedback. Clients are often unclear about their desired evaluation learning outcomes, which doesn’t make the task any easier. After the event you’re usually exhausted, so it’s hard to summon the energy to delve into detailed analysis. And response rates are typically low, so who knows whether the answers you receive are representative anyway?

Given all this, how important are conference evaluations? I think they’re very important—if you design them well, work hard to get a good response rate, learn from them, and integrate what you’ve learned into improving your next event. Here are four important truths about conference evaluations, illustrated via my edACCESS post-conference evaluation review:

1) Conference evaluations provide vital information for improvement

We’ve held 30 edACCESS annual conferences. You’d think that by now we’d have a tried-and-true conference design and implementation down cold. Not so! Even after 29 years, we continue to improve an already stellar event (“stellar” based on—what else?—many years of great participant evaluations). And the post-event evaluations are a key part of our continual process improvement. How do we do this?

We use a proven method to obtain great participant evaluation response rates.

We think about what we want to learn about participants’ experience and design an evaluation that provides the information we need. Besides the usual questions that rate every session and ask about event logistics, we include a number of additional questions that zero in on the good and bad aspects of the conference experience. We also ask first-time attendees a few extra questions to learn about their experience joining our conference community.
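
To make this concrete, here’s a minimal sketch (in Python, with illustrative question text rather than our actual wording) of how such an evaluation might be structured: a rating for every session, a few open questions that zero in on the experience, and an extra branch for first-time attendees.

```python
# A minimal sketch, not edACCESS's actual tooling; all question text is illustrative.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    kind: str = "free_text"   # "rating" (1-5 scale) or "free_text"
    audience: str = "all"     # "all" or "first_timers"

def build_evaluation(sessions):
    questions = [Question(f"Rate this session: {s}", kind="rating") for s in sessions]
    questions += [
        Question("How well did the logistics (venue, food, schedule) work for you?"),
        Question("What was the most useful part of the conference for you, and why?"),
        Question("What was the least useful, and why?"),
        Question("As a first-time attendee, how easy was it to join the community?",
                 audience="first_timers"),
    ]
    return questions

for q in build_evaluation(["Opening roundtable", "1:1 device programs"]):
    print(f"[{q.audience}] ({q.kind}) {q.text}")
```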

We provide a detailed report of post-conference evaluations to the entire edACCESS community—not only the current year’s participants but also anyone who has ever attended one of our conferences and/or joined our listserv. The responses are anonymized and include rating summaries, every individual comment on every session, and all comments on a host of conference questions.
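
If responses are collected electronically, assembling that kind of report can be largely automated. Here’s a rough sketch, assuming responses arrive as simple records whose respondent identifiers are dropped before anything is shared; the field names and data are made up for illustration.

```python
# A rough sketch of an anonymized report: rating summaries plus every comment,
# with respondent identifiers stripped out. Data and field names are invented.
from collections import defaultdict
from statistics import mean

responses = [
    {"respondent": "r-101", "session": "Opening roundtable", "rating": 5, "comment": "Great start."},
    {"respondent": "r-102", "session": "Opening roundtable", "rating": 3, "comment": "Ran long."},
    {"respondent": "r-101", "session": "1:1 device programs", "rating": 4, "comment": ""},
]

def anonymized_report(responses):
    ratings, comments = defaultdict(list), defaultdict(list)
    for r in responses:                      # respondent IDs never leave this loop
        ratings[r["session"]].append(r["rating"])
        if r["comment"]:
            comments[r["session"]].append(r["comment"])
    return {s: {"average": round(mean(v), 2), "responses": len(v), "comments": comments[s]}
            for s, v in ratings.items()}

print(anonymized_report(responses))
```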

Providing this kind of transparency is a powerful way to build a better conference and an engaged conference community. Sharing participants’ perceptions of the good, the bad, and the ugly of your event says a lot about your willingness to listen, and it makes it easy for everyone to see the variety of viewpoints the event generates (see below).

The conference organizers review the evaluations and agree on changes to implement at the next conference. Some of the changes are significant: changing the length of the event, replacing some plenaries with breakouts, and so on. Others are minor process or logistical tweaks that improve the conference experience in small but meaningful ways.

Evaluations often contain interesting and creative ideas that the steering committee likes and decides to implement. All major changes are considered experiments rather than permanent, to be reviewed after the next event, and are announced to the community well in advance.

We evaluate our evaluations. Are we asking a question that gets few responses or vague answers that don’t provide much useful information? If so, perhaps we should rephrase it, tighten it up, or remove the question altogether. Are there things we’d like to know that we’re not learning from the evaluation? Let’s craft a question or two that will give us some answers.
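
If your evaluations are in machine-readable form, even this meta-review can be partly automated. Here’s one way to flag weak questions, assuming each question’s free-text answers are collected in a list; the thresholds are arbitrary starting points, not settings we actually use.

```python
# Flag questions with low response rates or mostly very short (often vague) answers:
# candidates to rephrase, tighten, or drop. Thresholds are illustrative.
def flag_weak_questions(answers_by_question, respondents, min_rate=0.5, min_words=4):
    flagged = []
    for question, answers in answers_by_question.items():
        response_rate = len(answers) / respondents if respondents else 0.0
        short = sum(1 for a in answers if len(a.split()) < min_words)
        mostly_vague = bool(answers) and short / len(answers) > 0.5
        if response_rate < min_rate or mostly_vague:
            flagged.append((question, round(response_rate, 2)))
    return flagged

answers = {
    "What should we change next year?": ["Shorter plenaries, more breaks", "More time for demos"],
    "Any other comments?": ["ok", "fine"],
}
print(flag_weak_questions(answers, respondents=4))   # flags only "Any other comments?"
```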

Using the above process yields fresh feedback and ideas that help us make each succeeding conference better.

One final point. Even if you’re running a one-off event, participant evaluations will invariably help you improve your execution of future events. So, don’t skip evaluations for one-time events—they offer you a great opportunity to upgrade your implementation skills. And, who knows, perhaps that one-off event will turn into an annual engagement!

2) You can’t please everyone

“Were these people really at the same event?” Most meeting organizers have had the experience of reading wildly different evaluations of the same session or conference. While the edACCESS 2013 evaluations don’t supply great examples of this, I always scan the answers to questions about the most and least useful session topics.

What I want to see is a wide variety of answers that cover most or all of the sessions. Because edACCESS is a peer conference with (at most) one conventional plenary session, I see such answers as reflecting the broad range of participant interests at the event. It’s good to know that most if not all of the chosen sessions satisfied the needs of some attendees. If many people found a specific session the least useful, that’s good information to have for future events, though it’s important to review this session’s evaluations to discover why it was unpopular.
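
A simple tally makes this spread (or lack of it) easy to see. Here’s a sketch using made-up answers, not real edACCESS data.

```python
# Tally "most useful" / "least useful" answers to see whether they spread across
# the program or pile up on one session. Session names are invented.
from collections import Counter

most_useful = ["Budgeting breakout", "1:1 device programs", "Security roundtable",
               "Budgeting breakout", "Vendor shootout"]
least_useful = ["Vendor shootout", "Vendor shootout", "Closing plenary"]

print("Most useful: ", Counter(most_useful).most_common())
print("Least useful:", Counter(least_useful).most_common())
# A wide spread in "most useful" suggests the sessions served a broad range of
# interests; a cluster in "least useful" flags a session whose individual
# evaluations deserve a closer read.
```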

What’s equally important is to share the diversity of the answers to these questions with participants. When people understand that a session they disliked was found useful by other participants, you’re less likely to have to field strident minority calls for change, and you’ll have the information you need to judge such requests if they do occur.

3) Dealing with unpleasant truths will strengthen your event

edACCESS evaluations contain many appreciations and positive comments on the conference format and how we run it. Sometimes, there are also a few anonymous comments that are less than flattering about an aspect of my facilitation style.

Even if the latter express a minority opinion, I work to improve from the feedback I receive. I plan to check in with the event organizing committee to get their take on the feedback. I’ll ask for suggestions on whether/how to make it less of an issue in the future without compromising the event.

Facing and learning from criticism is hard. Anyone trying to do a good job usually responds to criticism defensively at first. But when we confront unpleasant truths, plan how to understand them better, and follow through, we lay the groundwork for making our event, and our contributions to it, better.

4) Public evaluation during the conference augments post-event evaluations

While it’s still rare at traditional conferences to spend time evaluating the event face-to-face, I include such a session—the Group Spective—at the close of all Conferences That Work. I strongly recommend a closing conference session that includes facilitated public discussion of the conference, covering questions like “What worked?”, “What can we improve?”, “What initiatives might we want to explore?”, and “What are our next steps?”

It’s always interesting to compare the initiatives brought up in this session with the suggestions contained in the post-event evaluations. You’ll find ideas triggered by the discussion during the spective that may or may not appear in the written evaluations. The spective informs and augments post-event evaluations, and some of the ideas expressed will lead to future initiatives for the community or new directions for the conference.

What experience do you have with conference evaluations, either as a respondent or a designer? What other truths have you learned about conference evaluations?

Photo attribution: Flickr user stoweboyd

Are your meeting evaluations reliable?

Drawing of a five-smiley-face evaluation scale, running from “love love love” (4) through “meh” (2) to “hate hate hate” (0).

Are your meeting evaluations reliable? Can the way we evaluate meetings change how participants view their experience? Possibly, given the findings of research reported in the June 2013 Personality and Social Psychology Bulletin. The study indicates that when we ask people to give reasons justifying their choices, they focus on aspects that are easy to verbalize, which can distort their overall judgment. Here’s Tom Stafford‘s description of the experiment.

An experiment

Participants were asked to evaluate five posters of the kind that students might put up in their bedrooms. Two of the posters were of art – one was Monet’s water lilies, the other Van Gogh’s irises. The other three posters were a cartoon of animals in a balloon and two posters of photographs of cats with funny captions.

All the students had to evaluate the posters, but half the participants were asked to provide reasons for liking or disliking them. (The other half were asked why they chose their degree subject as a control condition.) After they had provided their evaluations the participants were allowed to choose a poster to take home.

What happened?

So what happened? The control group rated the art posters positively (an average score of around 7 out of 9) and they felt pretty neutral about the humorous posters (an average score of around 4 out of 9). When given a choice of one poster to take home, 95% of them chose one of the art posters. No surprises there, the experimenters had already established that in general most students preferred the art posters.

But the group of students who had to give reasons for their feelings acted differently. This “reasons” group liked the art posters less (averaging about 6 out of 9) and the humorous posters more (about 5 to 6 out of 9). Most of them still chose an art poster to take home, but it was a far lower proportion – 64% – than the control group. That means people in this group were about seven times more likely to take a humorous poster home compared with the control group.

The twist

Here’s the twist. Some time after the tests, at the end of the semester, the researchers rang each of the participants and asked them questions about the poster they’d chosen: Had they put it up in their room? Did they still have it? How did they feel about it? How much would they be willing to sell it for? The “reasons” group were less likely to have put their poster up, less likely to have kept it up, less satisfied with it on average and were willing to part with it for a smaller average amount than the control group. Over time their reasons and feelings had shifted back in line with those of the control group – they didn’t like the humorous posters they had taken home, and so were less happy about their choice.
—Tom Stafford, When giving reasons leads to worse decisions
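
If the “seven times” figure seems surprising given the headline numbers (95% versus 64%), note that the multiplier applies to the share who chose a humorous poster, not the share who chose art. A quick check:

```python
# Share choosing a humorous poster = 100% minus the share choosing an art poster.
control_humorous = 100 - 95   # 5% of the control group
reasons_humorous = 100 - 64   # 36% of the "reasons" group
print(reasons_humorous / control_humorous)   # ~7.2, i.e. "about seven times"
```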

Implications for event evaluations

What might this imply for event evaluations? When asked to give our reasons why we evaluated an event a certain way, this research indicates that we’re likely to focus on reasons that are easy to express. Ever noticed in your event evaluations that attendees’ opinions about food and accommodations are often far more detailed than what they write about specific sessions or the event as a whole? It’s much easier to express an opinion about the former than the latter, and that’s OK in itself. What should concern us, though, is that evaluations themselves, by focusing on the easily quantifiable, may bias how participants perceive our event’s value.

In other words, your meeting evaluations may not be reliable, because attendees tend to give the feedback that is easiest to put into words. One way to minimize this is to focus questions on the more intangible aspects of the event experience.

Perceived value is an important component of event Return On Investment (ROI). I’ve mused about ROI for social media (I’m skeptical about measuring it) and for participant-driven events (I believe they improve ROI). How might this research affect the calculation of meeting ROI?

How bad smells, hand sanitizer, and Israeli judges affect your evaluation of an event

Evaluation of an event—three photographs of a pile of trash bags outside a gate, a wall-mounted hand sanitizer dispenser, and a hand rapping a judge’s gavel.

Can your evaluation of an event be influenced by the environment in which it’s performed?

In his remarkable book The Righteous Mind: Why Good People Are Divided by Politics and Religion, moral psychologist Jonathan Haidt makes a strong case that “an obsession with righteousness is the normal human condition. It is a feature of our evolutionary design…” Although the book is primarily a fascinating exploration of the origins and workings of morality, along the way Haidt describes many interesting aspects of how humans actually behave that are often at odds with how we think we act. Here’s an example that has direct relevance to your attendees’ evaluations of your events.

Some bizarre and unsettling experimental findings

Haidt describes a number of experiments that asked people to make moral judgments about controversial issues. In one, half were exposed to what I’ll describe as foul air while they were giving their judgments. (Read the book for the smelly details.) The result? The people who breathed in foul air made harsher judgments than those who did not. Another experiment had people fill out surveys about their political attitudes while standing near or far from a hand sanitizer dispenser. Those who stood near the dispenser became temporarily more conservative in their expressed attitudes. A final example (not from the book) is the somewhat alarming discovery from research in Israeli courts that a prisoner’s chance of parole depends on when the judge hearing the case last took a break.

What do these findings mean for your events?

What these experiments reveal is that our bodily experiences affect our simultaneous judgment of apparently unrelated issues. Our bodies guide our judgments. As Haidt explains: “When we’re trying to decide what we think about something, we look inward, at how we’re feeling. If I’m feeling good, I must like it, and if I’m feeling anything unpleasant, that must mean I don’t like it.”

What does this all imply? If we want to get unbiased evaluations of our events, we need to obtain them in neutral surroundings. Ask an attendee who prides herself on fairness “for a quick video testimonial” in a featureless, smelly corridor when she badly needs a restroom? You’ll get a less favorable response than if you interview her when she’s comfortable. Ask attendees to fill out online evaluations on the Monday they return to work with a backlog of while-you-were-out requests pending? Their evaluations will be negatively biased. Offer a meaningful immediate incentive to those who take the time to fill out the survey? You’ll reduce the bias.

And if we want to bias an evaluation of an event in a positive direction? Well, I think I’ve given you the background to figure out how that might work. Not that you’d ever do such a thing. Would you?

Composite image credits: Flickr users michaelbycroftphotography, nedrai, and safari_vacation

A challenge to anyone who organizes an event

Here’s a simple challenge to anyone who organizes an event and asks for evaluations.

(You do ask for evaluations, don’t you? Here’s how to get great event evaluation response rates.)

Publish your complete, anonymized evaluations.

You may want to restrict access to the people who attended the event.

That would be good.

You may decide to publish your evaluations publicly, as we just did for EventCamp East Coast 2011, and as we did a year ago for EventCamp East Coast 2010.

That’s even better.

That’s my challenge to anyone who organizes an event.

If you believe in your event, and want to make it better, why not be transparent about the good, the bad, and the ugly?

How to get great attendee evaluation response rates

I routinely get ~70% evaluation response rates for conferences I facilitate. Here are three reasons why this rate is so much higher than the typical 10-20% response rates that other conference organizers report.

1. Explain why evaluations are important

At the start of the first session at the event, I request that attendees fill out the online evaluations and explain why we want them to do so. I:

  • Promise attendees that all their feedback will be carefully read;
  • Tell them that their evaluations are crucial for improving the conference the next time it is held; and
  • Tell attendees that we will share all the (anonymized) evaluations with them. (I don’t share evaluations on individual sessions with attendees, but I forward all comments and ratings to the session presenters. I do share overall ratings and all general comments about the conference.)

When you explain to attendees why you are asking them to spend time providing evaluations, and they trust you to deliver what you’ve promised, they are much more open to providing feedback. And I suspect that when attendees know that other attendees will see their anonymized feedback, they may be more motivated to express their opinions.

2. Provide online evaluations early

I make online surveys available at the start of the event, so participants can complete them at any point. If the conference has a printed learning journal, I’ll include a printed version of the evaluation, which attendees can fill out as an aide-mémoire during the event if they wish.

3. Follow-up reminders improve evaluation response rates

Post-conference, via email, I gently remind attendees who have not yet completed an evaluation. I include a due date (normally 10-14 days after the end of the event) and a few sentences reiterating the reasons why we’d appreciate their response. I send up to three of these reminders before the due date.
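
If you track responses electronically, this routine is easy to script. Here’s a minimal sketch, with made-up names and dates, that nudges non-respondents up to three times before the due date.

```python
# A minimal sketch of the reminder schedule described above; names and dates are invented.
from datetime import date, timedelta

def reminder_dates(event_end, due_in_days=12, reminders=3):
    due = event_end + timedelta(days=due_in_days)
    step = (due - event_end) / (reminders + 1)
    return due, [event_end + step * (i + 1) for i in range(reminders)]

registered = {"ana@example.edu", "ben@example.edu", "cho@example.edu"}
responded = {"ben@example.edu"}

due, send_on = reminder_dates(date(2024, 6, 20))
for day in send_on:
    for attendee in sorted(registered - responded):   # only nudge non-respondents
        print(f"{day}: remind {attendee} that evaluations are due {due}")
```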

None of this is particularly onerous, and the result is a rich treasure trove of ideas and feedback from a majority of attendees that I can use to improve future conferences.

What are your typical evaluation response rates for your conferences? What do you do to encourage attendees to provide event feedback?

Photo attribution: Flickr user herzogbr