How bad smells, hand sanitizer, and Israeli judges affect your evaluation of an event

Can your evaluation of an event be influenced by the environment in which you provide that evaluation?

In his remarkable book The Righteous Mind: Why Good People Are Divided by Politics and Religion, moral psychologist Jonathan Haidt makes a strong case that “an obsession with righteousness is the normal human condition. It is a feature of our evolutionary design…” Although the book is primarily a fascinating exploration of the origins and workings of morality, along the way Haidt describes many interesting aspects of how humans actually behave that are often at odds with how we think we act. Here’s an example that has direct relevance to your attendees’ evaluations of your events.

Some bizarre and unsettling experimental findings

Haidt describes a number of experiments that asked people to make moral judgments about controversial issues. In one, half were exposed to what I’ll describe as foul air while they were giving their judgments. (Read the book for the smelly details.) The result? The people who breathed in foul air made harsher judgments than those who did not. Another experiment had people fill out surveys about their political attitudes while standing near or far from a hand sanitizer dispenser. Those who stood near the dispenser became temporarily more conservative in their expressed attitudes. A final example (not from the book) is the somewhat alarming discovery from research in Israeli courts that a prisoner’s chance of parole depends on when the judge hearing the case last took a break.

What do these findings mean for your events?

What these experiments reveal is that our bodily experiences affect our simultaneous judgment of apparently unrelated issues. Our bodies guide our judgments. As Haidt explains: “When we’re trying to decide what we think about something, we look inward, at how we’re feeling. If I’m feeling good, I must like it, and if I’m feeling anything unpleasant, that must mean I don’t like it.”

What does this all imply? If we want to get unbiased evaluations of our events, we need to obtain them in neutral surroundings. Ask an attendee who prides herself on fairness “for a quick video testimonial” in a featureless, smelly corridor when she badly needs a rest room? You’ll get a less favorable response than if you interview her when she’s comfortable. Ask attendees to fill out online evaluations on the Monday they return to work with a backlog of while-you-were-out requests pending? Their evaluations will be negatively biased. Offer a meaningful immediate incentive to those who take the time to fill out the survey? You’ll reduce the bias.

And if we want to bias an evaluation of an event in a positive direction? Well, I think I’ve given you the background to figure out how that might work. Not that you’d ever do such a thing. Would you?

Composite image credits: Flickr users michaelbycroftphotography, nedrai, and safari_vacation

A challenge to anyone who organizes an event

Here’s a simple challenge to anyone who organizes an event and asks for evaluations.

(You do ask for evaluations, don’t you? Here’s how to get great event evaluation response rates.)

Publish your complete, anonymized evaluations.

You may want to restrict access to the people who attended the event.

That would be good.

You may decide to publish your evaluations publicly, as we just did for EventCamp East Coast 2011, and as we did a year ago for EventCamp East Coast 2010.

That’s even better.

That’s my challenge to anyone who organizes an event.

If you believe in your event, and want to make it better, why not be transparent about the good, the bad, and the ugly?

How to get great attendee evaluation response rates

I routinely get ~70% evaluation response rates for conferences I facilitate. Here are three reasons why this rate is so much higher than the typical 30-50% response rates that other conference organizers report.

1. Explain why evaluations are important

At the start of the first session at the event, I request that attendees fill out the online evaluations and explain why we want them to do so. I:

  • Promise attendees that all their feedback will be carefully read;
  • Tell them that their evaluations are crucial for improving the conference the next time it is held; and
  • Tell attendees that we will share all the (anonymized) evaluations with them. (I don’t share evaluations on individual sessions with attendees, but I forward all comments and ratings to the session presenters. I do share overall ratings and all general comments about the conference.)

When you explain to attendees why you are asking them to spend time providing evaluations, and they trust you to deliver what you’ve promised, they are much more open to providing feedback. And I suspect that when attendees know that other attendees will see their anonymized feedback, they may be more motivated to express their opinions.

2. Provide online evaluations early

I make the online surveys available at the start of the event, so participants can complete them at any point. If the conference has a printed learning journal, I’ll include a printed version of the evaluation. Attendees can fill it out as an aide-mémoire during the event if they wish.

3. Follow-up reminders improve evaluation response rates

Post-conference, via email, I gently remind attendees who have not yet completed an evaluation. I include a due date (normally 10-14 days after the end of the event), and a few sentences reiterating the reasons why we’d appreciate their response. I send up to three of these reminders before the due date.
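If you automate this cadence, it’s straightforward to script. Here’s a minimal sketch in Python, under the assumptions described above (a due date 10-14 days after the event and up to three reminders before it); the attendee data and the printed “send” step are hypothetical placeholders for whatever email tool you actually use.

```python
from datetime import date, timedelta

def reminder_schedule(event_end: date, due_in_days: int = 12, max_reminders: int = 3):
    """Return the due date and up to three reminder dates, spaced evenly
    between the end of the event and the due date (10-14 days later)."""
    due_date = event_end + timedelta(days=due_in_days)
    gap = due_in_days // (max_reminders + 1)  # even spacing; last reminder falls before the due date
    reminders = [event_end + timedelta(days=gap * i) for i in range(1, max_reminders + 1)]
    return due_date, reminders

# Hypothetical usage: only attendees who haven't yet completed an evaluation get reminded.
if __name__ == "__main__":
    attendees = [
        {"email": "pat@example.com", "responded": False},
        {"email": "lee@example.com", "responded": True},
    ]
    due_date, reminder_dates = reminder_schedule(date(2011, 11, 6))  # illustrative event end date
    for send_on in reminder_dates:
        for a in attendees:
            if not a["responded"]:
                print(f"Send reminder to {a['email']} on {send_on} (evaluation due {due_date})")
```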

None of this is particularly onerous, and the result is a rich treasure-trove of ideas and feedback from a majority of attendees that I can use to improve future conferences.

What are your typical evaluation response rates for conferences? What do you do to encourage attendees to provide event feedback?

Photo attribution: Flickr user herzogbr