Are your meeting evaluations reliable? Can the way we evaluate meetings change how participants view their experience? Possibly, given the findings of research reported in the June 2013 Personality and Social Psychology Bulletin. The study indicates that when we ask people to give reasons for their choices, they focus on aspects that are easy to verbalize, and this can distort their overall judgement. Here’s Tom Stafford’s description of the experiment.
Participants were asked to evaluate five posters of the kind that students might put up in their bedrooms. Two of the posters were of art – one was Monet’s water lilies, the other Van Gogh’s irises. The other three posters were a cartoon of animals in a balloon and two posters of photographs of cats with funny captions.
All the students had to evaluate the posters, but half the participants were asked to provide reasons for liking or disliking them. (As a control condition, the other half were asked why they chose their degree subject.) After they had provided their evaluations, the participants were allowed to choose a poster to take home.
So what happened? The control group rated the art posters positively (an average score of around 7 out of 9) and felt pretty neutral about the humorous posters (an average score of around 4 out of 9). When given a choice of one poster to take home, 95% of them chose one of the art posters. No surprises there: the experimenters had already established that, in general, most students preferred the art posters.
But the group of students who had to give reasons for their feelings acted differently. This “reasons” group liked the art posters less (averaging about 6 out of 9) and the humorous posters more (about 5 to 6 out of 9). Most of them still chose an art poster to take home, but it was a far lower proportion – 64% – than the control group. That means people in this group were about seven times more likely to take a humorous poster home than the control group (36% versus 5%).
Here’s the twist. Some time after the tests, at the end of the semester, the researchers rang each of the participants and asked them questions about the poster they’d chosen: Had they put it up in their room? Did they still have it? How did they feel about it? How much would they be willing to sell it for? The “reasons” group were less likely to have put their poster up, less likely to have kept it up, less satisfied with it on average, and willing to part with it for a smaller average amount than the control group. Over time their reasons and feelings had shifted back in line with those of the control group: they didn’t like the humorous posters they had taken home, and so were less happy about their choice.
—Tom Stafford, When giving reasons leads to worse decisions
Implications for event evaluations
What might this imply for event evaluations? When asked to give reasons why we evaluated an event a certain way, this research indicates that we’re likely to focus on reasons that are easy to express. Ever noticed in your event evaluations that attendees’ opinions about food and accommodations are often far more detailed than what they write about specific sessions or the event as a whole? It’s much easier to express an opinion about the former than the latter, and that’s OK in itself. What should concern us, though, is that evaluations themselves, by focusing attention on what’s easily verbalized, may bias how participants perceive our event’s value.
In other words, your meeting evaluations may not be reliable, because attendees tend to report what is easy to put into words rather than what mattered most. One way to minimize this is to focus questions on the more intangible aspects of the event experience.
Perceived value is an important component of event Return On Investment (ROI). I’ve mused about ROI for social media (I’m skeptical about measuring it) and participant-driven events (I believe they improve ROI). How might this research affect the calculation of meeting ROI?