Successful event outcomes, strange web traffic, and the psychology of motivation

Understanding the psychology of motivation can help us create better event outcomes. I’ll illustrate with a story about strange traffic on this very web site…

The other day, I noticed a weird periodic surge of interest in one of my blog posts. Every January 1, page views for this post—but no other—spiked way up and stayed high for 7–10 days. Then they went back to normal year-round levels.

It took some head scratching before I finally realized what was going on. The article describes an obscure method for quickly deleting all emails on Apple devices—something Apple didn’t make easy until recently. Apparently, every January thousands of people all over the world stare at the 6,000 emails stuck on their iPhones. They resolve that this is the time they’re finally going to clean them up. So they Google “delete mail”, find my highly ranked post (currently, out of 228 million results I’m #2), click on it and, voila, lots of page views.

Well, lots of page views for a week or so. Then, what I call the New Year’s Resolutions Effect becomes…well, ineffective. People forget about their New Year’s resolutions and go on with their lives.

Why we are so poor at keeping resolutions

Why are we so poor at keeping resolutions? While scientific research into the psychology of motivation doesn’t currently offer a definitive explanation, there are some plausible theories. One of them, nicely explained by psychologist Tom Stafford, is proposed by George Ainslie in his book Breakdown of Will (read a forty-page “précis” here).

As Tom puts it:

“…our preferences are unstable and inconsistent, the product of a war between our competing impulses, good and bad, short and long-term. A New Year’s resolution could therefore be seen as an alliance between these competing motivations, and like any alliance, it can easily fall apart.”
—Tom Stafford, How to formulate a good resolution

And to make a long story short, he shares this consequence of Ainslie’s theory:

“…if you make a resolution, you should formulate it so that at every point in time it is absolutely clear whether you are sticking to it or not. The clear lines are arbitrary, but they help the truce between our competing interests hold.”

For years, I’ve used this observation to create better event outcomes. Here’s what I do.

If you’ve done a good job, by the close of your event participants will be fired up, ready to implement good ideas they’ve heard and seen. This is prime time for them to make resolutions to make changes in their professional lives. So how can we maximize the likelihood they will make good resolutions—and keep them?

A personal introspective

Close to the end of my events I use a personal introspective to give every attendee an opportunity to explore changes they may want to make in their life and work as a result of their experiences during the conference. (For full details of how to hold a personal introspective, see my book The Power of Participation: Creating Conferences That Deliver Learning, Connection, Engagement, and Action.)

At the start of the personal introspective, each attendee writes down (privately) the changes they want to make. Before they do so, I explain a crucial question they will need to answer later in the process: “How will you know when these changes happen?” I give them several relevant examples of vague versus measurable goals and actions, like those below.


It turns out that including the question “How will you know when these changes happen?” and giving relevant examples beforehand is very important. If you don’t, I’ve learned that hardly anyone will come up with measurable resolutions that make it crystal clear whether they are succeeding or not.

Even with these directions and support, some people find it very difficult to come up with measurable, time-bound answers. That’s one reason why every personal introspective includes a follow-up small group component, where participants can share their goals and get help with them. But that’s material for another blog post.

Over the years I’ve received enough feedback about the effectiveness of personal introspectives to know they can be a powerful tool for better event outcomes. As predicted by the psychology of motivation, helping participants make specific, measurable, and time-bound resolutions that are easier to keep is a vital component.

Photo attribution: Flickr user chrish_99

Are your meeting evaluations reliable?


Are your meeting evaluations reliable? Can the way we evaluate meetings change how participants view their experience? Possibly, given the findings of research reported in the June 2013 Personality and Social Psychology Bulletin. The study indicates that when we ask people for reasons to justify their choices, they focus on aspects that are easy to verbalize. This can distort their overall judgement. Here’s Tom Stafford’s description of the experiment.

An experiment

Participants were asked to evaluate five posters of the kind that students might put up in their bedrooms. Two of the posters were of art – one was Monet’s water lilies, the other Van Gogh’s irises. The other three posters were a cartoon of animals in a balloon and two posters of photographs of cats with funny captions.

All the students had to evaluate the posters, but half the participants were asked to provide reasons for liking or disliking them. (The other half were asked why they chose their degree subject as a control condition.) After they had provided their evaluations the participants were allowed to choose a poster to take home.

What happened?

So what happened? The control group rated the art posters positively (an average score of around 7 out of 9) and they felt pretty neutral about the humorous posters (an average score of around 4 out of 9). When given a choice of one poster to take home, 95% of them chose one of the art posters. No surprises there: the experimenters had already established that, in general, most students preferred the art posters.

But the group of students who had to give reasons for their feelings acted differently. This “reasons” group liked the art posters less (averaging about 6 out of 9) and the humorous posters more (about 5 to 6 out of 9). Most of them still chose an art poster to take home, but it was a far lower proportion – 64% – than the control group. That means people in this group were about seven times more likely to take a humorous poster home compared with the control group.

The twist

Here’s the twist. Some time after the tests, at the end of the semester, the researchers rang each of the participants and asked them questions about the poster they’d chosen: Had they put it up in their room? Did they still have it? How did they feel about it? How much would they be willing to sell it for? The “reasons” group were less likely to have put their poster up, less likely to have kept it up, less satisfied with it on average and were willing to part with it for a smaller average amount than the control group. Over time their reasons and feelings had shifted back in line with those of the control group – they didn’t like the humorous posters they had taken home, and so were less happy about their choice.
—Tom Stafford, When giving reasons leads to worse decisions

Implications for event evaluations

What might this imply for event evaluations? When asked to give our reasons why we evaluated an event a certain way, this research indicates that we’re likely to focus on reasons that are easy to express. Ever noticed in your event evaluations that attendees’ opinions about food and accommodations are often far more detailed than what they write about specific sessions or the event as a whole? It’s much easier to express an opinion about the former than the latter, and that’s OK in itself. What should concern us, though, is that evaluations themselves, by focusing on the easily quantifiable, may bias how participants perceive our event’s value.

In other words, your meeting evaluations may not be reliable because attendees tend to report what is easy to verbalize. One way to minimize this bias is to focus questions on the more intangible aspects of the event experience.

Perceived value is an important component of event Return On Investment (ROI). I’ve mused about ROI for social media (I’m skeptical about measuring it) and participant-driven events (I believe they improve ROI). How might this research affect the calculation of meeting ROI?