What your conference evaluations are missing

One of the easiest, yet often neglected, ways for meeting professionals to improve their craft is to obtain (and act on!) client feedback after designing/producing/facilitating an event. So I like to schedule a thirty-minute call at a mutually convenient date one or two weeks after the event, giving the client time to decompress and process attendee evaluations.

During a recent call, a client shared their conference evaluation summaries that rated individual sessions and the overall conference experience.

This particular annual conference uses a peer conference format every few years. The client finds the Conferences That Work design introduces attendees to a wider set of peer resources and conversations at the event. This year, The Solution Room was a highly rated session for building connections and getting useful, confidential peer consulting on individual challenges.

As the client and I talked, we realized that the evaluations had missed an important component. We were trying to decide how frequently the organization should alternate a peer conference format with more traditional approaches. However, we had no attendee feedback on how participants viewed the effectiveness of the annual event for:

  • making useful new connections;
  • building relationships;
  • getting current professional wants and needs met; and
  • building community.

Adding ratings of these KPIs to conference evaluations provides useful information about how well each event performs in these areas. Over time, conveners will see if/how peer conference formats improve these metrics. I also suggested that we include Net Promoter Scores in future evaluations.
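For reference, a Net Promoter Score is derived from a single 0–10 "how likely are you to recommend this event?" question: respondents scoring 9–10 count as promoters, 0–6 as detractors, and the NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch of the calculation (the ratings shown are hypothetical, not from any client's evaluations):

```python
def net_promoter_score(ratings):
    """Return the NPS (-100 to 100) for a list of 0-10 ratings."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)   # scores 9-10
    detractors = sum(1 for r in ratings if r <= 6)  # scores 0-6
    return 100 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
ratings = [10, 9, 9, 10, 9, 8, 7, 8, 4, 6]
print(net_promoter_score(ratings))  # 30.0
```

Note that passives (7–8) don't affect the score directly, but they do count toward the total, so converting passives to promoters still raises the NPS.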

The client quickly decided to include these ratings in future conference evaluations. As a result, our retrospective call helped us improve how participants evaluate the organization's events. This will provide data that allows more informed decisions about future conference design.

Do your evaluations allow attendees to rate the connection and just-in-time learning effectiveness of your meeting? Do they rate how well your meeting met current professional wants and needs? If not, consider adding these kinds of questions to all your evaluations. Over time you’ll obtain data on the meeting designs and formats that serve your participants best.

Do you review event evaluations like a Chinese censor?

A fascinating piece of research published in Science concludes that the Chinese government allows people to say whatever they like about the state, its leaders, or their policies—except for posts with collective action potential, which are far more likely to be censored. Which reminds me of the meetings industry’s common response to event evaluations.

What do stakeholders do with event evaluations?

I often wonder whether organizers actually read event evaluations. And, if they read them, do they act upon them? Given the same old event designs I see repeated year after year, my experience is that most changes stemming from attendee feedback concentrate on logistical improvements or cosmetic restyling: changes that rarely get to core dissatisfactions experienced by many attendees.

This is why the bar is so low for event expectations. The majority of attendees assume that an event can’t be much better than what they routinely experience. Ask attendees what they consider to be a worthwhile meeting. You’ll frequently hear that they’re satisfied if they “learn one useful thing or meet one useful person a day”. You may think that’s acceptable. I know we can do much better.

Use event evaluations to improve future meetings

Can we significantly improve our events through attendee feedback? Yes. Done right, event evaluations provide an incredible opportunity to build community around our events. Yet, invariably, we squander this opportunity. Evaluations—either in the form of paper smile sheets or online surveys—get kidnapped, disappearing into the hands of the event organizers, never to see the light of day again.

When we routinely solicit comments about our meetings’ value and then ignore feedback that could fundamentally improve our events, we are acting like Chinese censors. The minority of attendees who have experienced the value of participant-driven and participation-rich meeting designs—who know the value of small group work with peers and public evaluation during the event—expect something better from your meetings. If you don’t give it to them, they will stay away or go elsewhere.

A warning. They won’t be a minority forever.

Photo attribution: Flickr user charleshope