Nine conference mythodologies

Long ago, consultant Tom Gilb coined the term “mythodology” to describe erroneous but commonly held beliefs about how something should be done. Here are nine mythodologies about conferences.

Mythodology: We know what our attendees want to learn about.
Reality: No, you don’t. At least half the sessions programmed at traditional conferences are not what attendees want.

Mythodology: Event socials are a good way to meet people.
Reality: People tend to stay with people they already know at event socials. Participant-driven and participation-rich events provide far more opportunities to meet people you actually want to meet.

Mythodology: A “conference curator” can improve the quality of your conference content.
Reality: Sadly, conference curators don’t exist. But discovering the content wants and needs of participants at the event and satisfying them with the collective resources in the room is routinely possible and significantly improves the quality of your conference content.

Mythodology: Learning occurs through events.
Reality: Learning is a continual process; formal events only contribute a small percentage to the whole.

Mythodology: Conference programs should be stuffed full of sessions so there’s something of interest for everyone.
Reality: Downtime is essential for effective learning and connection, so providing conference white space is essential. (Trick: Stuff your program if you must, but give attendees explicit permission to take their own downtime when they need it.)

Mythodology: Adding novelty to a meeting makes it better.
Reality: Novelty is a one-time trick. Next time it’s old. But making your meeting better lasts. Go for better, not just different.

Mythodology: Big conferences are better conferences.
Reality: Better for the owners perhaps (if the meeting is making a profit) but not better for participants. Today’s most successful conferences are micro conferences. (And, by the way, most conferences are small conferences.) 

Mythodology: We know what attendees like, don’t like, and value about our meeting.
Reality: If you’re using smile sheets or online surveys, you’re learning nothing about the long-term value of your meeting. This is the meeting industry’s biggest dirty secret. Use long-term evaluation techniques [1] [2] instead.

Mythodology: We can contract a venue for our meeting before we design it.
Reality: Sounds silly when put like that, but it happens all the time. Designing your meeting and then choosing a venue that can showcase your design will improve your meeting experience (and can save you big bucks!).

I bet you can think of more mythodologies. Share them in the comments!

Image attribution: Flickr user dunechaser

What your conference evaluations are missing

One of the easiest, yet often neglected, ways for meeting professionals to improve their craft is to obtain (and act on!) client feedback after designing/producing/facilitating an event. I like to schedule a thirty-minute call at a mutually convenient date one or two weeks after the event, giving the client time to decompress and process attendee evaluations.

During a recent call for an event that I designed and facilitated, a client shared his conference evaluation summaries that rated individual sessions and the overall conference experience.

This particular annual conference uses a peer conference format every few years. The client believes that the Conferences That Work design introduces attendees to a wider set of peer resources and conversations at the event. A new addition this year, The Solution Room, was a highly rated session that built connections and provided useful, confidential peer consulting on individual challenges.

As the client and I talked, we realized that the evaluations had missed an important component. We were trying to decide how frequently the organization should alternate a peer conference format with more traditional approaches. Yet we had no attendee feedback on how participants viewed the effectiveness of the annual event for:

  • making useful new connections;
  • building relationships;
  • getting current professional wants and needs met; and
  • building community.

Adding ratings of these KPIs to conference evaluations provides useful information about how well each event performs in these areas. Over time, it allows conveners to see if/how peer conference formats improve these metrics. I also suggested that we include Net Promoter Scores in future evaluations.
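
To make the arithmetic concrete, here is a minimal sketch in Python of the standard Net Promoter Score calculation, which takes answers to the 0–10 “how likely are you to recommend this event?” question and subtracts the percentage of detractors from the percentage of promoters. The function name and sample data are hypothetical, not taken from any client’s evaluations.

def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (range -100 to +100).
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses from a post-conference evaluation
sample = [10, 9, 9, 8, 7, 6, 10, 9, 4, 8]
print(f"NPS: {net_promoter_score(sample):+.0f}")  # prints NPS: +30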

The client quickly decided to include these ratings in future conference evaluations. Our retrospective call helped us to improve how participants evaluate his events, providing data that will inform future conference design decisions.

Do your evaluations allow attendees to rate the connection and just-in-time learning effectiveness of your meeting? Do they rate how well your meeting met current professional wants and needs? If not, consider adding these kinds of questions to all your evaluations, allowing you to obtain data over time on the meeting designs and formats that serve your participants best.

Ask Me Anything About Conference Panels—Annotated Video

I guarantee you will learn many great new ideas about conference panels from this Blab of my Thursday chat with the wonderful Kristin Arnold. I’ve annotated it so you can jump to the good bits. (But it’s pretty much all good bits, so you may find yourself watching the whole thing. Scroll down the whole list; there are many advice gems, excellent stories and parables, folks show up at our homes, Kristin sings, etc.!) With many thanks to Kristin and our viewers (especially Kiki L’Italien, who contributed mightily), I now offer you the AMA About Conference Panels annotated timeline.

[Before I turned on recording] We talked about: what panels are and aren’t; the jobs of a moderator; panel design issues; some panel formats; and our favorite panel size (Kristin and I agree on 3).

[0:00] Types of moderator questions.

[1:30] Using sli.do to crowdsource audience questions.

[2:40] Panel moderator toolboxes. One of Kristin’s favorite tools: The Newlywed Game. “What word pops into your mind when you think of [panel topic]?”

[4:30] Audience interaction, bringing audience members up to have a conversation; The Empty Chair.

[6:00] Preparing panelists for the panel.

[9:10] Other kinds of panel formats: Hot Seat, controversial topics.

[12:00] Continuum/human spectrograms/body voting and how to incorporate into panels.

[13:50] Panelist selection.

[14:40] Asking panelists for three messages.

[16:30] How the quality of a moderator affects the entire panel.

[17:30] More on choosing panelists.

[18:30] How to provoke memorable moments during panels; Kristin gives two examples involving “bacon” and “flaw-some”.

[20:30] Panelist homework. Memorable phrases: “The phrase that pays”; Sally Hogshead example.

[23:00] Panelists asking for help. Making them look good.

[24:10] Warming up the audience. The fishbowl sandwich: using pair-share as a fishbowl opener.

[25:30] Other ways to warm up an audience: pre-panel mingling, questions on the wall, striking room sets.

[26:30] Meetings in the round.

[28:00] Kristin’s book “Powerful Panels”, plus a new book she’s writing.

[29:00] Pre-panel preparation—things to do when you arrive at the venue.

[30:00] Considerations when the moderator is in the audience.

[31:00] Panelist chairs: favorite types and a clever thing to do to make panelists feel really special.

[32:50] Where should the moderator be during the panel? Lots of options and details.

[36:20] A story about seating dynamics from the late, great moderator Warren Evans.

[37:50] The moderator as consultant.

[38:40] Goldilocks chairs.

[39:40] Adrian explains the three things you need to know to set chairs optimally.

[41:00] “Stop letting the room set be decided for you,” says Adrian, while Kristin sees herself as more of a suggester.

[44:40] When being prescriptive about what you need is the way to go.

[46:30] Ideas about using screens at panel sessions.

[49:00] The UPS truck arrives at Adrian’s office door!

[50:00] Using talk show formats for panels: e.g. Sellin’ with Ellen (complete with blond wig).

[52:20] Kristin’s gardener arrives!

[53:40] American Idol panel format.

[55:20] Oprah panel format.

[55:50] Control of panels; using Catchbox.

[56:20] Ground rules for the audience.

[59:10] What to say and do to get concise audience comments.

[1:00:00] A sad but informative story about a panelist who wouldn’t stop talking.

[1:03:20] The Lone Ranger Fantasy.

[1:04:00] The moderator’s job, when done well, is pretty thankless.

[1:05:30] How you know if a panel is good. (Features mind meld between Kristin & Adrian!)

[1:06:10] The end of the fishbowl sandwich.

[1:07:40] Room set limitations caused by need to turn the room.

[1:10:00] Language: ground rules vs covenant; “Can we agree on a few things?”; standing to indicate agreement.

[1:13:00] You can’t please everyone.

[1:14:20] Kristin breaks into song!

[1:15:00] Non-obvious benefits obtained when you deal with an audience’s top issues.

[1:15:50] Why you should consider responding to unanswered attendee questions after the panel is over.

[1:16:40] The value (or lack of value) of evaluations.

[1:18:00] Following up on attendee commitments.

[1:20:00] Immediate evaluations don’t tell you anything about long term attendee change.

[1:21:10] “Panels are like a Wizard of Oz moment.”

[1:22:30] “Panels reframe the conversation in your head.”

[1:25:00] Kristin’s process that quickly captures her learning and future goals; her continuous improvement binder.

[1:26:40] Closing thoughts on the importance of panels, and goodbye.

Two ways to take a hard look at conference evaluations

Seth Godin wrote a great blog post about survey questions—and applying two of his insights will make any conference evaluation better.

First, ask yourself the following about every question you ask:

Are you asking questions capable of making change happen?
After the survey is over, can you say to the bosses, “83% of our customer base agrees with answer A, which means we should change our policy on this issue.”

It feels like it’s cheap to add one more question, easy to make the question a bit banal, simple to cover one more issue. But, if the answers aren’t going to make a difference internally, what is the question for?
—Seth Godin

In other words, if any question you ask doesn’t have the potential to lead you to change anything, leave it out!

Second, think about Seth’s sobering experience on responding to “Any other comments?” style questions:

Here’s a simple test I do, something that has never once led to action: In the last question of a sloppy, census-style customer service survey, when they ask, “anything else?” I put my name and phone number and ask them to call me. They haven’t, never once, not in more than fifty brand experiences.

Gulp. Would your evaluation process fare any better? As Seth concludes:

If you’re not going to read the answers and take action, why are you asking?

The Reminder—a new way to obtain long-term evaluations of events

Can conference organizers get evaluative feedback on the long-term outcomes of their events? Last week, I pointed out that short-term evaluations routinely solicited at events are unreliable. If we want to honestly learn whether our conferences create long-lasting change, we need evaluation methods that can be applied after an appropriate length of time (three months? six months? a year?—you choose!) rather than within a few hours or days of the meeting taking place.

Here’s one way I’ve devised to obtain long-term feedback. It’s based on an old technique, “A Letter To Myself” (ALTM, aka “A Letter To My Future Self”), which you may have experienced at meetings over the years.

I call it The Reminder.

In the standard ALTM version, described in Conferences That Work: Creating Events That People Love, the organizers set aside around 30 minutes just before the end of the event, supply each participant with notepaper and an envelope, and ask them to reflect on the changes they would like to make in their lives over the next [3 months/6 months/year/appropriate time period] as a result of the event. People then write letters to themselves about these changes and insert the letters into the supplied envelopes, which they seal and address to themselves. The conference organizers collect the envelopes and mail them out, unread, once the announced time period has passed.

ALTM works because the recipients find value in being reminded of their resolutions after time has passed. They can note what they have accomplished, what is yet to be done, and what they may have forgotten but still have energy to pursue.

When I run a Personal Introspective at the end of a peer conference, I often add the ALTM exercise to provide a personal “tickler” reminder of the changes participants decide to make.

The Reminder
To modify ALTM to incorporate long-term feedback, add the following to the envelope supplied to each participant:

[Sample feedback form to be included in the A Letter To Myself envelope]

Before the end of the ALTM session, briefly go through the feedback form with the group. Explain that completing the form on receipt and promptly mailing it back will provide the conference organizers valuable information about the long-term effectiveness of the conference, and this will help make the event even better next time.

It’s harder to implement long-term evaluations of our events because participants have less motivation to provide the information we need. The Reminder combines the impact of receiving the participant-created letter with a quick request for feedback. You can increase motivation by adding an incentive for returning the feedback form, such as a small prize or entry into a raffle drawn from those who return it. In this case, add a name/contact field to the form.

What do you think? Can The Reminder be a useful tool for evaluating your events? If you use it, share how it worked in the comments below.

Photo attribution: Flickr user gufoblu

Why meeting evaluations are unreliable and how we can improve them

A fatal flaw
Just about all meeting evaluations are elicited within a few days of the session experience. All such short-term evaluations of a meeting or conference session possess a fatal flaw. They tell you nothing about the long-term effects of the session.

What is the purpose of a meeting? Unless we’re talking about special events, which are about transitory celebrations and entertainment (nothing wrong with these, but not what I’m focusing on here), isn’t the core purpose of a meeting to create useful long-term change? Learning that can be applied productively in the future, connections that last and reward, communities that grow and develop new activities and purpose—these are the key valuable outcomes that meetings and conferences can and should produce.

Unfortunately, humans are poor objective evaluators of the enduring benefits of a session they have just experienced.

Probably the most significant reason for this is that we are far more likely to be influenced by our immediate emotional experience during a session than by the successful delivery of what eventually turn out to be long-term benefits. We like to think of ourselves as driven by rationality, but as Daniel Kahneman eloquently explains in Thinking, Fast and Slow we largely discount the effects that our emotions have on our beliefs. Although information provided by lectures and speeches is mostly forgotten within a week, the short-term emotional glow fanned by a skillful motivational speaker can last long enough for great marks on smile sheets. And paradoxically, the long-term learning that can result from well-designed experiential meeting sessions may not be consciously recognized for some time.

Other reasons why evaluations of conference sessions can be unreliable include quantifiable reason bias (the distortions that occur when attendees are asked to justify their evaluations) and evaluation environment bias (evaluations are influenced by the circumstances in which they’re made). These biases are minimized if evaluations are made in the environment in which hoped-for learning can actually be applied: i.e. back in the world of work. But instead—worried that no one will provide feedback if we wait too long—we supply evaluation sheets to fill out at the session, or push evaluation reminders right away via a conference app.

How can we improve meeting evaluations?
If we want meeting evaluations to reflect real-world long-term change, we need to use evaluation methods that allow participants to report on their meeting experiences’ long-term effects.

This is hard—much harder than asking for immediate impressions. Once away from the event, memories fade, our professional lives center around our day-to-day work, and we are less amenable to being refocused on the past.

While I haven’t formulated a comprehensive approach to evaluating long-term change related to meetings, I think an effective long-term meeting evaluation should include the following activities:

  • Individual participants document their perceived learning and change resolutions before the meeting ends.
  • Organizers follow up with participants after an appropriate time to determine whether the chosen changes have actually occurred.

In my next post I’ll share a concrete example of one way to implement a long-term evaluation that incorporates these components.

Photo attribution: Flickr user jurgenappelo

Four important truths about conference evaluations

I’ve just spent a couple of hours reviewing participant post-event evaluations of the annual edACCESS conference.

I don’t know any meeting planners who especially enjoy creating, soliciting, and analyzing conference evaluations. It’s tempting to see them as a necessary evil, a checklist item to be completed so you can show you’ve attempted to obtain attendee feedback. Clients are often unclear about their desired evaluation learning outcomes, which doesn’t make the task any easier. After the event you’re usually exhausted, so it’s hard to summon the energy to delve into detailed analysis. And response rates are typically low, so who knows whether the answers you receive are representative anyway?

Given all this, how important are conference evaluations? I think they’re very important—if you design them well, work hard to get a good response rate, learn from them, and integrate what you’ve learned into improving your next event. Here are four important truths about conference evaluations, illustrated via my edACCESS post-conference evaluation review:

1) Conference evaluations provide vital information for improvement
We’ve held 23 edACCESS annual conferences (one year we ran two). You’d think that by now we’d have a tried-and-true conference design and implementation down cold. Not so! Even after 22 years we continue to improve an already stellar event (“stellar” based on—what else?—many years of great participant evaluations). And the post-event evaluations are a key part of our continual process improvement. How do we do this?

We use a proven method to obtain great participant evaluation response rates.

We think about what we want to learn about participants’ experience, and design an evaluation that provides us the information we need. Besides the usual evaluation questions that rate every session and ask general information about event logistics, we ask a number of additional questions that zero in on good and bad aspects of the conference experience. We also ask first-time attendees a few additional questions to learn about their experience joining our conference community.

We provide a detailed report of post-conference evaluations to the entire edACCESS community—not only the current year’s participants but also anyone who has ever attended one of our conferences and/or joined our listserv. The responses are anonymized, and include rating summaries, every individual comment on every session, and all comments on a host of conference questions.
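
As an illustration only, here is a minimal sketch in Python of one way to anonymize raw responses and roll them up into per-session rating summaries with every comment attached. The data layout is made up for the example; it is not edACCESS’s actual reporting tooling.

from collections import defaultdict
from statistics import mean

# Hypothetical raw responses: (respondent, session, rating 1-5, comment)
responses = [
    ("alice@example.com", "Opening roundtable", 5, "Great way to meet peers"),
    ("bob@example.com",   "Opening roundtable", 4, ""),
    ("carol@example.com", "Security panel",     3, "Too rushed"),
]

by_session = defaultdict(lambda: {"ratings": [], "comments": []})
for _respondent, session, rating, comment in responses:
    # Drop the respondent field so the published report is anonymized
    by_session[session]["ratings"].append(rating)
    if comment:
        by_session[session]["comments"].append(comment)

for session, data in sorted(by_session.items()):
    print(f"{session}: mean rating {mean(data['ratings']):.1f} ({len(data['ratings'])} responses)")
    for comment in data["comments"]:
        print(f"  - {comment}")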

Providing this kind of transparency is a powerful way to build a better conference and an engaged conference community. Sharing participants’ perceptions of the good, the bad, and the ugly of your event says a lot about your willingness to listen, and makes it easy for everyone to see the variety of viewpoints about event perceptions (see below).

The conference organizers review the evaluations and agree on changes to be implemented at the next conference. Some of the changes are significant: changing the length of the event, replacing some plenaries with breakouts, etc., while some are minor process or logistical tweaks that improve the conference experience in small but significant ways.

Evaluations often contain interesting and creative ideas that the steering committee likes and decides to implement. All major changes are considered experiments rather than permanent, to be reviewed after the next event, and are announced to the community well in advance.

We evaluate our evaluations. Are we asking a question that gets few responses or vague answers that don’t provide much useful information? If so, perhaps we should rephrase it, tighten it up, or remove the question altogether. Are there things we’d like to know that we’re not learning from the evaluation? Let’s craft a question or two that will give us some answers.

Using the above process yields fresh feedback and ideas that help us make each succeeding conference better.

One final point. Even if you think you’re running a one-off event, participant evaluations will invariably help you improve your execution of future events. So don’t skip evaluations for one-time events—they offer you a great opportunity to upgrade your implementation skills. And, who knows, perhaps that one-off event will turn into an annual engagement!

2) You can’t please everyone
“Were these people really at the same event?” Most meeting organizers have had the experience of reading wildly different evaluations of the same session or conference. While edACCESS 2013 evaluations don’t supply great examples of this, I always scan the answers to questions about the most and least useful session topics.

What I want to see is a wide variety of answers that cover most or all of the sessions. Because edACCESS is a peer conference with (at most) one conventional plenary session, I see such answers as reflecting the broad range of participant interests at the event. It’s good to know that most, if not all, of the chosen sessions satisfied the needs of some attendees. If many people found a specific session the least useful, that’s good information to have for future events, though it’s important to review that session’s evaluations to discover why it was unpopular. It’s equally important to share the diversity of answers to these questions with participants. When people understand that a session they disliked was found useful by other participants, you’re less likely to need to field strident but minority calls for change—and you’ll have information to judge such requests if they do occur.

3) Dealing with unpleasant truths will strengthen your event
This year’s edACCESS evaluations contained many appreciations and positive comments on the conference format and how it was run. There were also a few anonymous comments that were less than flattering about an aspect of my facilitation style.

Even if the latter express a minority opinion, I will be working to improve from the feedback I received. I plan to check in with the event organizing committee to get their take on the feedback and ask for suggestions on whether/how to make it less of an issue in the future without compromising the event.

Facing and learning from criticism is hard; for anyone trying to do a good job, the first response is usually defensive. But when we confront unpleasant truths, plan to understand them better, and follow through, we lay the groundwork for making our event and our contributions to it better.

4) Public evaluation during the conference augments post-event evaluations
While it’s still rare at traditional conferences to spend time evaluating the event face-to-face, I include such a session—the Group Spective—at the close of every Conferences That Work event. I strongly recommend a closing session that includes facilitated public discussion of the conference, covering topics such as: What worked? What could be improved? What initiatives might we want to explore? What are our next steps?

It’s always interesting to compare the initiatives brought up by this session with suggestions contained in the post-event evaluations. You’ll find ideas triggered by the discussion during the spective that may or may not appear in the written evaluations. The spective informs and augments post-event evaluations. Some of the ideas expressed will lead to future initiatives for the community or new directions for the conference.

What experience do you have with conference evaluations, either as a respondent or a designer? What other truths have you learned about conference evaluations?

Photo attribution: Flickr user stoweboyd

Are your meeting evaluations reliable?

Can the way we evaluate meetings change how participants view their experience? Possibly, given the findings of research reported in the June 2013 Personality and Social Psychology Bulletin. The study indicates that when we ask people for reasons to justify their choices, they focus on aspects that are easy to verbalize, and this can distort their overall judgment. Here’s Tom Stafford’s description of the experiment:

Participants were asked to evaluate five posters of the kind that students might put up in their bedrooms. Two of the posters were of art – one was Monet’s water lilies, the other Van Gogh’s irises. The other three posters were a cartoon of animals in a balloon and two posters of photographs of cats with funny captions.

All the students had to evaluate the posters, but half the participants were asked to provide reasons for liking or disliking them. (The other half were asked why they chose their degree subject as a control condition.) After they had provided their evaluations the participants were allowed to choose a poster to take home.

So what happened? The control group rated the art posters positively (an average score of around 7 out of 9) and they felt pretty neutral about the humorous posters (an average score of around 4 out of 9). When given a choice of one poster to take home, 95% of them chose one of the art posters. No surprises there, the experimenters had already established that in general most students preferred the art posters.

But the group of students who had to give reasons for their feelings acted differently. This “reasons” group liked the art posters less (averaging about 6 out of 9) and the humorous posters more (about 5 to 6 out of 9). Most of them still chose an art poster to take home, but it was a far lower proportion – 64% – than the control group. That means people in this group were about seven times more likely to take a humorous poster home compared with the control group.

Here’s the twist. Some time after the tests, at the end of the semester, the researchers rang each of the participants and asked them questions about the poster they’d chosen: Had they put it up in their room? Did they still have it? How did they feel about it? How much would they be willing to sell it for? The “reasons” group were less likely to have put their poster up, less likely to have kept it up, less satisfied with it on average and were willing to part with it for a smaller average amount than the control group. Over time their reasons and feelings had shifted back in line with those of the control group – they didn’t like the humorous posters they had taken home, and so were less happy about their choice.
—Tom Stafford, When giving reasons leads to worse decisions

What might this imply for event evaluations? If we’re asked to give our reasons why we evaluated an event a certain way, this research indicates that we’re likely to focus on reasons that are easy to express. Ever noticed in your event evaluations that attendees’ opinions about food and accommodations are often far more detailed than what they write about specific sessions or the event as a whole? It’s much easier to express an opinion about the former than the latter, and that’s OK in itself. What should concern us, though, is that evaluations themselves, by focusing on the easily quantifiable, may bias how participants perceive our event’s value.

Perceived value is an important component of event Return On Investment (ROI). I’ve mused about ROI for social media (I’m skeptical about measuring it) and participant-driven events (I believe they improve ROI). How might this research affect the calculation of meeting ROI?

How bad smells, hand sanitizer, and Israeli judges affect your evaluation of an event

In his remarkable book The Righteous Mind: Why Good People Are Divided by Politics and Religion, moral psychologist Jonathan Haidt makes a strong case that “an obsession with righteousness is the normal human condition. It is a feature of our evolutionary design…” Although the book is primarily a fascinating exploration of the origins and workings of morality, along the way Haidt describes many interesting aspects of how humans actually behave that are often at odds with how we think we act. Here’s an example that has direct relevance to your attendees’ evaluations of your events.

Some bizarre and unsettling experimental findings
Haidt describes a number of experiments where people were asked to make moral judgments about controversial issues. In one, unbeknownst to the experimental subjects, half were exposed to what I’ll describe as foul air while they were giving their judgments. (Read the book for the smelly details.) The result? The people who breathed in foul air made harsher judgments than those who did not. Another experiment had people fill out surveys about their political attitudes while standing near or far from a hand sanitizer dispenser. Those who stood near the dispenser became temporarily more conservative in their expressed attitudes. A final example (not from the book) is the somewhat alarming discovery from research in Israeli courts that a prisoner’s chance of parole depends on when the judge hearing the case last took a break.

What do these findings mean for your events? 
What these experiments reveal is that our bodily experiences affect our simultaneous judgment of apparently unrelated issues. Our bodies guide our judgments. As Haidt explains: “When we’re trying to decide what we think about something, we look inward, at how we’re feeling. If I’m feeling good, I must like it, and if I’m feeling anything unpleasant, that must mean I don’t like it.”

What this all implies is that if we want to get unbiased evaluations of our events, we need to obtain them in neutral surroundings. Detaining an attendee who prides herself on fairness “for a quick video testimonial” in a featureless, smelly corridor when she badly needs a rest room will result in a less favorable response than if she is interviewed when she’s comfortable. Asking attendees to fill out online evaluations on the Monday they return to work with a backlog of while-you-were-out requests pending guarantees less charitable responses. (Offering a meaningful immediate incentive to those who take the time to fill out the survey might help to redress such negative feelings.)

And if we want to bias attendee evaluations in a positive direction? Well, I think I’ve given you the background to figure out how that might work. Not that you’d ever do such a thing. Would you?

Composite image credits: Flickr users michaelbycroftphotography, nedrai, and safari_vacation

A challenge to anyone who organizes an event

Here’s a simple challenge to anyone who organizes events and asks for evaluations.

(You do ask for evaluations, don’t you? Here’s how to get great event evaluation response rates.)

Publish your complete, anonymized evaluations.

You may want to restrict access to the people who attended the event.

That would be good.

You may decide to publish your evaluations publicly, as we just did for EventCamp East Coast 2011, and as we did a year ago for EventCamp East Coast 2010.

That’s even better.

If you believe in your event, and want to make it better, why not be transparent about the good, the bad, and the ugly?