Combining facilitation tools

 

A June 2023 conference gave me a perfect opportunity to use one of my facilitation tools: Reminders, Sparks, Questions, Puzzles (RSQP). RSQP can be thought of as a highly interactive debrief after an information dump. It’s an efficient way to get participants to rapidly engage with and explore presented content in a personally meaningful way. And, as we’ll see, RSQP offers the potential to devise on-the-fly sessions that meet participants’ uncovered wants and needs.

My 2014 post on RSQP gives a clear example of how it works (and my book Event Crowdsourcing includes full details), so I won’t repeat myself here. The 2014 conference and the recent one each had around 200 participants, so the process and timing (around 25 minutes) were pretty similar.

But there were two significant differences.

Two significant differences

1. Conference length

The 2014 conference ran for three days.

But the 2023 conference ran a mere eight hours, from 8:30 AM to 4:30 PM on a single day.

2. How we used the gallery created by RSQP

The 2014 conference didn’t use the RSQP gallery to directly influence what would happen during the rest of the conference. A small group of subject-matter experts clustered key theme notes into a valuable public resource for review throughout the event. Participants simply used the clustered gallery to discover what their peers were thinking.

In contrast, I designed the 2023 conference to explore the future of a 50-year-old industry, and we needed to use the information mined by RSQP to create same-day sessions that reflected participants’ top-of-mind issues, questions, and concerns. We had just 2½ hours to:

  • review the information on nearly a thousand sticky notes;
  • determine an optimum set of sessions to run;
  • find facilitators for the sessions; and
  • schedule the sessions to time slots and rooms.

Combining facilitation tools

 

To make our 2½ hours of participant-driven session determination a little easier, I combined RSQP with another facilitation technique: dot voting.

So, at the end of the standard RSQP process, I added a dot voting step. While the participants individually shared their ideas with the others at their table, the staff gave each table a strip of three red sticky dots. When the flip chart sheets were complete, I asked each table to spend three minutes choosing and adding red dots to the three topics on their sheet they thought were the most important for further discussion. Here’s an example of one table’s work.

An initial review of the gallery’s red-dot items allowed us to quickly zero in on needed and wanted topics. We saw a nice combination of popular ideas and great individual table suggestions. Being able to focus first on the red-dot topics on the flip charts saved us crucial time.
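
If you like to see the mechanics spelled out, here’s a rough Python sketch of the kind of first-pass tally we did by eye at the gallery. The topic names, dot counts, and shortlist size are all invented for illustration; we did this work manually on the flip charts, not with code.

```python
from collections import Counter

# Hypothetical red-dot counts from three tables' flip charts.
# (We actually scanned the gallery by eye; this just illustrates the tally.)
table_red_dots = [
    {"succession planning": 2, "AI and automation": 1},
    {"AI and automation": 2, "recruiting younger talent": 1},
    {"recruiting younger talent": 1, "succession planning": 1, "new regulation": 1},
]

totals = Counter()
for table in table_red_dots:
    totals.update(table)

# Shortlist the most-dotted topics as candidate afternoon sessions.
SHORTLIST_SIZE = 9  # we ended up running nine sessions
for topic, dots in totals.most_common(SHORTLIST_SIZE):
    print(f"{dots} red dots: {topic}")
```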

As a result, we determined the topics, assigned facilitators, and scheduled a set of nine sessions in time to announce them during lunch. (Once again, refer to my book Event Crowdsourcing for the step-by-step procedures we used for session selection and scheduling.) We ran the sessions in two one-hour afternoon time slots, and, as is invariably the case with program crowdsourcing, every session was well-attended and received great reviews.
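
For a feel of that last scheduling step, here’s a minimal sketch of one way to map a popularity-ordered shortlist onto two time slots and a set of rooms. The session titles, slot times, room names, and the round-robin assignment are assumptions made up for illustration; this is not the step-by-step procedure documented in Event Crowdsourcing.

```python
# Hypothetical shortlist, ordered by popularity (most red dots first).
sessions = [
    "succession planning", "AI and automation", "recruiting younger talent",
    "new regulation", "pricing models", "remote work", "mentoring",
    "industry consolidation", "sustainability",
]

slots = ["1:30 PM", "2:45 PM"]                                      # two one-hour afternoon slots
rooms = ["Ballroom", "Salon A", "Salon B", "Salon C", "Boardroom"]  # largest room first

# Alternate sessions between slots so the most popular topics,
# which come first in the list, land in the largest rooms.
schedule = []
for i, session in enumerate(sessions):
    slot = slots[i % len(slots)]
    room = rooms[i // len(slots)]
    schedule.append((slot, room, session))

for slot, room, session in sorted(schedule):
    print(f"{slot:>8}  {room:<10}  {session}")
```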

Conclusion

I’m sure there are still great group facilitation techniques I have yet to discover. But my facilitation toolbox gains fewer new tools each year than it did when I began to practice professionally. However, when I consider how many possible combinations of my existing tools are available to solve new group work situations, I feel increasingly confident in my ability to handle novel facilitation challenges that may arise.

As my mentor Jerry Weinberg wrote:

“I may run out of ideas, but I’ll never run out of new combinations of ideas.”
Jerry Weinberg, Weinberg on Writing: The Fieldstone Method

A novel way to assess consensus


Chapter 44 of my book The Power of Participation explains how facilitators use participatory voting to provide public information about viewpoints in the room, paving the way for further discussion. In particular, we often use participatory voting to assess consensus.

It’s often unclear whether a group has formed a consensus around a specific viewpoint or proposed action. Consensual participatory voting techniques can quickly show whether a group has reached or is close to consensus, or wants to continue discussion.

Methods to assess consensus

For small groups, Roman voting (The Power of Participation, Chapter 46) provides a simple and effective method of assessing agreement.

However, Roman voting isn’t great for large groups, because participants can’t easily see how others have voted. Card voting (ibid, Chapter 47) works quite well for large groups, but it requires:

  • procurement and distribution of card sets beforehand; and
  • training participants on how to use the cards.

A novel way to assess consensus with large groups

I recently came across a novel (to me) way to explore large group consensus. This simple technique requires no training or extra resources. In addition, it’s a fine example of semi-anonymous voting: group voting where it’s difficult to determine how individuals vote without observing them during the process. [Dot voting (ibid, Chapter 49) is another semi-anonymous voting method.]

Want to know how it works?

By using humming!

Humming: a tool to assess consensus

Watch this 85-second video to see how it works.

I learned of this technique from the New York Times:

“The Internet Engineering Task Force eschews voting, and it often measures consensus by asking opposing factions of engineers to hum during meetings. The hums are then assessed by volume and ferocity. Vigorous humming, even from only a few people, could indicate strong disagreement, a sign that consensus has not yet been reached.”
‘Master,’ ‘Slave’ and the Fight Over Offensive Terms in Computing, Kate Conger, NY Times, April 13, 2021

The Internet Engineering Task Force (IETF) is “the premier Internet standards body, developing open standards through open processes.” We all owe the IETF a debt, as they are largely responsible for creating the technical guts of the internet.

On Consensus and Humming in the IETF

The IETF builds consensus around key internet protocols, procedures, programs, and concepts, and eventually publishes them as RFCs (Requests for Comments). And, yes, there is a wonderful RFC 7282, On Consensus and Humming in the IETF (P. Resnick, 2014).

Software engineer Ana Ulin explains further:

‘In IETF discussions, humming is used as a way to “get a sense of the room”. Meeting attendees are asked to “hum” to indicate if they are for or against a proposal, so that the person chairing the meeting can determine if the group is close to a consensus. The goal of the group is to reach a “rough consensus”, defined as a consensus that is achieved “when all issues are addressed, but not necessarily accommodated”.’
Ana Ulin, Rough Consensus And Group Decision Making In The IETF

When assessing rough consensus, humming allows a group to experience not only the level of agreement or disagreement but also the extent of strong opinions (like Roman voting’s up or down thumbs). And if the session leader decides to hold further discussion, the group will have an idea of who holds such opinions.
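
To make that judgment concrete, here’s a toy sketch of the reasoning a chair applies by ear: it weighs not just how many people hum for or against, but how vigorously they hum. The loudness scale and thresholds are invented for illustration; IETF chairs do this entirely by listening, not by computation.

```python
# Hypothetical hum loudness scores (1 = soft, 2 = moderate, 3 = vigorous).
hums_for = [1, 2, 1, 1, 2, 1, 1]
hums_against = [3, 3]  # only two people, but humming vigorously

def summarize(hums):
    """Return how many people hummed and how vigorously, on average."""
    return len(hums), (sum(hums) / len(hums) if hums else 0.0)

n_for, intensity_for = summarize(hums_for)
n_against, intensity_against = summarize(hums_against)

# A few vigorous objections can mean rough consensus has not been reached,
# even if the "for" hum is louder in aggregate.
if n_against == 0:
    print("No audible objection: rough consensus.")
elif intensity_against >= 2.5:
    print("Strong objections remain: more discussion needed.")
else:
    print(f"{n_for} for, {n_against} against: the chair weighs whether objections were addressed.")
```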

Voice voting

This technique to assess consensus reminds me of voice voting at New England Town Meetings, which have been annual events in my home state, Vermont, since 1762. But, though it’s possible to hear loud “Aye” or “Nay” votes, humming makes strong opinions easier to detect.

Conclusion

RFC 7282 starts with an aphorism expressed by Dave Clark in 1992 on how the IETF makes decisions:

“We reject: kings, presidents and voting.”

“We believe in: rough consensus and running code.”

Replace “running code” with your organization’s mission, and you may just have the core of an appealing approach to decision-making in your professional environment.

And let us know in the comments below if you try using humming as a tool to assess consensus!

Video “Please Hum Now: Decision Making at the IETF” courtesy of Niels ten Oever
Image background by Robert Scoble

How to use dot voting to choose the sessions your attendees need and want

How do we build conference programs that attendees actually want and need? Since 1992 I’ve experimented with multiple methods to ensure that every session is relevant and valuable. Here’s what happened when I incorporated dot voting into a recent two-day association peer conference.

For small (40 – 70 participants) one-day conferences I often use the large Post-it™ notes technique described in detail in my post How to crowdsource conference sessions in real-time. Participants simply post desired topics, which are then clustered and used to determine sessions and facilitators/leaders.

What we did

The September 2017 two-day conference had 160 participants, so I decided to add interest dot voting to obtain additional information about the relative popularity of topics. This added a couple of extra steps to the process described in the post linked above.

We had three one-hour time slots available the following day, and six separate rooms for participants to meet. This allowed us to schedule a maximum of eighteen peer sessions.

After twenty minutes of obtaining topic offers and wants, a small group of volunteers clustered the ~150 topics posted, combined them appropriately, and, when needed, rewrote session titles on a fresh Post-it. Participants then returned to dot vote on the cleaned-up topics.

Each participant received three colored dots, which they could assign to the topics however they wanted — including all three to a single topic if desired.


In addition, we gave each participant a black fine-point Sharpie. They wrote a number between 1 and 3 on each of their dots to indicate their level of interest in the dotted session.
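
If you’re curious how those numbers change the arithmetic, here’s a minimal sketch of the kind of tally our volunteer group later did by hand. The topics and votes are made up; the point is that each dot contributes both a count and an interest weight from 1 to 3.

```python
from collections import defaultdict

# Hypothetical dot votes: (topic, interest level written on the dot).
votes = [
    ("membership growth", 3), ("membership growth", 3), ("membership growth", 1),
    ("board governance", 2), ("board governance", 2),
    ("volunteer burnout", 3), ("volunteer burnout", 3), ("volunteer burnout", 2),
]

dot_counts = defaultdict(int)
interest_totals = defaultdict(int)
for topic, interest in votes:
    dot_counts[topic] += 1
    interest_totals[topic] += interest

# Rank by total weighted interest; the dot count and average interest help
# separate broadly popular topics from those with only lukewarm support.
for topic in sorted(interest_totals, key=interest_totals.get, reverse=True):
    average = interest_totals[topic] / dot_counts[topic]
    print(f"{topic}: {dot_counts[topic]} dots, "
          f"total interest {interest_totals[topic]}, average {average:.1f}")
```
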
Here’s a 22-second video excerpt of the dot voting, which was open for 35 minutes during an evening reception.


Finally, the small volunteer group spent about ninety minutes using the peer session selection process described in my book Conferences That Work and associated supplement to create the conference program for the next day.

Observations

  • The entire process went very smoothly.
  • It became clear that there were fifteen topics with significant interest. So we ended up scheduling five simultaneous sessions in each time slot, leaving one room empty. We advertised the empty room as a place for impromptu meetings on other topics.
  • The ninety minutes needed to analyze the voting and create appropriate sessions compares favorably with the time needed for the more detailed process described in Conferences That Work.
  • I had expected that most people would choose “3 — High Interest” for their dots. Although a majority of the dots were indeed 3’s, there were a significant number of 1’s and 2’s. This was helpful for rejecting topics whose dots showed mostly “medium” or “low” interest. Without the interest level information, it would have been harder to pick the best topics to schedule.
  • Every one of the scheduled sessions had good attendance. In addition, we scheduled sessions that seemed to be more popular (many dots) in the larger rooms. This worked out well.
  • Although, at the time of writing, session evaluations are not yet available, the conference-closing Group Spective made it clear that participants were very happy with the program they had created.

Conclusions

I was pleased with how well adding dot voting to Post-it topic selection worked. It’s a simple tool that provides useful information on participants’ session preferences. This approach fits nicely between the most basic crowdsourcing methods, like Post-it topic choice, and the more information-rich approach used for classic Conferences That Work peer conferences.

I expect to use the technique again!

Have you used sticky notes and/or dot voting to crowdsource sessions at your events? Share your experience in the comments below! 

Participatory voting at events: Part 2—Low-tech versus high-tech voting

In Part 1 of this series, I defined participatory voting and we explored the different ways to use it to obtain public information about viewpoints and participants in the room, paving the way for further useful discussions and conversations. Now let’s explore low-tech and high-tech voting solutions.

High-tech voting

There is no shortage of high-tech systems that can poll an audience. Commonly known as Audience Response Systems (ARSs), Student Response Systems (SRSs), or “clickers,” these systems combine an audience voting method—a custom handheld device, personal cell phone/smartphone, personal computer, etc.—with a matched receiver and software that processes and displays responses.

Here are three reasons why high-tech ARSs may not be the best choice for participatory voting:

  • ARSs necessitate expense and/or time to set up for a group. No-tech and low-tech approaches are low or no cost and require little or no preparation.
  • Most ARS votes are anonymous; no one knows who has voted for what. When you are using voting to acquire information about participant preferences and opinions, as opposed to deciding between conflicting alternatives, anonymous voting is rarely necessary. (An exception is if people are being asked potentially embarrassing questions.) When a group of people can see who is voting for what (and, with some techniques, even the extent of individual agreement/disagreement), it’s easy to go deeper into an issue via discussion or debate.
  • Participatory voting techniques involve more movement than pushing a button on an ARS device. This is important, because physical movement improves learning. Some techniques include participant interaction, which also improves learning.

While there are times when high-tech voting is the right choice, I prefer no-tech and low-tech techniques for participatory voting whenever possible. No-tech techniques require only the attendees themselves. Low-tech approaches use readily available and inexpensive materials such as paper and pens.

No-tech and low-tech

Wondering which no-tech and low-tech techniques can be used for participatory voting? Here’s a list, taken from a glossary of participation techniques covered in detail in my book The Power of Participation: Creating Conferences That Deliver Learning, Connection, Engagement, and Action.

Body/Continuum Voting: See Human Spectrograms.

Card Voting: Provides each participant with an identical set of colored cards that can be used in flexible ways: typically for voting on multiple-choice questions, consensus voting, and guiding discussion.

Dot Voting: A technique for public semi-anonymous voting where participants are given identical sets of one or more colored paper dots which they stick onto paper voting sheets to indicate preferences.

Hand/Stand Voting: In hand voting, participants raise their hands to indicate their answer to a question with two or more possible answers. Stand voting replaces hand raising with standing.

Human Graphs: See Human Spectrograms.

Human Spectrograms: Also known as body voting, continuum voting, and human graphs. A form of public voting that has participants move in the room to a place that represents their answer to a question. Human spectrograms can be categorized as one-dimensional, two-dimensional, or state-change.

Idea swap: A technique for anonymous sharing of participants’ ideas.

One-dimensional Human Spectrograms: Human Spectrograms where participants position themselves along a line in a room to portray their level of agreement/disagreement with a statement or a numeric response (e.g. the number of years they’ve been in their current profession.)

Plus/Delta: A review tool that enables participants to quickly identify what went well at a session or event and what could be improved.

Post It!: A simple technique that employs participant-written sticky notes to uncover topics and issues that a group wants to discuss.

Roman Voting: Roman Voting is a public voting technique for gauging the strength of consensus.

State-change Human Spectrograms: Human Spectrograms where participants move en masse from one point to another to display a change of some quantity (e.g. opinion, geographical location, etc.) over time.

Table Voting: A technique used for polling attendees on their choice from pre-determined answers to a multiple-choice question, and/or for dividing participants into preference groups for further discussions or activities.

Thirty-Five: A technique for anonymously evaluating participant ideas.

Two-dimensional Human Spectrograms: Human Spectrograms where participants position themselves in a two-dimensional room space to display relative two-dimensional information (e.g. where they live with reference to a projected map.)

This ends my exploration of low-tech and high-tech voting solutions. What are public, semi-anonymous, and anonymous voting? In the third part of this series, we’ll explain these voting types and explore when each should be used.