A novel way to assess consensus

Assess consensus: a blurred photograph of an audience overlaid with an illustration of people humming at different volumes

Chapter 44 of my book The Power of Participation explains how facilitators use participatory voting to provide public information about viewpoints in the room, paving the way for further discussion. In particular, we often use participatory voting to assess consensus.

It’s often unclear whether a group has formed a consensus around a specific viewpoint or proposed action. Consensual participatory voting techniques can quickly show whether a group has reached or is close to consensus, or wants to continue discussion.

Methods to assess consensus

For small groups, Roman voting (The Power of Participation, Chapter 46) provides a simple and effective method of assessing agreement.

However, Roman voting isn’t great for large groups, because participants can’t easily see how others have voted. Card voting (ibid, Chapter 47) works quite well for large groups, but it requires:

  • procurement and distribution of card sets beforehand; and
  • training participants on how to use the cards.

A novel way to assess consensus with large groups

I recently came across a novel (to me) way to explore large group consensus. This simple technique requires no training or extra resources. In addition, it’s a fine example of semi-anonymous voting: group voting where it’s difficult to determine how individuals vote without observing them during the process. [Dot voting (ibid, Chapter 49) is another semi-anonymous voting method.]

Want to know how it works?

By using humming!

Humming: a tool to assess consensus

Watch this 85-second video to see how it works.

I learned of this technique from the New York Times:

“The Internet Engineering Task Force eschews voting, and it often measures consensus by asking opposing factions of engineers to hum during meetings. The hums are then assessed by volume and ferocity. Vigorous humming, even from only a few people, could indicate strong disagreement, a sign that consensus has not yet been reached.”
‘Master,’ ‘Slave’ and the Fight Over Offensive Terms in Computing, Kate Conger, NY Times, April 13, 2021

The Internet Engineering Task Force (IETF) is “the premier Internet standards body, developing open standards through open processes.” We all owe the IETF a debt, as it is largely responsible for creating the technical guts of the internet.

On Consensus and Humming in the IETF

The IETF builds consensus around key internet protocols, procedures, programs, and concepts, and eventually publishes them as RFCs (Requests for Comments). And, yes, there is a wonderful RFC 7282, On Consensus and Humming in the IETF (P. Resnick, 2014).

Software engineer Ana Ulin explains further:

‘In IETF discussions, humming is used as a way to “get a sense of the room”. Meeting attendees are asked to “hum” to indicate if they are for or against a proposal, so that the person chairing the meeting can determine if the group is close to a consensus. The goal of the group is to reach a “rough consensus”, defined as a consensus that is achieved “when all issues are addressed, but not necessarily accommodated”.’
Ana Ulin, Rough Consensus And Group Decision Making In The IETF

When assessing rough consensus, humming allows a group to experience not only the level of agreement or disagreement but also the strength of opinions held (like Roman voting’s up or down thumbs). And if the session leader decides to hold further discussion, the group will have an idea of who holds strong opinions.

Voice voting

This technique for assessing consensus reminds me of voice voting at New England town meetings, which have been annual events in my home state of Vermont since 1762. But though loud “Aye” or “Nay” votes are certainly audible, humming makes strong opinions easier to detect.

Conclusion

RFC 7282 starts with an aphorism expressed by Dave Clark in 1992 on how the IETF makes decisions:

“We reject: kings, presidents and voting.”

“We believe in: rough consensus and running code.”

Replace “running code” with your organization’s mission, and you may just have the core of an appealing approach to decision-making in your professional environment.

And let us know in the comments below if you try using humming as a tool to assess consensus!

Video “Please Hum Now: Decision Making at the IETF” courtesy of Niels ten Oever
Image background by Robert Scoble

An introduction to participatory voting—Part 3: Public, semi-anonymous, and anonymous voting

Public, semi-anonymous, and anonymous voting: a photograph of children raising their hands in a classroom. Photo attribution: Flickr user gpforeducation

In the first two parts of this series on participatory voting at events, I introduced the concept and compared low-tech and high-tech approaches. Now, let’s explore an issue that should (but often doesn’t) determine the specific voting methods we choose: knowledge about how other participants have voted. In this post, I’ll explain the differences between public, semi-anonymous, and anonymous voting and when you should use each.

High-tech methods typically default to anonymous voting: i.e., we have no information on anyone’s individual vote. Audience response systems (ARSs) combine an audience voting method (a custom handheld device, personal cell phone/smartphone, personal computer, etc.) with a matched receiver and software that processes and displays responses. ARSs are so commonly used to provide anonymous voting at meetings today that many event planners and attendees are unaware that public voting is a simple and, in many cases, more useful alternative.

Public voting

Public voting methods allow a group to see the individuals who have voted and how they voted. (For a list of anonymous and public participatory voting techniques, see Part 2 of this series.)

In Part 1 of this series, I explained why using public voting techniques is key to creating truly participatory voting:

“Allowing participants to discover those who agree or disagree with them or share their experience efficiently facilitates valuable connections between participants in ways unlikely to occur during traditional meetings. Giving group members opportunities to harness these techniques for their own discoveries about the group can further increase engagement in the group’s purpose.”

It’s also worth noting that public voting offers follow-up opportunities to uncover group resources, interest, and commitment on specific action items from individual participants.

Anonymous voting

Anonymous voting informs us about a group’s collective opinion but hides individual opinions. As mentioned in Part 2, anonymous voting is certainly appropriate when we are exploring deeply personal or potentially embarrassing questions: e.g. “Who has or has had a sexually transmitted disease?” But how often is this necessary? In my experience, the vast majority of questions asked of a group during meeting sessions are not sensitive, and there is real value in participants’ discovery of others with like-minded and opposing views via public voting.

Some argue that anonymous voting is necessary to avoid a bandwagon effect, where people vote in a particular way because other people are doing so, rather than expressing their own opinion. Although no one can divine participants’ true beliefs, a facilitator who creates a safe environment for individuals to express any opinion will minimize groupthink during participatory voting.

For example, when I facilitate The Solution Room, a session that provides just-in-time peer support and answers to a pressing professional challenge, I ask participants to place themselves in the room to show how risky it feels to share the challenge they have chosen. As I do so, I say “I’ve had challenges where I’d be standing over here, and others where I’d be standing over there.” Sharing my experience that any position along the riskiness spectrum might be appropriate for me helps to support and legitimize each participant’s choice.

Semi-anonymous voting

Finally, there’s a form of participatory voting I call semi-anonymous: essentially, but not perfectly, anonymous. Two common examples are dot voting (described in detail in Chapter 49 of The Power of Participation: Creating Conferences That Deliver Learning, Connection, Engagement, and Action) and crowdsourcing techniques that involve group posting of written notes on walls or tables. Although in principle continuous observation of an individual participant could reveal their specific votes, such surveillance would be obvious, impracticable for multiple participants, and unlikely to occur in practice.

The next time you need to determine a group’s response to a question, take a moment to consider whether anonymous voting is really necessary. In the majority of cases, you’ll find that public voting is a better choice, allowing participants to learn more about each other while setting the stage for a deeper look at the issues uncovered.

Still have questions about public, semi-anonymous, and anonymous voting? Share them in the comments below!
