I’ve written about my skepticism of attempts to crowdsource session topics before a conference. After running edACCESS 2010 last month, I realized I could analyze the success of our pre-conference crowdsourcing. So here is a real-life comparison of pre-conference and at-conference crowdsourcing.
Conference details
edACCESS is a four-day conference for information technology staff at small independent schools that has been held every year since 1992. In 2010 there were 47 attendees, 9 of whom had not attended before. The conference uses the Conferences That Work event design.
Before the conference, we solicited suggestions for session topics via messages to the edACCESS listserv and email to registrants. Ideas could be posted on the edACCESS wiki, which requires registration. Some of the new attendees did not register on the wiki before the conference, but all returning attendees were registered (since most of each year’s conference content and discussion is posted there).
Seven attendees (15%) posted a total of fourteen topic suggestions to the wiki before the conference.
In contrast, during the conference roundtable, 147 topics were suggested, an average of over three per participant.
The schedule for edACCESS 2010 included a vendor exhibit, two predetermined sessions (an attendee-created Demo Showcase and a Web 2.0 Demo Tools Workshop), and nine one-hour time slots for peer sessions. Topics for the peer sessions were crowdsourced through the Conferences That Work peer session sign-up process, which yielded 78 suggestions.
We had space to run four simultaneous sessions in each of the nine time slots and ended up scheduling the 33 most popular (and feasible) of the 78 suggestions.
Findings
Of the 14 topics suggested prior to the conference, only 5 ended up being chosen for sessions by attendees at the event. In other words, only 15% (5 of 33) of the actual conference sessions were predicted by attendees in advance of the event.
Of the 147 topics suggested during the conference roundtable, 26 were subsequently chosen for sessions, accounting for nearly 80% (26 of 33) of the final sessions. Interestingly, the remaining seven final session topics were not mentioned in the roundtable notes. It’s quite common for participants to think of new topics after the roundtable is over, and once other attendees see these ideas during sign-up, some of them turn into desirable sessions.
Conclusions
It’s hard to know how to improve on our process of soliciting session ideas before the conference, short of forcing attendees to make suggestions during registration, something that would not go over well with the typical edACCESS attendee. In my experience, the above analysis is fairly typical of the peer conferences I run. Having session topics and formats that fit participants’ needs is vital to the success of any conference. Given the poor showing for pre-conference topic crowdsourcing (and, by extension, for the efforts of a conference program committee), I feel that having attendees brainstorm and then propose topics at the start of the event, as is done at Conferences That Work, is well worth the work involved.
I hope this comparison of pre-conference and at-conference crowdsourcing is informative. Do you think we should even be trying to crowdsource topics for a conference? Are you still skeptical of the utility of crowdsourcing topics at the beginning of a conference? What would you do to improve the success of crowdsourcing sessions before an event?
I’m on my iPhone so sorry for the brief and choppy reply. Great post! Do you think that the scarcity and quality of the crowdsourced suggestions before the conference, versus during it, may also have had to do with your attendees’ comfort level with social media and their social technographics?
Hi Lara, it’s good to hear from you. The attendees at edACCESS 2010 are very comfortable with social media. (They use Facebook more than Twitter; I think this is because they are extremely busy professionally and prefer asynchronous social media channels.) In my judgment, the quality of the pre-conference session suggestions was similar to that of the suggestions made at the event. But there just weren’t many advance suggestions.
I find that conference registrants do not generally want to spend time thinking in advance about an event they’re going to attend months in the future. It’s this reality, rather than their familiarity or comfort with social media, that limits attempts to crowdsource events effectively in advance. In support of this observation, Tony Stubblebine of Crowdvine told me recently that significant activity on a conference social networking site begins a mere week before the event commences.
I may have misspoken or failed to make a subtle distinction. I prefer to say that the majority of activity comes in the week before a conference. We often see significant activity well before that; it’s just dwarfed by what happens in the final week. So the question I have for crowdsourcing proponents is, “How much early adoption do you need in order to be effective?”
If you consider a call for proposals a form of crowdsourcing (I would consider it at least a relative), then it’s clear that you can get significant activity well before an event.
Thanks for the clarification, Tony; I’m sorry if I misrepresented your observation. I think your statement that activity in the final week dwarfs what occurs earlier supports my experience that attendees aren’t thinking much about the event until just before it starts. My subjective experience is that much of the activity in the week before the event is about connecting with other attendees and settling logistics rather than discussing session details.
I agree that you can get responses in advance to a call for proposals, but I tried to show that at the conference I analyzed, responses to such requests were minimal and represented only a small subset of what attendees actually wanted to discuss. Such activity might be considered significant, but I have found that crowdsourcing at the start of the event provides far better results.