Comparing pre-conference and at-conference crowdsourcing
I’ve written about my skepticism of attempts to crowdsource session topics before a conference. After running edACCESS 2010 last month, I realized I could analyze the success of our pre-conference crowdsourcing. So here is a real-life comparison of pre-conference and at-conference crowdsourcing.
Conference details
edACCESS is a four-day conference for information technology staff at small independent schools that has been held every year since 1992. In 2010 there were 47 attendees, 9 of whom had not attended before. The conference uses the Conferences That Work event design.
Before the conference, we solicited suggestions for session topics via messages to the edACCESS listserv and email to registrants. Ideas could be posted on the edACCESS wiki, which requires registration. Some of the new attendees did not register on the wiki before the conference, but all returning attendees were registered (since most of each year's conference content and discussion is posted there).
Seven attendees (15%) posted a total of 14 topic suggestions to the wiki before the conference.
In contrast, during the conference roundtable, 147 topics were suggested, an average of over three per participant.
The schedule for edACCESS 2010 included a vendor exhibit, two predetermined sessions (an attendee-created Demo Showcase and a Web 2.0 Demo Tools Workshop), and nine one-hour time slots for peer sessions. Topics for the peer sessions were crowdsourced through the Conferences That Work peer session sign-up process, which yielded 78 suggestions.
We had space to run four simultaneous sessions in each of the nine time slots, a maximum of 36 sessions, and ended up scheduling the 33 most popular (and feasible) of the 78 suggestions.
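For the curious, here's the scheduling arithmetic as a minimal Python sketch. The counts come straight from the figures above; the variable names are mine, not part of any tool we actually used:

```python
# Peer session supply vs. capacity at edACCESS 2010
# (counts are from this post; variable names are illustrative)
time_slots = 9     # one-hour peer session time slots
rooms = 4          # simultaneous sessions per time slot
suggested = 78     # topics proposed at peer session sign-up
scheduled = 33     # most popular (and feasible) topics scheduled

capacity = time_slots * rooms  # 36 possible sessions
print(f"Capacity: {capacity} sessions; scheduled: {scheduled}")
print(f"Share of suggestions scheduled: {scheduled / suggested:.0%}")  # 42%
```

In other words, even with 36 available slots, fewer than half of the topics suggested at sign-up could make the schedule, which is why the sign-up process ranks topics by attendee interest.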
Findings
Of the 14 topics suggested before the conference, only 5 were chosen for sessions by attendees at the event. In other words, only 15% of the 33 sessions actually held were predicted in advance.
Of the 147 topics suggested during the conference roundtable, 26 were subsequently chosen for sessions. That's 79% of the final sessions. Interestingly, seven of the final session topics were not mentioned in the roundtable notes. It's quite common for participants to think of new topics after the roundtable is over. Once seen by other attendees at sign-up, some of these ideas turn into desirable sessions.
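To make the comparison explicit, here's the same kind of sketch for the hit rates, again using only the counts reported in this post:

```python
# Hit rates: pre-conference vs. at-conference crowdsourcing
# (all counts are from this post)
sessions_held = 33       # peer sessions actually run
wiki_hits = 5            # of 14 pre-conference wiki suggestions chosen
roundtable_hits = 26     # of 147 roundtable topics chosen

print(f"Pre-conference: {wiki_hits / sessions_held:.0%} of sessions")        # 15%
print(f"Roundtable:     {roundtable_hits / sessions_held:.0%} of sessions")  # 79%
print(f"Sessions from post-roundtable ideas: {sessions_held - roundtable_hits}")  # 7
```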
Conclusions
It’s hard to know how to improve our process of soliciting session ideas before the conference, short of forcing attendees to make suggestions during registration, which would not go over well with the typical edACCESS attendee. In my experience, the above analysis is pretty typical for the peer conferences I run. Having session topics and formats that fit participants’ needs is vital to the success of any conference. Given the poor showing of pre-conference topic crowdsourcing (and, by extension, of a conference program committee's efforts), I feel that having attendees brainstorm and then propose topics at the start of the event, as is done at Conferences That Work, is well worth the work involved.
I hope this comparison of pre-conference and at-conference crowdsourcing is informative. Do you think we should even be trying to crowdsource topics for a conference? Are you still skeptical of the utility of crowdsourcing topics at the beginning of a conference? What would you do to improve the success of crowdsourcing sessions before an event?