Hybrid event architecture ideas sparked by Event Camp Twin Cities 2011

I expect much will be written about the problems encountered with communications with the remote pods at Event Camp Twin Cities 2011 last week. Rather than concentrate on what went wrong, I thought I’d share some ideas on hybrid event architecture that grew from my on-site experience and a long conversation with Brandt Krueger, who produced the event, the following morning. Without Brandt’s explanations I wouldn’t have been able to write this post, but any errors or omissions are mine and mine alone. I am not a production professional, so I write this post in the spirit of provoking discussion and input from those who have far more experience in this area.

Let’s start with a brief description of the set-up at Event Camp Twin Cities. As with many hybrid events, there were three audiences:

  • The local on-site attendees in Minneapolis
  • Seven “pods” (small groups of people that gathered in Amsterdam, Philadelphia, Toronto, Vancouver, Silicon Valley and two corporate headquarters)
  • Individual remote audience members

Both the pods and the individual remote audience members viewed the activities in Minneapolis via Sonic Foundry’s Mediasite platform. This product provides, via a browser-embedded player, A/V from the event (e.g. a presenter speaking) alongside additional media feeds (e.g. presenter slides). The flexibility of this technology, however, comes with a cost that may have contributed to the problems encountered at Event Camp Twin Cities: namely that the “real-time” feed delivered to remote attendees was delayed approximately twenty seconds.
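For intuition about where such a delay comes from, here's a back-of-the-envelope sketch. Every number in it is an assumption of mine rather than a Mediasite specification; the general point is that most webcast latency accumulates in encoder and player buffers, not in network transit.

```python
# Rough, illustrative estimate of end-to-end webcast latency.
# Every number here is an assumption for the sake of the arithmetic,
# not a measured Mediasite figure.
encode_buffer_s = 4          # encoder buffers a few seconds before sending
segment_length_s = 4         # stream is chunked into fixed-length segments
player_buffer_segments = 3   # player buffers several segments before playing
network_transit_s = 1        # transit time is usually the smallest piece

viewer_delay_s = (encode_buffer_s
                  + segment_length_s * player_buffer_segments
                  + network_transit_s)
print(f"Estimated viewer delay: about {viewer_delay_s} seconds")
```

With these made-up but plausible buffer sizes, the total lands in the same ballpark as the delay observed at the event, which is why a buffered webcast feed cannot double as a real-time conversation channel.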

During Event Camp Twin Cities 2011, individual remote audience members viewed the Mediasite feed and interacted with the proceedings via Twitter as a backchannel, ably assisted by remote audience host (aka virtual emcee) Emilie Barta. From the accounts I’ve heard, this channel worked well.

The pods also viewed the Mediasite feed and could interact via Twitter. To provide additional interactivity for the pods, Event Camp Twin Cities set up live Skype calls to the pods, with several pods clustered on one Skype call. When the local participants wanted to have a real-time conversation, the plan was to switch to Skype, turning off the Mediasite feed, very much in the same way that a caller to a radio show is asked to turn off their time-delayed broadcast radio once they’re on the phone.

For reasons that are not clear to me, this switchover process did not work well at Event Camp Twin Cities. Again, rather than concentrate on what happened and why, I’d like to suggest another architectural approach for the pods’ experience that may prevent similar problems in the future.

Instead of switching between delayed and real-time channels for the pods, I think that pod <—> local communications should be set up only via real-time channels. One reason that the (delayed) Mediasite feed was used for the pods at Event Camp Twin Cities is that it provided a convenient aggregation of the two broadcast sources needed for any event these days—A/V of what is going on at the venue plus a channel for slides or other supporting materials. That works for the individual remote audience, which only interacts with the event via Twitter. But when you want to have significant real-time, two-way communication between pods and the main event, you have to handle the complexity involved in switching between delayed and real-time channels on the fly.

Here’s how my approach would work. All the pods would receive a single real-time broadcast channel for supporting materials (slides, movies etc.) created at the event. This can easily be done using one of the “screen-sharing” solutions in wide use today; the A/V from a “master” computer would be broadcast to each pod. And then each pod would be linked to the event via its own two-way channel. This could be a Skype or other videoconference call, or perhaps a product like Google+ Hangouts could be used.
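As a sketch of the routing this implies (the names are illustrative, not any real product's API), each audience type would receive a fixed set of channels:

```python
# Hypothetical channel routing for the proposed architecture.
# Audience and channel names are illustrative only.

ROUTING = {
    "on-site":           {"support": "live-broadcast", "interaction": "in-room"},
    "pod":               {"support": "live-broadcast", "interaction": "two-way-videoconference"},
    "individual-remote": {"support": "delayed-webcast", "interaction": "twitter"},
}

def feeds_for(audience):
    """Return the channels a given audience type receives."""
    return ROUTING[audience]

# Pods receive only real-time channels, so there is nothing to switch between:
assert "delayed-webcast" not in feeds_for("pod").values()
```

A real deployment would of course involve A/V signal routing, not a lookup table; the table just makes the channel assignments explicit.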

With this architecture, the pods would not receive a delayed feed (i.e. no Mediasite feed), so no switching between delayed and live would be necessary. (Individual remote audience members would continue to receive the delayed feed, as before.) The main event site would need to produce the audio feed, so that sound from the pods would not be distracting, but the complexities of switching between two channels on the fly would be eliminated using this approach.
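To make the audio point concrete, here's a toy sketch of the "mix-minus" technique production teams use for exactly this situation: each pod's return feed is the full mix minus that pod's own contribution, so a pod never hears itself echoed back. The source names and sample values are invented, and real systems do this in an audio mixer rather than in code.

```python
# Toy "mix-minus" illustration: each pod's return audio is the sum of all
# sources EXCEPT that pod's own, so pods don't hear themselves echoed.
# Source names and sample values are made up.

def mix_minus(sources, exclude):
    """Sum all audio sources except `exclude`, sample by sample."""
    keep = [samples for name, samples in sources.items() if name != exclude]
    return [sum(column) for column in zip(*keep)]

sources = {
    "main-room":     [0.5, 0.2, -0.1],
    "pod-amsterdam": [0.1, 0.0,  0.3],
    "pod-philly":    [-0.2, 0.4, 0.0],
}

# The feed returned to the Amsterdam pod omits Amsterdam's own audio:
amsterdam_return = mix_minus(sources, "pod-amsterdam")
```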

I think that this approach might be an improvement over the design used at Event Camp Twin Cities 2011, as it would allow easier spontaneous real-time interaction with the pods while eliminating one potential source of problems during the event. I await with interest any comments by those who understand the issues better than I.

Hybrid event production professionals, hybrid event attendees, in fact all event professionals: what do you think?

Thanks Ruud Janssen for the photo of the production studio at Event Camp Twin Cities 2011!

16 thoughts on “Hybrid event architecture ideas sparked by Event Camp Twin Cities 2011”

  1. Hi Adrian, nice post and a ‘constructive perspective’.
    Your setup sounds solid. Though if I understand you correctly, the pods would then not be able to see each other, right?
    In that case we (in the Amsterdam pod) would have never had the chance to engage in a hilarious conversation with @GregRuby in the PhillyPOD: http://twitter.com/#!/GregRuby/status/106798104967266305

    On the other hand, sacrificing a laugh for the possibility to become a true participant in the event would be an easy choice. We felt a little left out.

    Though the moment we were promoted to the main stage created (at least for us) the best 15 minutes of live stream ‘ever captured’: http://www.youtube.com/watch?v=jE9EFRHwUmo

    (without focussing on what went wrong, we might still have a good laugh about it!) 🙂

    Cheers, Gerrit

    1. Gerrit, I’m hazy about the details of how the realtime two-way communications with the pods would be implemented. In principle, any reliable (does this exist?) video conferencing solution could be used. The essential features of what I’ve proposed are that the solution for pod participant interaction is realtime, and that there’s a separate broadcast channel for content support.

      As for the 15 minutes you refer to, they encompassed one of the most curious emotional experiences I’ve had: a mixture of feeling really bad for Sam, who soldiered on in the face of adversity, while simultaneously being unable to completely suppress laughter so intense that I cried. No improv troupe, however gifted, could have come up with the way the situation built while slowly deteriorating. Truly a remarkable experience that held many lessons for us all.

      1. Hi Adrian,

        Thanks for attending Event Camp Twin Cities and running the Pecha Kucha presentations. We appreciated having you there.

        If I understand your response to Gerrit – you are proposing exactly what we did. We had a live connection between the pods and a secondary channel for speaker support materials. The only difference is that we sent the Pods the “at home attendee” version of the speaker support materials – which was delayed by about 20 seconds.

        1. Thanks for the clarification Sam. What I’m saying in the post is that removing the “at home attendee” stream from the pods and replacing it with a live video conferencing channel plus a separate broadcast feed of supporting materials could help to both prevent problems as well as simplify switching to and interacting with the pods during the event.

      2. Adrian, since Mediasite is a webcasting product (correct me if I’m wrong here), I believe that there will always be a delay of 20-30 seconds. In fact, even radio can have a delay of a few seconds, no? Delivering real-time communications may require a different technology solution and possibly more cost to achieve the desired result.

        I think there is also a physical component to how pods were displayed in the physical space. Maybe if the pods were “streamed” up to the presentation area more often and a “phone line” connected the two, then the delay could’ve been eliminated? In the end, this is what Sam resorted to in order to speak with the Amsterdam audience =)

        1. I think you’re making my point Cece. I’m suggesting replacing the delayed feed with a real-time video conference channel for the pods. Given that you still need the delayed feed for the individual remote audience members, then yes, there will be an additional expense providing the real-time channels.

          One point I haven’t mentioned so far is that each pod would end up with two video feeds, probably displayed on separate screens: one for the video conference channel and one for the broadcast supporting media (e.g. slides).

  2. I think this is a pretty fair assessment and a constructive solution.  I mentioned several times in my conversation with Adrian that I feel like all the pieces are there and it’s just a matter of getting them in the right order.  I feel like Einstein staring at a blackboard of technologies and audience experiences, and we’re so very close to landing on E=mc² for hybrid meetings.  My initial response to your setup is that I’m not sure it’s scalable.  Signal routing is proving to be one of the tricky bits of all this.  When you fix one thing, another thing breaks.  Getting all of the things you want – seeing each other, seeing what’s in the room, seeing what’s being presented, near-instant interaction – is what’s proving difficult.  Any one, sure.  Any three even.  But all of them?  There’s the nut that’s yet to be cracked on anything more than a very small scale.

    The guys tried something, and it didn’t work for a variety of reasons.  But, they TRIED it!  I’ll be posting soon on the tech, and will be speaking at Event Camp Europe next week about some of what we’ve learned.  They’re trying something different yet there.

    In the meantime, I’ve started adding some of my own reflections, specifically as it relates to sponsorship.  http://wp.me/pSVmS-66, for good or for bad.

    You’ve mentioned Sonic Foundry by name here, so I want to encourage caution, and if you read my post you’ll understand why.  Pods have been done, and done effectively, using Mediasite.  You have to be keenly aware as a speaker and ready to deal with the delay.  I would argue that the minimal delay is part of what makes their solution so stable, so it’s a trade-off.  I use their service with our own clients and would recommend them in a heartbeat.

    1. Good point about the scalability Brandt. I think a broadcast channel for the presentations would work, since everyone should see the same thing, and what is shown would be produced at the main site. But I wonder if a product like Google Hangouts would solve the videoconferencing needs of the design, at least for up to nine pods. As we discussed, we’d need to research whether audio muting can be controlled by the main site tech team.

  3. I do not want to comment on the technology because I do not understand it at all.  I’m going to encourage Jeff Halligan from Dyventive who was our Pod sponsor to comment.  They have a virtual event platform that I’m dying to try out. 

    Going forward we need to consider the Pod as a separate audience.  We are not live attendees obviously, but we are not virtual attendees either.  We are a hybrid of the two.  I have no doubt, from viewing the Twitter stream, that the virtual attendees may have had the best experience of anyone.  But I can speak for the Philly Pod and say we were not at all engaged with the event.  We made the best of it and made our own fun…we actually had a lot of fun! 

    If you are having pods at your event you must schedule white space for them.  You must stick to a schedule and let them know what that schedule is.  You need to give them more information up front because they cannot simply raise their hand and ask a question when they do not understand something.  I would suggest having Pod leaders in on a conference call meeting two weeks before the event to run through the schedule.  If there are going to be games, we need to understand how they are played.  We need one document or preferably one website we can go to, to get all the information and links needed to execute the event.  Not five different e-mails and documents.  But the whitespace is very important.  We are gathered together in a group and we naturally want to get to know one another and network.  Trust me, the live audience wants the same thing.  Why are meeting planners so terrified of whitespace?

    As for the hilarious 15 minute video that everyone keeps referring to.  It was not hilarious for us in Philly.  It was a huge disappointment.  We were on the phone, on twitter, texting…just hoping someone would try to connect with us.  Not only did they not try to connect with us…no one even responded to our plea.  It was a fitting end to the rest of the two days. 

    1. Thanks Traci for your detailed comments from the point of view of the Philly pod (which Traci organized). Everything you mention makes sense to me; your observations show how important careful advance preparations are in order to create a successful complex remote audience experience.

      Several of the game groups, including mine, decided not to complete some of the game sections so we could get to know each other better. In my opinion, the game was too elaborate for the time available; the mode switching from conference sessions to game and back again was exhausting for me by the end of the first day and that’s why I (and others) backed out of participating.

      I agree 100% about whitespace at events. At my workshops on participation techniques there are several times when people are working silently by themselves, and many participants tell me how unusual this is, and how much they appreciate the opportunity to reflect and then share with others.

  4. Thanks for the opportunity to offer an opinion. One point of clarification, as it pertains to broadcast technology like Mediasite or my company’s platform.  The 20 to 30 second delay is not a hard and fast rule.  The majority of the delay is not occurring somewhere in cyberspace.  It is occurring at the computer level, where the computer is decoding (unpackaging data).  The computer is receiving the data virtually in real time.  The length of the delay is affected by how a stream is encoded. 20 seconds, as a rule, is a really long time.  But the delay does vary from computer to computer.
     
    As it pertains to real time communication between pods, things would have been much smoother if you had chosen a hosted webcasting technology.  A hosted technology allows the webcasting provider to have an individual, behind the curtain, controlling pod cameras and microphones, muting and unmuting, and granting microphone rights, eliminating a large % of the problems seen in “the video”.
     
    I commented earlier in the week that I thought that it was a little unfair to blame the pods.  I think that my point was missed.  By using a free “out of the box” technology like Skype, you are putting too much control in the hands of the individual pods. By using any, not just mine, hosted technology and preferably having dedicated pod-site technicians, you run a far smaller risk. 

    I hope that everyone stays positive and is not discouraged by the experience.

  5. Adrian-

    If a planner is looking to do a flawless hybrid event, hire a production company. Have dedicated people for online communication and production from the get-go. It is more about planning, preparation and of course the almighty budget.  We have been producing remote feeds for years.  This isn’t a technological problem; this is event/meeting planning 101.  Sam and Ray are creative forward thinkers; they just need an execution team to make sure that their vision happens.  It was really disappointing for me and I know next year Sam and Ray will make the adjustments and have a kick ass event.  If a few people found out about new stuff and ideas this year then it was worth it. 

    mike

  6. Thanks Adrian for the post – and to all so far for the dialogue.

    I thought I’d take a stab at answering a few questions on how hybrid+pods+Mediasite work. I’m definitely not uber-techy – but this is how I got my head around it.

    – I tend to think of the conference HQ and Pod as being in an audio and video handshake, and once it starts happening, the webcasting technology – in this case Mediasite – is like the news crew, there to broadcast the moment to the world over the web. The news crew doesn’t make the handshake happen, and it isn’t required for the shakers of the hands to hear or see each other. But with it, all the other people who are interested in the fact that it is happening are now able to watch and listen, live and later.

    – Mediasite is absolutely, positively, by design a webcasting platform. It’s purpose-built for one-to-many live streaming, with audio, video and graphics synchronized for real-time playback or to watch on-demand.

    – That said, Mediasite will suck in and stream out anything you can show on a laptop or shoot with a videocamera. For pods, the event producer just flips the source from presenter(s) at a podium to presenter(s) in a pod, and that’s what the remote or on-demand viewer sees. For Event Camp Twin Cities 2011, you could even think of Emilie and Glenn in the studio as a pod. Flip the video switch and you go from Sam and Ray at ECTC-HQ to Emilie and Glenn in kind of a MN-based Pod.

    – For “it-has-to-work-or-the-world-comes-to-an-end” hybrid events with pods, clients will use Mediasite + videoconferencing (http://en.wikipedia.org/wiki/Videoconferencing). Videoconferencing systems (and I’m talking way beyond Skype here – the more sophisticated ones involve hardware and software, and granted, cost $) create a dedicated connection between two (or more) endpoints (a technical term for say MN and the Philly or Silicon Valley pods) that provides reliable, real-time interaction. But note, audio and video are bouncing back and forth between the endpoints within the system, and that system alone. That’s the handshake.

    – The visual aids (like PowerPoint or Keynote) between HQ and pod are shared either through the videoconferencing system (again, the more sophisticated ones) or yet another kind of technology: remote desktop or desktop sharing software (http://en.wikipedia.org/wiki/Desktop_sharing).

    – Lastly, once all that other interactivity is established, that’s when Mediasite grabs on to what’s happening and streams the handshake – video (either HQ or Pod or both split-screen), audio (real-time between pods, they don’t need Mediasite to hear each other) and slides – out to the remote attendees. 

    Hope that helps. Here are some examples in case anyone wants to see more – they’ll play from the moment where the pod interaction begins. [Note that none of what I’ve blathered on about speaks to quality. That’s subject to all the tech above and also relies on a bunch of other tech I didn’t cover here but people like Brandt and Mike and Heroic know all about – the caliber of the videocamera, microphone, audio mixer and even the skills of your camera operator.] 

    Bottom line – it’s way complicated, and even the top-of-the-line systems can be finicky, but it’s the future, and when it works, it rocks.

    Event Camp Twin Cities – http://bit.ly/q2xj9l
    Event Camp National – http://bit.ly/oRQ9Ya
    Technical University Delft – http://bit.ly/qJyvtI
    Northern Michigan University with President – http://bit.ly/pZdCMR

  7. Ves Campion and Dennis Fernandes, co-founders of StreamGate in Sydney Australia, handled the technical streaming for Event Camp Downunder 2012.
    Ves’s blurb:
    We used three streaming encoders. Two of the encoders were dedicated to linking up the remote pods, point-to-point, using Vidyo as our videoconference technology. The third did a continuous live stream (out only) to the world. We leap-frogged the point-to-point encoders so that while one pod was being beamed in (two-way & realtime) the second pod was hooked up and ready to go live as soon as the current pod presentation was complete. We had two vision mixers switching between signals. What went globally to the world was not what we wanted to have here locally. So the realtime two-way interaction on screen in Sydney had a mix of local cameras, whereas the live stream had a mix of everything including the pod feeds. We also needed to do a complete mix-minus of the local sound, whereas the live stream needed the full audio mix. A nice challenging event that we were proud to say was a complete success.
    Ves Campion

    1.  Ves, thank you for providing a description of how the streaming was done at Event Camp Downunder.

      Although I’m not technically savvy enough to understand or appreciate the details of what you did, one point stands out for me. You provided different streams for the local and global audiences. This seems to be a more sophisticated approach than that used at Event Camp Twin Cities, allowing the local attendees an experience tailored to the room rather than the broadcast mix. I can see that this could be desirable, at the cost of additional complexity.

  8. Thanks for sharing the lessons learned. A big challenge is to make the people at home feel like first-class citizens when everyone else is interacting in physical space. Another solution is to broadcast the same presentation online and in-person then break out into small group discussion both online and in-person. Then folks don’t feel like they are missing out. That’s what we’ve learned with QiqoChat.
