Why measurable outcomes aren’t always a good thing

On measurable outcomes: xkcd’s self-referential comic #688, “Self-Description,” has three panels whose contents depend on the contents of every panel, including themselves. The first panel’s pie chart shows the fraction of black ink in “this image,” which refers to the entire comic. The second panel’s bar chart displays the amount of black used in each panel—which makes it the panel that uses the most black. The third panel is a scatter plot labeled “Location of black ink in this image”: the whole comic scaled proportionally to fit the axes of a first-quadrant Cartesian plane, so this panel contains an image of itself containing an image of itself, ad infinitum.

What could be wrong with requiring measurable outcomes?

“Enough of this feel-good stuff! How do we know whether people have learned anything unless we measure it?”
—A little voice, heard once in a while in learning designers’ heads

Ah, the lure of measurement! Yes, it’s important. From a scientific perspective, a better understanding of the world we live in requires experiments that quantify properties in a statistically meaningful and repeatable way. Science has no opinion about ghosts, life after death, or astrology, for example, because we can’t reliably measure the attributes associated with them.

The power of scientific thinking became widely evident at the start of the twentieth century, and it was probably inevitable that it would be applied to management. The result was the concept of scientific management, developed by Frederick Winslow Taylor. Even though Taylorism is no longer a dominant management paradigm, its Victorian-era influence on how we view working with others persists to this day.

But we can’t measure some important things

I’m a proponent of the scientific method, but it has limitations because we can’t measure much of what’s important to us. (Actually, it’s worse than that—often we aren’t even aware of what’s important.) Here’s Peter Block on how preoccupation with measurement prevents meaningful change:

The essence of these classic problem-solving steps is the belief that the way to make a difference in the world is to define problems and needs and then recommend actions to solve those needs. We are all problem solvers, action oriented and results minded. It is illegal in this culture to leave a meeting without a to-do list. We want measurable outcomes and we want them now…

…In fact it is this very mindset, one based on clear definition, prediction, and measurement which prevents anything fundamental from changing.
—Peter Block, Community: The Structure of Belonging

One of my important learning experiences occurred unexpectedly in a workshop. A participant in a small group I was leading got furious after something I had said. He stood up and stepped towards me, shouting and balling his fists. At that moment, to my surprise, I knew that his intense anger was all about him and not about me. Instead of my habitual response—taking anger personally—I was able to effectively help him look at why he had become so enraged.

There was nothing measurable about this interchange, yet it was an amazing learning and empowering moment for me.

The danger of focusing on what can be measured

So, one of the dangers of requiring measurable outcomes is that it restricts us to concentrating on what can be measured, not what’s important. Educator Alfie Kohn supplies this example:

…it is much easier to quantify the number of times a semicolon has been used correctly in an essay than it is to quantify how well the student has explored ideas in that essay.
—Alfie Kohn, Beware of the Standards, Not Just the Tests

Another reason we fixate on assigning a number to a “measured” outcome is that doing so lets people feel they can show they’ve accomplished something, masking the painful reality that they often have no idea how to honestly measure their effectiveness.

Measured learning outcomes can be relevant if we have a clear, performance-based target. For example, we can test whether someone has learned and can apply cardiopulmonary resuscitation (CPR) by testing them in a realistic environment. (Even then, fewer than half of course participants can pass a skills test one year after training.)

This leads to my final danger of requiring measurable outcomes. It turns out that measurements of learning outcomes aren’t reliable anyway!

For nearly 50 years measurement scholars have warned against pursuing the blind alley of value added assessment. Our research has demonstrated yet again that the reliability of gain scores and residual scores…is negligible.
—Professor Trudy W. Banta, A Warning on Measuring Learning Outcomes, Inside Higher Ed

Given that requiring measurable outcomes often inhibits fundamental change and is of dubious reliability, I believe we should be considerably more reluctant to insist on including them in today’s learning and organizational environments.

[This post is part of the occasional series: How do you facilitate change? where we explore various aspects of facilitating individual and group change.]

Image attribution: xkcd

Learning is messy

Learning is messy.

Learning is messy: illustration of the myth and reality of success and learning. On the left, a straight arrow represents what people think the path to success looks like; on the right, an arrow with a messy, detouring middle represents what the path to success really looks like. Sketch attribution: Babs Rangaiah of Unilever (“& learning” added by me)

Johnnie Moore wrote about this sketch: “I think it captures very succinctly the perils of retrospective coherence – the myriad ways we tidy up history to make things seem more linear.” And: “I think learning needs to be messier; amid all those twists and turns are the discoveries and surprises that satisfy the participant and help new things stick.”

Great points, Johnnie, and I’d like to add one more. Models of success and learning like the one on the left lead to tidy, simplistic conference models (with those deadening learning objectives). When we embrace the reality of messy and/or risky learning, embodied by the sketch on the right, we become open to event designs that mirror this reality and provide the flexibility and openness to address it.

I’ve been designing and facilitating participant-driven and participation-rich conferences for over thirty years. It’s true that carefully prepared broadcast-style sessions can provide important learning from lectures by experts to a less well-informed audience. But, in my experience, most of the deepest learning that occurs at events is unexpected. It’s a product of the serendipity that interactions and connections create. And the event’s design facilitates (or restricts) the level of serendipity that is possible.

That’s why fundamental learning is messy.


Why requiring learning objectives for great conference presentations sucks

Photograph of a whiteboard on which is written: “Learning Outcomes: All will have understood how decay is caused. Most will have understand [sic] the importance of dental care. Some will be able to imagine themselves as a tooth.” Photo by Flickr user orange_squash_123

I have been filling out quite a few conference presentation proposals recently, and I’ve begun to notice a pattern in my behavior: my mood changed whenever I had to fill out the session’s learning objectives. (These are statements of what attendees will be able to do by the end of the session.)

Specifically, every time I had to fill out the learning objectives for a proposal I got really, really annoyed.

Over the years I’ve found that paying attention to patterns like this is nearly always a learning experience for me. And I had just watched Chris Flink’s TEDx talk on the gift of suckiness, where he makes a great case for exploring things that suck for you…

…so I reluctantly delved into why I started to feel mad when required to write things like “attendees will be able to list five barriers to implementing participant-driven events”.

At first, I wondered whether my annoyance at having to come up with learning objectives (with active verbs, please, like these…)

Image: a list of suggested active verbs for writing learning objectives.

was because I was a sloppy presenter who hadn’t thought about what my attendees wanted or needed to learn. I imagined the conference program committee wagging their finger at me. Or sighing because they’d seen this so many times before. Listing learning objectives was forcing me to face what I should have thought about before I even suggested the session, and I didn’t like being confronted with my lack of planning.

And then I thought, NO. I DO have goals for my sessions. But they’re much more ambitious goals than having participants be able to regurgitate lists, define terms, explain concepts, or discuss issues.

I want to blow attendees’ minds. And I want to change their lives.

OK, I admit that would be the supreme goal, one that I’m unlikely to achieve most of the time. But it’s a worthy goal. If I can make some attendees see or understand something important in a way that they’ve never seen or understood before so that they will never see or understand it in the same way again—now that’s worth striving for.

Here’s an imaginary example (not taken from my fields of expertise). Suppose you are evaluating two proposed sessions on the subject of sexual harassment in the workplace. The first includes learning objectives like “define and understand the term sexual harassment”, “identify types of sexual harassment”, and “learn techniques to better deal with sexual harassment”. The second simply says, “People who actively participate in this session are very unlikely to sexually harass others or put up with sexual harassment ever again.”

Assuming the second presenter is credible, which proposal would you choose?

Learning objectives restrict outcomes to safe, measured changes to knowledge or competencies. They leave no place for passion, for changing worldviews, or for evoking action.

That’s why requiring learning objectives for great conference presentations sucks.

What’s your perspective on learning objectives?