Tuesday, February 23, 2010

Assessment as Marketing

In a conversation last week with a big muckety-muck, I realized that there are two fundamentally different, and largely opposed, understandings of outcomes assessment in play. Which definition you accept will color your expectations.

The first is what I used to consider the basic definition: internal measures of outcomes, used to generate improvement over time. If you understand assessment in this way, then several things follow. You might not want all of it to be public, since the candid warts-and-all conversations that underlie real improvement simply wouldn't happen on the public record. You'd pay special attention to shortcomings, since that's where improvement is most needed. You'd want some depth of understanding, often favoring thicker explanations over thinner ones, since an overly reductive measure would defeat the purpose.

The second understanding is of assessment as a form of marketing. See how great we are! You should come here! The "you" in that last sentence could be prospective students being lured to a particular college, or it could be companies being lured to a particular state. If you understand assessment in this way, then several things follow. You'd want it to be as public as possible, since advertising works best when people see it. You'd pay special attention to strengths, rather than shortcomings. You'd downplay "improvement," since it implies an existing lack. And you'd want simplicity. When in doubt, go with the thinner explanation rather than the thicker one; you can't do a thick description in a thirty-second elevator pitch.

Each of these understandings is valid, in its way, but both use the same vocabulary to mean different things, with the result that people who should be working together sometimes talk past each other.

If I'm a governor of an economically struggling state, I want easy measures with which I can lure prospective employers. Look how educated our workforce is! Look at what great colleges your kids could attend! I want outcomes that I can describe to non-experts, heavy on the positive.

And in many ways, there's nothing wrong with that. When TW and I bought our house, we looked at the various public measures of school district quality that we could find, and used them to rule out some towns in favor of others. We want our kids to attend schools that are worthy of them, and we make no apologies for that. They're great kids, and they deserve schools that will do right by them. I knew enough not to put too much stock in minor differences in the middle, but the low-end outliers were simply out of the question. I can concede all the issues with standardized testing, but a train wreck is a train wreck.

The issue comes when the two understandings crash into each other.

I'm happy to publicize our transfer rates, since they're great. But too much transparency in the early stages of improvement-driven assessment can kill it, leading to CYA behavior rather than candor. Basing staffing or funding decisions on assessment results, which sounds reasonable at first blush, can also lead to meaningful distortions. If a given department or program is lagging, would more resources solve it, or would it amount to throwing good money after bad? If a given program is succeeding, should it be rewarded, or should it be considered an area of relatively less need for the near future? (If you say both need more resources, your budget will be a shambles.) Whichever answer seems to open the money spigot is the answer you'll get from everybody once they figure it out.

Until we get some clarity on the different expectations of assessment, I don't see much hope for real progress. Faculty won't embrace what they see as extra work, especially if they believe -- correctly or not -- that the results could be used against them. Governors won't embrace what they see as evasive navel-gazing ("let's do portfolio assessment!") when what they really need is a couple of juicy numbers to lure employers. And the public won't get what it really wants until it figures out what that is.