Monday, February 13, 2017

Predictions, Probabilities, and Placebos


Do you have twice as high a chance of graduating from a college with a 50 percent graduation rate as from one with a 25 percent graduation rate?

No.  Because graduation isn’t something that just happens to you. You have an active role to play.

That’s where predictive analytics can get tricky.

They’re based on extrapolations from the past behavior of people in a defined situation.  The best ones are probabilistic, and when applied to large groups for planning purposes, they can be genuinely useful.  Knowing the typical pass rate for English 1 in the Fall can help us schedule the right number of sections of English 2 for the Spring.  Applied to any given person, though, they’re much less reliable.
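The section-planning use of a pass rate boils down to a few lines of arithmetic.  A minimal sketch, with entirely hypothetical enrollment figures, pass rates, and section sizes:

```python
import math

# All numbers below are hypothetical, for illustration only.
fall_enrollment = 600     # students taking English 1 in the Fall
typical_pass_rate = 0.70  # historical pass rate for English 1
continuation_rate = 0.85  # share of passers expected to take English 2
section_cap = 25          # seats per section of English 2

# Expected Spring demand for English 2, from historical rates.
expected_demand = fall_enrollment * typical_pass_rate * continuation_rate

# Round up so every expected student has a seat.
sections_needed = math.ceil(expected_demand / section_cap)

print(sections_needed)  # → 15
```

Note that this works precisely because it stays at the aggregate level: no individual student is told their predicted odds; the numbers only size the schedule.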

I’m wondering whether they could also become self-fulfilling, in a bad way.

This past weekend was a quick check-in for the Aspen fellowship program.  It was a terrific few days, other than jet lag.  We were lucky enough to have Claude Steele speak to us about stereotype threat and its effects on academic performance.  

Steele uses the term “stereotype threat” to refer to a nagging sense that “people like me aren’t good at x,” and the effects that the feeling has on doing x.  He has found that awareness of the negative stereotype diverts cognitive resources away from the task itself; in a wonderful illustration, he likened it to trying to relax and go to sleep just after learning there’s a snake in the house.  

That’s true whether the person being stereotyped has high self-esteem or low, high confidence or shaky.  The stereotype can become self-fulfilling by virtue of the extra weight that some people are forced to carry.

I suspect there’s a lot of truth to Steele’s findings.  A visceral sense of being out of place in whatever way can be distracting and draining.  As educators, we were asked to look for ways to reduce or counteract the power of negative stereotypes. Steele found that relatively simple interventions, like telling students before a math test that a particular test had been found to have no gender gap in performance, could become self-fulfilling in a positive way.  It was a sort of placebo effect.

And that’s when I started to wonder about predictive analytics.

Colleges that apply analytics to individual students, as opposed to large groups, could easily fall into the trap of inadvertently confirming negative stereotypes with numbers.  Steering students into the courses in which they’re statistically likeliest to succeed could easily mean recreating existing economic gaps, only with the blessing of science.  

Part of the point of open-access higher education is allowing people to defy the odds.  And in contrast to the usual ethic of transparency, there may be times when telling students the odds will actually hurt their performance.  There may be a results-based argument for deliberately introducing statistical placebos.  If you tell students that the odds are plausibly better than they actually are, that may become self-fulfilling.

At the very least, there may be a positive duty to withhold data that would do active harm.  

Had I thought about it quickly enough, I would have asked Steele about that, but it took a little while to simmer.  So instead, I’ll throw it open to my wise and worldly readers.  (And if anyone wants to pass it along to Prof. Steele, I’d love to hear what he has to say, too....)

If the information from predictive analytics could be discouraging, do we have a duty to withhold it?  Is there an ethical basis for a sort of statistical placebo?