Wednesday, January 03, 2018

Reflections on “Weapons of Math Destruction”


Part of the joy of the winter break is that it’s a chance to read something longer than twenty pages.  I spent part of it reading Cathy O’Neil’s “Weapons of Math Destruction,” which is well worth reading if you haven’t already.

O’Neil was a quantitative analyst in the banking world for a while, until she grew disenchanted at the social uses to which her expertise was being put.  She saw sophisticated math being used to make the rich richer at the expense of everyone else.  This book is a form of penance, showing the rest of us cases in which Big Data gets used as a form of power.

(Readers of a certain age will hear echoes of Foucault in there, but don’t worry.  O’Neil’s writing style is far more lucid.)

The book is a series of chapter-length examples, but if you stick with it, you start to pick up the pattern.  O’Neil distinguishes directly-valid data from proxy data, warning us against the latter.  

Baseball offers easy examples of directly-valid data.  A runner reaches base, or does not.  A given swing results in a home run, or it does not.  Somebody wins each game.  That sort of data can lead to valuable insights, because it’s about what it says it’s about.  Sabermetrics, the sort of statistical analysis pioneered by Bill James and immortalized in Michael Lewis’ Moneyball, tends to work because it’s (mostly) based on solid data.  It’s far from perfect, but it can help.  For example, if a team knows that a particular hitter on another team always hits the ball to the right side of the field, it can shift its fielders to the right when he’s up.  
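
For the curious, on-base percentage really is just arithmetic over events everyone in the ballpark can see.  A minimal sketch in Python, using made-up numbers rather than any real player’s line:

    def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sac_flies):
        """Times on base divided by the standard OBP denominator."""
        reached = hits + walks + hit_by_pitch
        chances = at_bats + walks + hit_by_pitch + sac_flies
        return reached / chances

    # A hitter with 150 hits, 60 walks, 5 HBP, 500 at-bats, and 5 sac flies:
    print(round(on_base_percentage(150, 60, 5, 500, 5), 3))  # 0.377

Every input is something that actually happened on the field, which is the whole point.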

Baseball also offers an example of self-correcting experiments.  If a given team misreads the numbers, or takes a flyer on something wacky and fails badly, the numbers adjust accordingly.  For instance, Moneyball could be read two ways.  A literal reading would tell you that on-base percentage is the key stat for batters.  A fuller reading would tell you that at any given moment, some traits are overvalued in the market and some undervalued, and the first one to find an undervalued one is in a position to win.  OBP was simply an example.  When OBP became the new orthodoxy, it lost much of its competitive usefulness.

Proxy data is where things get squishy.  Proxy data, as she uses the term, refers to data that correlates with the desired trait, but isn’t the trait itself.  She gives the example of insurance companies basing rates on the zip codes where different customers live, on the theory that birds of a feather flock together.  And it’s true that you can find geographic patterns in the data.  But the patterns don’t tell you about any particular person’s risk, and they’re often reflections of other factors -- race and income, notably -- that have the effect of placing extra burdens on the people with the fewest resources.  If living in a low-income neighborhood raises your insurance rates, well, who tends to live in low-income neighborhoods?  
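
To make the distinction concrete, here’s a toy sketch in Python -- the rates and neighborhood names are entirely invented, not real actuarial data -- of how a zip-code proxy can price an individual driver without ever looking at that driver’s own record:

    # Toy illustration with invented numbers.  The proxy premium looks only
    # at where you live; your own driving record never enters the calculation.
    ZIP_CLAIM_RATE = {
        "low_income_zip": 0.09,   # historical claim rate for the neighborhood
        "high_income_zip": 0.04,
    }

    def proxy_premium(zip_code, base_rate=1000):
        """Price set purely from the zip-level claim rate (the proxy)."""
        return base_rate * (1 + 10 * ZIP_CLAIM_RATE[zip_code])

    careful_driver = {"zip": "low_income_zip", "accidents": 0}
    reckless_driver = {"zip": "high_income_zip", "accidents": 3}

    print(proxy_premium(careful_driver["zip"]))   # 1900.0 -- the safer driver pays more
    print(proxy_premium(reckless_driver["zip"]))  # 1400.0 -- despite three accidents

The accidents field never gets used, which is exactly the problem.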

O’Neil points out the irony that relatively strict regulation of the factors that can go into calculating credit scores has had the effect of encouraging the wanton generation of unregulated, rogue scores that are often far more pernicious.  What the market wants, the market finds a way to get.

At times, she falls into the trap of calling for transparency as a solution.  I get the temptation, but it falls short as a solution in a couple of ways (both of which she actually identifies, in passing).  The first is familiar to any customer of a credit card: if you bury something in twenty pages of two-point legalese, you may have “disclosed” it for compliance purposes, but for all practical purposes it’s still hidden.  That has the effect of neutering much regulation.  The second is familiar to folks who’ve navigated the admissions game at competitive universities.  If you disclose the proxies, people can beef up the proxy scores at the expense of what the proxy is supposed to represent.  This is the kid who joins six clubs and two teams without really getting beneath the surface of any of them.  Alternatively, in the context of performance-based funding for colleges, this is the college that tweaks its policies so that students who place into remedial classes don’t count in its graduation rate.  

Ultimately, there’s no neutral technical fix, because it’s basically a political problem.  If Big Data is about power, and the incentives are there to abuse it, then we can expect the powerful to abuse it for their own gain.  It would be surprising if they didn’t.  And appeals to conscience only work among people who have consciences.  We’ve seen that some very powerful people don’t.  

Higher education certainly isn’t immune to what she calls WMDs.  Performance-based funding is an easy example: the states that have adopted it haven’t seen meaningful gains.  Instead, they’ve seen a fair amount of system-gaming as institutions do what they have to do to survive.

The ideal solution would be to find the stat that comes closest to reflecting the point of higher education, in much the same way that on-base percentage did in baseball.  But to do that, we’d have to define the point of higher education.  A baseball game has a clear winner, but a higher education system may not.  That’s where politics come roaring back.  O’Neil flags some hazards nicely, but ultimately you don’t solve a political problem with better math.  You solve it with politics.  Get the politics right, and the math will follow.