I think there's inherently an issue that models will literally never be able to handle, which is that when somebody comes along with a new way of doing something that's really excellent, the models will not recognize it. They only know how to recognize excellence when they can measure it somehow.

Cathy O'Neil

Obviously the more transparency we have as auditors, the more we can get, but the main goal is to understand important characteristics about a black box algorithm without necessarily having to understand every single granular detail of the algorithm.

Cathy O'Neil

An insurance company might say, "Tell us more about yourself so your premiums can go down." When they say that, they're addressing the winners, not the losers.

Cathy O'Neil

Because of my experience in Occupy, instead of asking the question, "Who will benefit from this system I'm implementing with the data?" I started to ask the question, "What will happen to the most vulnerable?" Or "Who is going to lose under this system? How will this affect the worst-off person?" Which is a very different question from "How does this improve certain people's lives?"

Cathy O'Neil