People felt like they were friends with Google, and they believed in the "Do No Evil" thing that Google said. They trusted Google more than they trusted the government, and I never understood that.
Cathy O'Neil

I think there's inherently an issue that models will literally never be able to handle, which is that when somebody comes along with a new way of doing something that's really excellent, the models will not recognize it. They only know how to recognize excellence when they can measure it somehow.
Cathy O'Neil

Obviously, the more transparency we have as auditors, the more we can get, but the main goal is to understand important characteristics about a black-box algorithm without necessarily having to understand every single granular detail of the algorithm.
Cathy O'Neil

We don't let a car company just throw out a car and start driving it around without checking that the wheels are fastened on. We know that would result in death, but for some reason we have no hesitation in throwing out some algorithms untested and unmonitored, even when they're making very important life-and-death decisions.
Cathy O'Neil

Because of my experience in Occupy, instead of asking the question, "Who will benefit from this system I'm implementing with the data?" I started to ask the question, "What will happen to the most vulnerable?" Or "Who is going to lose under this system? How will this affect the worst-off person?" Which is a very different question from "How does this improve certain people's lives?"
Cathy O'Neil