No Safe Zone

Max
2 min read · Apr 21, 2021

In chapter 9 of Cathy O’Neil’s Weapons of Math Destruction, the discussion turns to people’s ability to get insurance. This chapter is very similar to the previous one about credit, since credit and insurance are closely related and rely on similar principles. Because of this, I have overlapping feelings about the two, shaped mainly by my own prior perceptions of the credit and insurance industries themselves.

The confusion of causation and correlation, mentioned at the beginning of the chapter, seems to be the thesis for why WMDs can be harmful and why they ought to be used only when necessary. In our class’s guest lecture from the IBM AI scientist, she mentioned clients who wanted to apply deep learning to problems that did not merit a deep learning solution. Even as someone who is optimistic more often than not, situations like this deeply worry me. Ignorance is one thing, but I feel a palpable aura of sinisterness when I hear things like this. I think this is because there is not only ignorance but also a perception that deep learning holds a hidden power above and beyond what can actually be achieved: the belief that deep learning not only ought to be applied to every problem, but that doing so produces better solutions for every problem it touches. This is a fundamental miscalculation about what deep learning, AI, and WMDs are and are not good at.

This belief leads me to a red-hot take: the robo-apocalypse depicted in movies is of course a farce, but something like it may well arrive instead through misdiagnosing problems as deep learning problems. People become slaves not to machines, but to their own religious adherence to what the machines say: the belief that machines SHOULD be obeyed rather than consulted as a specialized tool. That, perhaps, is the thesis of O’Neil’s entire book thus far.