Doctor of Philosophy (PhD)
Stereotypes and implicit bias have long been objects of psychological study. Recently, philosophers, too, have attempted to understand stereotypes and implicit bias: what kinds of mental states or objects are they? Are stereotypes epistemically deficient or ethically suspect? How do implicit biases affect behavior, and how might these biases be changed? This dissertation takes up these and related questions, advancing accounts of stereotyping and implicit bias informed by evidence from psychology.
Chapter 1 sets the stage with a critical survey of the history and development of today’s most widely used measures of implicit bias. Although the histories of the different tests suggest that they measure different aspects of cognition, and the replication crisis in psychology has brought the methodology and results of many studies of implicit bias under suspicion, I argue that there is evidence that implicit bias is a genuine phenomenon with important real-world effects (for example, teachers meting out harsher discipline to Black and Hispanic students, or employers judging women’s resumes to be less impressive than similar resumes submitted by men; Greenwald et al., 2015). I also propose a functional understanding of implicit bias in contrast to the dominant mechanistic accounts.
Chapter 2 surveys the most prominent theoretical accounts of implicit bias on offer from philosophers and psychologists. I build on Chapter 1’s conclusions to argue that, despite the lively and interesting philosophical debate about the metaphysics of implicit bias, the tendency on the part of philosophers to extrapolate ethical recommendations from one or another of these accounts is misguided. First, I establish that these accounts are often presented with an eye toward recommending ameliorative measures based on the author’s metaphysical conclusions. By “ameliorative measures” I mean interventions designed to reduce the harm caused by implicit bias. This is prima facie a sensible approach, because knowing the structure of implicit biases can give clues about how to intervene to reduce or eliminate them (or their effects). For example, if implicit biases are associations, we could reasonably expect that combatting our biases would be better achieved by strengthening new, less biased associations, whereas if implicit biases are propositions, exposure to arguments that undercut those biases should prove most effective. I argue that in making these recommendations, philosophers have conflated the activation stage and the expression stage of implicit bias. Separating these is important because, as I will argue, activation is ethically irrelevant; it is the expression of implicit bias that carries ethical weight. I consider the case of moral scrupulosity to help make this point. Ultimately, because the proper place for ethical concern is the expression stage, we do not need to wait for the correct metaphysical theory to evaluate interventions. This is a welcome result given the glut of metaphysical accounts on offer and the difficulty in adjudicating among them.
Chapter 3 broadens the discussion by considering ethical aspects of stereotyping (implicit or explicit). For the sake of argument, I give a definition of stereotyping that is consistent with a variety of existing accounts. I argue that there is nothing necessarily wrong with stereotyping per se, but that defenses of the value of stereotyping have overlooked its epistemic and social costs, and defenses of its cognitive necessity overlook possibilities for change through cognitive and environmental plasticity.
Finally, Chapter 4 argues that most current accounts of responsibility for implicit bias take what I call the life hack approach, which emphasizes individual control over one’s cognitive landscape and immediate environment. I develop an objection by considering cases of perverse hacks: hacks that fulfill all the requirements of the life hack approach to eliminating the ill effects of implicit bias, but that nevertheless seem deeply unsatisfying, perhaps even morally wrong. I then consider one response to perverse hacks, namely, abandoning the life hack approach in favor of structural solutions (advocated by, for example, Haslanger and Anderson). Despite its merits, I argue that the structural solutions approach also fails to adequately address the problem of moral responsibility for implicit bias. Finally, I draw on the relational autonomy literature to work toward a new account of responsibility for implicit bias, one that combines the strengths of both the life hack and structural approaches while avoiding their pitfalls.
Chair and Committee
John Doris, Ron Mallon
Eric Brown, Casey O'Callaghan, Lizzie Schechter
Merritt, Christiane, "‘Nouns That Cut Slices’: The Ontology and Ethics of Stereotypes and Implicit Bias" (2020). Arts & Sciences Electronic Theses and Dissertations. 2331.