1 Advances in Understanding Humanness and Dehumanization
Paul G. Bain, Jeroen Vaes, and Jacques-Philippe Leyens
Editing a book can be an exciting and daunting process, but it is the excitement that keeps things going. We are very excited to have the opportunity to coordinate a volume showcasing new ideas and thinking in the linked fields of humanness and dehumanization. This excitement derives both from the fascinating ideas of our contributors and from the sign that this critical field of research is gaining broader recognition. Ideas about humanness are important in many sciences, ranging from biology to sociology, and are consequential in other fields, from religion to politics. However, the importance of beliefs about humanness is a relatively recent interest in psychology, even though such beliefs pervade many aspects of human functioning and interaction.
One of the aims of this introductory chapter is to introduce this field of research to those who might be less familiar with it, and to address a number of basic issues that might help readers understand the nature and importance of this area of research. This includes a brief exposition of its historical and theoretical background (expanded on in some chapters in this volume). We follow with a brief description of the contributions to this book, which span novel theories and approaches to understanding dehumanization, the application of dehumanization to new areas (such as crime and policing, medicine, gender, and interpersonal relationships), and novel perspectives on what it means to be human and its consequences.
A (Very) Brief Background to Humanness and Dehumanization Research
Ideas about humanness imbue our everyday lives and the theories developed to describe our behavior. Philosophers invoke images of human nature to inform their ethical and political philosophies, such as Hobbes (1996/1651), who argued that humans feared evil but were not concerned about the common good. Within psychology, Wrightsman (1992) was one of the first to empirically examine people's general beliefs about humanness, and others have focused on specific aspects of human nature such as free will (Fahrenberg & Cheetham, 2007) or specific constructs such as values (Bain, Kashima, & Haslam, 2006). Some researchers have highlighted how conceptions of humanness are the site of ideological debate, as interested groups and individuals define humanness in ways that support their own ends. A good example is the claim that humans are rational and self-interested, which does not just reflect a simplification in economic theory but is used to justify support and opposition toward government policies and intervention (Schwartz, 1986).
A unified definition of humanness across the sciences therefore seems utopian, partly because humanness is an abstract, often metaphysically tinged concept and partly because we are all involved parties. Several researchers agree, however, that we can learn a lot about humanity by looking at its violations (Kaufmann, Kuch, Neuhäuser, & Webster, 2011). Focusing on processes of dehumanization often makes that which is denied concrete and almost tangible. Although dehumanization is relevant to many fields, it is perhaps most contentious (and has received the most attention) when applied in intergroup contexts. Symbolically powerful and enduring examples of dehumanization are the genocides committed in the past century. However, dehumanization has a much longer history. For example, from the Renaissance through the Enlightenment, European intellectuals were preoccupied with stories of savages, barbarians, and exotic tales of humanoid creatures and monsters (see Jahoda, 1999). It is thus fitting that our volume begins with Jahoda's further exploration of this history, covering the period from the late 18th century to the mid-20th century (Chapter 2). Jahoda focuses on how learned philosophers and scientists of the period tried to establish objectively that some races were superior to others. By tracking racialism, and the scientific and philosophical endeavors used to justify it, right up to the mid-20th century, he shows how these ideas laid the groundwork and justification for Nazi racial ideology, providing a context for modern ideas about humanness and psychological theories of dehumanization.
Of course, given the abundance of armed conflicts, dehumanization remained an important topic of research in the latter half of the 20th century. There was a continued focus on overt forms of dehumanization: the literal description or treatment of others as nonhuman. Of particular concern was how viewing others as nonhuman allowed us to morally "disengage" from them, justifying treating them as animals and undermining the legitimacy of their views and needs (Bandura, 1999; Bar-Tal, 1989; Opotow, 2001).
Yet despite this long history, our understanding of dehumanization has come a long way in a short period, kick-started at the turn of the century by the discovery that dehumanization is not restricted to extreme or overt prejudice but can occur subtly and even without conscious awareness (Leyens et al., 2000). This "infrahumanization" (a term chosen to distinguish the subtle denial of humanness from overt dehumanization) was shown to be a pervasive feature of intergroup perception (Leyens, Demoulin, Vaes, Gaunt, & Paladino, 2007; Leyens et al., 2001) and to have meaningful behavioral consequences (e.g., Vaes, Paladino, Castelli, Leyens, & Giovanazzi, 2003). Infrahumanization is observed in the attribution or association of characteristics that are uniquely human ("human uniqueness," or HU), such as complex emotions like embarrassment or optimism, more to an ingroup than to an outgroup. In contrast, basic emotions (e.g., fear, pleasure) are shared with animals, and thus their attribution across groups is less relevant to dehumanization (Leyens et al., 2000). The HU sense of humanness corresponds to a distinction between humans and animals, so its denial to others is sometimes called "animalistic" dehumanization (Haslam, 2006). The burgeoning field of infrahumanization has been the subject of extensive reviews (Leyens et al., 2007; Vaes, Leyens, Paladino, & Miranda, 2012).
Haslam (2006) made an important further contribution by noting that humanness can be defined not only by what is uniquely human (as in infrahumanization), but also by what is typically human. These core human characteristics form what is called "human nature" (or HN). The denial of HN implies lacking characteristics such as emotionality, agency, warmth, and cognitive flexibility, making people resemble machines or robots, and thus has been called "mechanistic" dehumanization. People typically attribute HN characteristics, especially negative ones, more to themselves than to other individuals (Haslam, Bain, Douge, Lee, & Bastian, 2005; Loughnan et al., 2010) and sometimes deny HN to other groups (Bain, Park, Kwok, & Haslam, 2009).
To better understand the HU and HN conceptions of humanness and their role in dehumanization, it can also be helpful to explain what these effects are not. Some researchers suggested that the greater attribution of HN to the self than to others was simply another way of measuring the "better-than-average" effect, whereby people attribute more favorable characteristics to themselves than to others (Alicke, 1985). However, it has been demonstrated that self-humanizing is distinct from the "better-than-average" effect (Haslam et al., 2005; Loughnan et al., 2010), and that attributions of humanness are not reducible to merely attributing more positive characteristics to the self (Haslam & Bain, 2007) or the ingroup (Bain et al., 2009). Similarly, early studies in infrahumanization were sometimes understood in terms of ingroup favoritism (assigning more positive attributes to the ingroup), but statistical analyses have shown that the two phenomena are distinct (Demoulin et al., 2009). In particular, these findings cannot be explained by the valence of the characteristics, as both positive and negative emotions are attributed more to the ingroup (Leyens et al., 2001). As a result, we can be confident that neither HN nor HU forms of dehumanization are reducible to viewing the self or ingroup positively and outgroups negatively.
Another perspective has interpreted these findings in terms of models of stereotyping, particularly the stereotype content model, which posits two dimensions of stereotypes, warmth and competence (Fiske, Cuddy, Glick, & Xu, 2002). Research has shown that the HU and HN senses of humanness are related to, but conceptually distinct from, these stereotype content dimensions (Haslam, Loughnan, Kashima, & Bain, 2008). Reflecting this relationship, groups lacking both warmth and competence are especially likely to be dehumanized (Harris & Fiske, 2006), and Vaes and Paladino (2010) found that more competent outgroups were dehumanized less.
Overall, the novelty and usefulness of ideas about humanness and dehumanization, along with their distinctiveness from other theoretical understandings, has led to an impressive literature of more than 140 publications. Researchers have not only used the basic concepts, but also enriched the field with their own approaches and theoretical modifications. One issue with this surge in research has been a lack of clarity about whether researchers are investigating similar or distinct phenomena. This is where Haslam's contribution (Chapter 3) is especially valuable. Haslam describes a three-dimensional framework that imposes a coherent structure on the multitude of approaches to dehumanization. His framework distinguishes three dimensions: the type of nonhuman comparison (animal or object), the degree to which dehumanization is held or expressed consciously (implicit or explicit), and whether it involves an absolute judgment or a relative comparison (absolute or relative). Haslam then discusses how prominent dehumanization theories and findings can be understood within this framework and considers alternative conceptualizations as goals for further research.
Where to From Here?
In producing this volume, we were in the enviable position to ask leaders in the field their views about the most important issues for this field, now and into the future. Of course, there were diverse responses! However, we attempt to bring these different strands of thought together in a concluding chapter (Chapter 17). Here, we outline the major areas that these scholars addressed.
Why Do We Dehumanize?
One class of contributions, discussed in Part 1, has moved beyond showing that we dehumanize others (in both obvious and subtle ways) to understanding why subtle, implicit forms of dehumanization are so widespread. The contribution of Waytz, Schroeder, and Epley (Chapter 4) argues that it arises from the difficulty of fully understanding minds other than our own: while we are aware of the internal complexities of our own mind, other minds, when we try to understand them, will always seem somewhat simpler. This suggests that dehumanization is a default state that can only be overcome with effort. This idea has several corollaries, and they explore the implications of each.
The contribution of Lee and Harris (Chapter 5) also treats the problem of knowing others' minds as a default judgment, focusing on its neuropsychological correlates. Importantly, they argue that even though our basic state is to not fully recognize others' mental states, this can be easily overcome by directing people to think about the distinct mental states of another person, that is, to think about them as individuals, which has the effect of humanizing them. This suggests that both motivation and contextual cues (to think of others as individuals vs. category members) can temper or even reverse processes of dehumanization.
The contribution of Hodson, MacInnis, and Costello (Chapter 6) expands this rationale even further, considering cognitive and motivational bases for dehumanization. They argue that dehumanization of other groups rests in part on the division we make between humans and animals (the interspecies model of prejudice). In addition, they broaden consideration to attributions of both lesser and greater humanity to others, which they argue vary as a function of how valued a group is and whether it is seen as a threat. This is one of the few theoretical perspectives that allows for superhumanization of others, in particular gods and demons, potentially extending to those with comparable powers on Earth, such as kings and dictators.
Heflick and Goldenberg (Chapter 7) focus on terror management (managing awareness of one's own death) as an important function of dehumanization. Reminders of our animal nature (i.e., that we are creatures like any other and thus will die) undermine one of our defenses against this mortal terror (i.e., that we can be at least symbolically immortal through our shared human culture). Hence, when reminded of our animal features, we react by viewing ourselves and our ingroups as more uniquely human. However, they also argue that defense against this terror can be achieved by viewing ourselves as objects (denying HN, or seeing ourselves and our groups in machinelike ways) because, unlike animals, objects and machines do not die. Thus, just as terror management can lead to the (animalistic) dehumanization of others, it can also lead to the (mechanistic) dehumanization of ourselves.
Examining Dehumanization in New Domains
The archetypal groups for examining dehumanization have been national and ethnic groups, and these have occupied the main focus of recent work on dehumanization. Research is emerging that attempts to understand the humanness of other types of groups, particularly occupational groups (Iatridis, in press; Loughnan & Haslam, 2007). In this volume, several scholars have pushed these ideas further to provide a detailed analysis of dehumanization in a wider range of contexts, and this forms the core theme of Part 2.
Two chapters focus on how dehumanization can offer important insights into crime and policing. Vasiljevic and Viki (Chapter 8) focus on dehumanization of offenders and how this results in some offenders (particularly from racial minorities) being excluded from moral consideration, thus justifying harsher punishment and reduced support for rehabilitation. Importantly, given that most offenders reenter the community, they explore how this dehumanization can be ameliorated through positive interpersonal contact and learning more about offenders as individuals. Hetey and Eberhardt (Chapter 9) consider the interplay between dehumanization of criminals and police, particularly the portrayal of violent criminals as animals and police as machines. They describe how physical elements of the social context, such as police uniforms, contribute to how police themselves, as well as observers, may mechanistically dehumanize the police. They explore the functions each form of dehumanization serves in this context, such as when laypeople would actually prefer police to be more "machinelike," and the social contexts in which these perceptions are likely to be stronger, such as in times of rising crime rates.
In a similar vein, Leyens (Chapter 10) focuses on dehumanization in the medical profession, identifying not just when it is dysfunctional, but also where it can be functional and important. Importantly, he extends consideration of dehumanization beyond peopleâs attitudes to the dehumanizing effects of physical contexts like the use of medical technology and machines. In addition to exploring how dysfunctional elements of medical dehumanization can be overcome, Leyens makes the critical point that allowing terminally ill patients to die may actually restore their humanity relative to prolonging their life using machines.
Another important extension of dehumanization research is into gender relations, particularly arising from sexual objectification. Vaes, Loughnan, and Puvia (Chapter 11) note that objectification, while related to dehumanization, has some distinctive characteristics, particularly an "approach" tendency that seems at odds f...