Our research is always traveling in new directions, but the following are several research areas we pursue. In all our work, we take a multi-method approach incorporating functional neuroimaging (fMRI and EEG), computational modeling, and behavioral paradigms, among other techniques.
Person Perception
What cognitive and neurobiological mechanisms do people use to form perceptions of other people? A brief glance at another’s face can trigger perceptions of the social groups to which a person belongs (e.g., gender, race), the emotions they feel (e.g., anger, sadness), and the personality traits they likely possess (e.g., trustworthiness, competence). In the lab, we explore the role of facial cues, as well as bodily, vocal, and contextual cues.
The lab explores fundamental questions about how the brain represents social information to drive judgments of other people. In several lines of work, we are investigating how people rapidly combine visual cues with their prior knowledge to form perceptions and build internal models of others’ current states and their upcoming behaviors.
Some of our current studies aim to develop a more comprehensive model of the multiple perceptions that play out in split-second social perception. For instance, while facial stereotype biases (e.g., trustworthiness, dominance) have long been argued to reflect universal feature-trait mappings (e.g., upward-turned lips = trustworthy), we find that these biases and their underlying structure are simultaneously shaped by stereotypes related to gender and race as well as ingroup or outgroup status.
We also explore how a range of perceiver processes (e.g., exposure, stereotypes, attitudes, goals) shape person perception. As one recent example, using data from 42 countries, we found that how people form impressions of faces is not universal but varies across cultures. Specifically, the way in which people’s actual personality traits are structured varies across the world (e.g., people who are intelligent are also warm in Country A, but not in Country B). Perceivers across the world appear to pick up on this personality structure from their local social environment and then use it to form impressions of other people. For instance, when judging other people’s faces, people in Country A will tend to judge those who appear warm as also being intelligent (and vice versa), whereas people in Country B will not.
More generally, even within a single culture, we’ve shown that people vary in their conceptual understanding of different traits, and this understanding helps determine the particular facial features that drive their impressions. For example, if a person believes that agreeableness and open-mindedness are conceptually similar (i.e., that people who are agreeable tend to also be open-minded), then the specific facial appearances that produce those impressions are also visually more similar for them (see below). This shows how our learned social experiences fundamentally shape the way we form impressions of others’ faces.
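The logic of this finding can be sketched as a simple correlation between a perceiver’s conceptual similarity for pairs of traits and the visual similarity of the facial appearances that elicit those trait impressions. The code below is only an illustration with invented numbers (the trait pairs and similarity values are hypothetical), not the lab’s actual analysis pipeline:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D sequences."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical perceiver: how conceptually similar they believe each pair
# of traits to be, and how visually similar the facial appearances are
# that elicit those trait impressions for them.
trait_pairs = ["agreeable-openminded", "agreeable-dominant", "openminded-dominant"]
conceptual_similarity = [0.8, 0.1, 0.2]  # perceiver's belief about trait overlap
visual_similarity     = [0.7, 0.2, 0.3]  # similarity of faces eliciting each trait

# Prediction: traits the perceiver sees as conceptually similar should have
# visually similar face representations for that perceiver.
r = pearson(conceptual_similarity, visual_similarity)
print(f"conceptual-visual correspondence: r = {r:.2f}")
```

With real data, each perceiver contributes their own conceptual and visual similarity structures, and the correspondence between the two is tested across many trait pairs rather than the three shown here.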
Most recently, we have begun exploring how perceivers form neural representational models of other people’s personality traits, emotions, and other social characteristics during naturalistic observation (e.g., watching movies or TV shows). We are also examining how such internal models of others may automatically combine (and sometimes be biased by) multiple sources of information, and how they dynamically update and adjust over time.
Social Vision
The lab explores how what we think and believe about our social world can influence the visual “reality” we see before our eyes. This influence can be either “adaptive” (such as when context is used to inform perception of a facial expression) or “maladaptive” (such as when stereotypical expectations cause perception of a face or object to partly conform to those expectations).
As such, the lab’s research often concerns fundamental questions about the interplay between systems of social cognition and visual perception. We believe several neural regions are important for this dynamic interplay, including the fusiform gyrus (FG), the anterior temporal lobe (ATL), and the orbitofrontal cortex (OFC). We also believe fairly domain-general computational and neural processes are responsible for socially-informed visual perception. For instance, the same processes that allow status stereotypes in the U.S. (higher status = White, lower status = Black) to bias perceptions of a face’s race may be those that also allow prior knowledge to adaptively impact perceptions of objects or words. In general, our research has found that stereotypes bias perceptual judgments to fall partly in line with one’s preconceived notions, and this bias manifests even in regions directly involved in visual representation such as the FG.
As one example of this “visual confirmation bias” due to stereotypes: because men are stereotyped as more dominant, assertive, and angry, and women are stereotyped as more submissive, docile, and happy, men’s faces may be distorted toward perceived anger and women’s faces distorted toward perceived joy. In one study, we used this specific instance of bias in tandem with backward masking, a technique that renders a visual stimulus subjectively invisible to participants. Although participants are not subjectively aware of the stimulus under backward masking, limited portions of the brain can still process the image in a more localized fashion. Specifically, masking has been shown to functionally disconnect ventral-temporal regions involved in visually representing faces (e.g., the FG) from regions outside ventral-temporal cortex involved in making predictions (e.g., the OFC).
By presenting faces varying in gender and emotion either under backward masking or in normal fashion, we could test whether regions like the OFC implement stereotypical predictions by sending signals into regions involved in visually representing faces, such as the FG. Indeed, when faces were presented normally, men’s faces were distorted toward anger and women’s faces toward joy in visual processing regions such as the FG. However, under masking, when the FG is relatively disconnected from the OFC, the FG’s distorted representations were “de-biased,” an effect related to the disrupted functional connectivity between the OFC and FG. This suggests that OFC–FG interactions help drive the harmful, distorting effects of stereotypes on visual perception.
As another example with critical real-world relevance, research has long found that people tend to misidentify innocuous tool images presented in the context of Black faces as guns, particularly when forced to respond quickly. These weapon bias effects have generally been interpreted to reflect an inability to control one’s racial bias or other post-perceptual processes such as response priming. In addition to such effects, it is also possible that racially biased predictions may lead innocuous objects presented in the context of a Black person to be distorted toward gun representations within the brain’s visual system. Indeed, in ongoing work, we have found evidence for this kind of ‘visual confirmation bias’.
A useful approach we often take in the lab to understand social cognitive influences on visual perception is to examine the relative contribution of social, affective, and visual factors to the representational structure of neural activity patterns. This often involves assessments of subjective perceptions (via behavior, e.g., mouse-tracking), computational models (which try to mimic how brain regions process visual properties in images), and person knowledge, emotion concepts, stereotypes, attitudes, or other social factors. For instance, as in this set of studies, this approach was valuable in exposing how gender and racial stereotypes lead visual representations of others’ faces to fall partly in line with stereotypical expectations (e.g., Black men’s faces appear angrier than they ‘objectively’ are).
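In schematic form, one common version of this approach is representational similarity analysis (RSA): build a dissimilarity matrix over stimuli for the neural activity patterns and for each candidate model (visual, social, affective), then ask how well each model’s structure matches the neural structure. The sketch below uses simulated data throughout; the feature counts, stimulus count, and the simple Pearson comparison are assumptions for illustration, not the lab’s published pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_conditions = 12  # e.g., 12 face stimuli

def rdm(features):
    """Vectorized upper triangle of a pairwise Euclidean dissimilarity matrix."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    return d[np.triu_indices(len(features), k=1)]

# Candidate model RDMs (all simulated here): visual features, stereotype
# ratings, and affective ratings for the same stimuli.
visual_rdm     = rdm(rng.normal(size=(n_conditions, 20)))
stereotype_rdm = rdm(rng.normal(size=(n_conditions, 4)))
affect_rdm     = rdm(rng.normal(size=(n_conditions, 2)))

# Simulated "neural" RDM: mostly visual structure plus some stereotype
# structure, with measurement noise.
neural_rdm = (0.6 * visual_rdm + 0.3 * stereotype_rdm
              + rng.normal(scale=0.5, size=visual_rdm.shape))

# Classic RSA: correlate each model RDM with the neural RDM to estimate
# each factor's relative contribution.
for name, model in [("visual", visual_rdm),
                    ("stereotype", stereotype_rdm),
                    ("affect", affect_rdm)]:
    r = np.corrcoef(model, neural_rdm)[0, 1]
    print(f"{name:>10}: r = {r:.2f}")
```

In practice the neural RDM comes from fMRI or EEG activity patterns rather than simulation, rank correlations are often preferred, and the model RDMs come from computational vision models, stereotype ratings, and affective ratings rather than random features.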
By better understanding the mechanisms underlying social cognitive influences on visual perception, we can develop strategies to strengthen them when they are adaptive and, more critically, to reduce or eliminate them when they are maladaptive and help maintain social biases.
Stereotyping & Bias
The lab is interested in how stereotypes and less conscious forms of bias are learned and maintained in the brain, as well as how they can be reduced or eliminated. For instance, in two series of studies (see here and here), we have explored how statistical learning processes drive the acquisition, activation, and updating of stereotypes, and we are currently attempting to leverage this statistical learning to reduce certain forms of real-world stereotyping.
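As a toy illustration of the statistical-learning idea, a perceiver who simply tallies co-occurrences of group membership and trait-relevant behavior will come to hold associations that mirror the statistics of their environment. Everything below (group labels, probabilities, encounter counts) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented environment statistics: how often members of each (hypothetical)
# group actually behave in a friendly way.
true_p_friendly = {"group A": 0.7, "group B": 0.5}
n_encounters = 1000

# The perceiver just counts what they observe.
counts = {g: {"friendly": 0, "total": 0} for g in true_p_friendly}
for _ in range(n_encounters):
    group = rng.choice(list(true_p_friendly))
    friendly = rng.random() < true_p_friendly[group]
    counts[group]["total"] += 1
    counts[group]["friendly"] += friendly

# The learned association tracks the observed conditional probability,
# i.e., the acquired "stereotype" reflects the environment's statistics.
for group, c in counts.items():
    learned = c["friendly"] / c["total"]
    print(f"{group}: learned association = {learned:.2f}")
```

The same counting logic also suggests how associations might be updated or unlearned: change the statistics of the input (e.g., counter-stereotypical exposure) and the tallied association shifts accordingly.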
We believe that it is important to conceive of trait inferences of faces (e.g., trustworthiness, dominance) as “facial stereotypes” and find ways to reduce these inferences and/or people’s reliance on them, as they have been shown to predict significant real-world consequences (e.g., criminal sentencing decisions). Recently, we adapted a counter-stereotype training paradigm from the racial bias intervention literature and, in a series of studies, found that it was successful in reducing or eliminating both explicit and implicit facial stereotyping. We are currently expanding on this work in several directions.
We are also exploring how these automatic trait inferences from faces may serve as the perceptual foundation for inferring “perceptually ambiguous” social group memberships. Considerable research has found that perceivers can make inferences about sexual orientation, political affiliation, religious group membership, social status, and any number of complex social characteristics from facial appearance alone. We recently found that this is driven by a process mediated through facial stereotypes. Specifically, perceivers use facial features to infer personality traits (e.g., warm, competent, etc.) and then can infer an effectively limitless number of perceptually ambiguous group memberships (e.g., alcoholics, liberals, gay people, gun-owners, etc.) based on learned stereotypes (i.e., conceptual associations) that link these groups with certain traits (e.g., alcoholics = incompetent).
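This trait-mediated route can be written as a two-step computation: facial features yield a trait profile, and learned group–trait stereotypes then convert that profile into group judgments. The sketch below uses invented trait profiles and a cosine-similarity matching rule as one of many possible formalizations of the second step:

```python
import numpy as np

traits = ["warm", "competent", "dominant"]
groups = ["group A", "group B"]

# Step 1 (assumed output of face processing): trait impressions of one face,
# each scaled 0-1 (e.g., this face looks fairly warm, not very dominant).
trait_impression = np.array([0.8, 0.6, 0.2])

# Step 2: learned stereotypes linking groups to traits
# (rows = groups, columns = traits); all values invented.
stereotypes = np.array([
    [0.9, 0.5, 0.1],   # group A stereotyped as warm, not dominant
    [0.2, 0.3, 0.9],   # group B stereotyped as dominant, not warm
])

def cosine(a, b):
    """Cosine similarity between two nonzero vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Group inference as a match between the face's trait profile and each
# group's stereotype profile.
for group, profile in zip(groups, stereotypes):
    print(f"{group}: match = {cosine(trait_impression, profile):.2f}")
```

Because any group with a learned trait stereotype can be slotted into the stereotype matrix, this scheme captures why the number of inferable group memberships is effectively limitless.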
We also explore the ways in which implicitly held bias may have lingering consequences for social decision-making, interpersonal interaction, and real-world outcomes. As one example, in one set of studies we found that female politicians with more masculine facial features had a decreased chance of winning their election, particularly in U.S. conservative states. This bias could be observed as early as 380 ms after exposure to a female politician’s face. Having a less attractive or competent-looking face was associated with a decreased chance of winning for both male and female politicians, but only for female politicians did gender cues predict electoral failure. This suggests an additional barrier women face in politics due to a bias in the early processing of their facial appearance.
We are currently conducting several studies examining the consequences of facial-appearance-based biases for other real-world outcomes, such as criminal sentencing, and testing interventions to reduce or eliminate these biases and their downstream consequences.
Emotion
The lab investigates the cognitive and neural processes involved in perceiving others’ emotions, and how these are shaped by our own understanding of emotions and affective experiences. We also explore how, conversely, our understanding of emotion and our affective experiences are influenced by perceptions of other people and social cognition.
As one example, in one set of studies we showed that individual differences in people’s own conceptual understanding of different emotions help drive their perception of facial emotion expressions. For example, if someone believes that anger and sadness are conceptually more similar (e.g., that they involve similar thoughts, bodily feelings, and actions), then their visual representations of what angry and sad expressions look like bear a greater physical resemblance to one another, and this increased similarity is reflected in brain regions involved in visual perception of faces (see below). These findings suggest that everyone perceives emotion expressions in a slightly different way depending on their own conceptual understanding of what those emotions mean.
The lab is also investigating how people perceive more naturalistic, dynamic, and multidimensional emotion expressions, where facial, bodily, vocal, and contextual cues may fluctuate over time (as they do in the real world). For instance, recent research using more naturalistic face stimuli and sensitive statistical techniques has found that perceivers are able to represent others’ emotions in a high-dimensional space on the order of 28 dimensions, rather than just the 6 “basic” emotions or a 2-dimensional valence-arousal space. These dimensions have been found to be relatively categorical but overlapping and graded. In current work, we find that neural representations of others’ naturalistic emotional expressions retain this high dimensionality.
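Dimensionality estimates of this kind typically come from decomposing large matrices of emotion ratings. The toy sketch below applies PCA to simulated ratings generated from a known number of latent dimensions and counts how many components are needed to explain most of the rating variance; all numbers (clip counts, category counts, the 95% criterion) are invented for illustration, and published estimates rest on far richer data and methods:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated ratings: 500 video clips rated on 40 candidate emotion categories,
# generated from 10 underlying latent dimensions plus a little noise.
n_clips, n_categories, n_latent = 500, 40, 10
latent = rng.normal(size=(n_clips, n_latent))
loadings = rng.normal(size=(n_latent, n_categories))
ratings = latent @ loadings + rng.normal(scale=0.1, size=(n_clips, n_categories))

# PCA via SVD of the centered ratings matrix.
centered = ratings - ratings.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
variance = singular_values ** 2
explained = np.cumsum(variance) / variance.sum()

# How many dimensions are needed to explain 95% of the rating variance?
n_dims = int(np.searchsorted(explained, 0.95) + 1)
print(f"dimensions for 95% variance: {n_dims}")
```

With real rating data the recovered dimensionality is an empirical question rather than a simulation parameter, and the dimensions themselves are interpreted by inspecting which emotion categories load on each component.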
In addition, we are exploring perceptions of compound, blended emotion expressions (e.g., contemptuous, awed, or appalled), expressions conveying complex mental states (often via eye cues; e.g., bored, pensive, or jealous), and the roles of perceiver characteristics and affective processes in driving these complex perceptions, which are more challenging to study but highly important for social interaction.