Person Perception Research Unit

Attention

The Person Perception Research Unit was funded by the DFG between 2009 and 2016. The current research projects of the General Psychology I Department can be found on the department's pages.

Person Perception Research Unit's Mission Statement

The efficient analysis and representation of person-related information is one of the most important challenges for human social perception. Faces, for instance, inform us about a large variety of socially relevant information including a person's identity, emotions, gender, age, ethnic background or focus of attention. Cognitive models of face perception acknowledge a degree of functional independence between these different aspects of perception, each of which may be mediated by different types of “diagnostic” information in the stimulus. Cognitive neuroscience is beginning to reveal the neural mechanisms that underlie face perception, but there has been little work on integrating those data with models from cognitive and social psychology. The applicants have already begun to collaborate successfully on this integrative view. In this Research Unit, we will combine a multilevel methodological approach to promote a unified theory of the psychological and neural bases of person perception. In a closely coordinated research programme, we will investigate (i) basic perceptual processes, (ii) the processing of social and emotional information about people, and (iii) person perception in specific populations.

Examining the transient effects of previous experience with faces has been particularly influential for cognitive models of face perception. While research on priming, and more recently on adaptation, is shaping theories about the neural mechanisms and representations involved in various aspects of face perception, the precise relationship between these two phenomena remains to be determined. A highly relevant open issue is face learning – the question of how perceptual and neural mechanisms create stable representations for initially unfamiliar faces. The voice – which carries a wealth of nonverbal social information, similar to faces – has received little scientific attention in the past. Current research thus needs to reflect the importance of auditory person perception. Dynamic multimodal information from faces and voices is often combined to shape our perception of identity or social group membership (e.g., gender, ethnicity, region). Faces and voices can also be highly potent emotional stimuli which may be processed even in the absence of attention. Moreover, people differ in physical attractiveness, a powerful variable for sexual partner preference. Research on person perception and memory for elderly people is likely to become more relevant in the future as a result of demographic changes. Finally, dramatic impairments in person perception can occur in specific conditions such as congenital prosopagnosia, an inability to identify the faces of familiar people.

Research Projects of the 2nd Funding Period (2012-2015)

  • 1. Temporal context in face perception: The interaction of competition and prediction, G. Kovács

    G. Kovács

    Previous encounters with other people – or the temporal context of a given face – modify its perception, such that a given picture of a person might look different at different times. Our previous research on priming and adaptation is shaping theories about the neural mechanisms and representations involved in face perception. As we are beginning to understand the relationship between these two phenomena, an account of the interaction between top-down processes, such as predictions, attentional cueing and sensory competition among stimuli, becomes increasingly important. Project 1 will therefore further study the effect of prior experiences on face perception, using psychophysical, electrophysiological and neuroimaging methods and the theoretical framework of predictive coding models. In two lines of planned experiments we will use repeated stimuli, leading to specific high-level aftereffects, priming or predictive cueing.

    In the first line, we will capitalize upon the previously found interactions among multiple simultaneously presented faces (Nagy, Greenlee, & Kovács, 2011). Using ERP recordings, we will test the temporal dynamics of sensory competition among faces. Using fMRI, we will compare the competition effect (manifest in the reduction of the blood oxygen level dependent (BOLD) signal) for different categories versus faces, and test whether a prior stimulus, serving as an attentional cue, is able to bias these competitions similarly. In our preceding experiments we were able to show that previous experiences can change face perception, and that this effect is largely due to the altered activity of early face-selective neurons. Here we will further test the effect of prior information on face perception, using the theoretical framework of predictive coding models. Predictive coding (PC) hypotheses assume that higher-level neurons “predict” forthcoming information by comparing the current stimulus against an internal template, and that this feedback suppresses predicted information. Recently, this model has been reconciled with another influential theory of visual perception, the biased competition (BC) model of attention, which proposes that the feedback enhances (rather than suppresses) the neural responses evoked by the predicted stimulus. While both the PC and BC models have been studied extensively in the past, an analysis of their interaction has only just begun.

  • 2. Individual differences in learning and recognizing faces, J.M. Kaufmann, F.J. Neyer, and S.R. Schweinberger

    Dr. J.M. Kaufmann, Prof. Dr. F.J. Neyer, Prof. Dr. S.R. Schweinberger

    While it has long been held that humans in general are experts in face recognition (Diamond & Carey, 1986), individual differences have received increasing scientific attention (Herzmann et al., 2010), and face perception skills were recently suggested to form an independent part of social competence (Wilhelm et al., 2010). Here we focus on individual differences in face learning and face recognition. Apart from extreme groups such as “super-recognizers”, who never forget a face (Russell, Duchaine, & Nakayama, 2009), and people suffering from prosopagnosia, who do not even recognize the faces of close relatives (Behrmann & Avidan, 2005), large variations in the normal population are also becoming increasingly evident (Bate, Parris, Haslam, & Kay, 2010). In addition to further investigating the range of individual differences in face learning, we are interested in the construct and criterion validity of these differences. Based on the assumption that reliable individual differences (i.e., retest stability, internal consistency) exist, we assume that face learning and recognition show convergent validity vis-à-vis established measures of social competence and discriminant validity vis-à-vis psychometric intelligence. In addition, the predictive validity regarding real-life outcomes in the domain of interpersonal functioning (i.e., job performance, social relationships in the private and public domain) will be examined.

    At present, the functional mechanisms underlying individual differences in face learning and recognition are largely unknown. For instance, it is unclear whether good and poor recognizers utilize different types of information in faces. If so, a question is whether poor recognizers’ performance can be improved by training strategies aiming at critical information. The present project is based on previous research that has contributed to identifying neural markers of face learning and recognition (Kaufmann, Schweinberger, & Burton, 2009; Kaufmann & Schweinberger, 2008; Schweinberger, Pickering, Jentzsch, Burton, & Kaufmann, 2002; Schweinberger, Kaufmann, Moratti, Keil, & Burton, 2007), the influence of shape distinctiveness on forming new face representations (Kaufmann & Schweinberger, 2012) and potential differences regarding the use of shape information by good and poor recognizers (Kaufmann et al., 2011). In the proposed experiments we record performance, ERPs, scan paths and electro-dermal responses in order to study differences between good and poor recognizers. We will explore (1) whether the relative contributions of shape and texture to face recognition depend on face familiarity; (2) inter-hemispheric cooperation in face learning; (3) whether differences in face learning relate to decreased attention and/or emotional response to faces; (4) the functional relationship between face and voice processing.

    The ultimate aim of this project is to better understand individual differences in face learning, their relationship to personality traits, and daily-life consequences of these differences (e.g. regarding job performance and social relationships, which may represent the two most important domains of interpersonal functioning). If face recognition proves to be a fundamental aspect of “social competence”, we expect that “good” face recognizers will have significant advantages in these domains. Finally, the project will provide crucial information for the development of training strategies for individuals suffering from poor face learning and recognition.

  • 3. Attractiveness: Statistical properties versus individual person characteristics; C. Redies, G. Hayn-Leichsenring

    Prof. Dr. med. Dr. nat. C. Redies, Dr. G. Hayn-Leichsenring

    This project studies higher-order statistical properties of face images and their relation to the perception of individual characteristics of a person, with a particular focus on face attractiveness. The project’s aim is to determine how much information about individual person characteristics can already be deduced from higher-order image statistics that are potentially processed at early stages of visual perception. The work is based on similar work on the statistical properties of natural scene images and aesthetic artworks, where the PIs have shown that the two types of images resemble each other in that both have a scale-invariant (fractal-like) Fourier spectrum. Previously, many researchers have studied the Fourier spectral composition of face images in studies of face representation, typically with band-pass frequency-filtered faces. These studies have demonstrated an effect of altering the spatial frequency profile of face images on face learning and recognition, but revealed inconsistent results that favour different frequency ranges. Only a few studies have addressed the question of how the spatial frequency spectrum and other statistical image properties affect the perception of face attractiveness.

    Here we pursue three strategies to identify statistical image properties that may affect low-level perceptual mechanisms of face attractiveness and other individual person characteristics. First, we will correlate statistical image properties (Fourier spectral composition, features from pyramid histogram of gradient analysis, etc.) with person characteristics. Second, we will study the effect of these image statistics on the perception of individual characteristics in face images by psychological evaluation. Third, the neural underpinnings of the perception of statistical properties of face images will be probed in adaptation experiments and by recording ERPs from human participants. For comparison, aesthetic images (for example, face portraits by artists) and non-aesthetic images will be used to characterize the differences between face attractiveness (defined as the physical allurement of a person) and image beauty (defined as the pleasure derived from the global composition of an image). In summary, this project will provide a description of individual characteristics in face images in terms of their statistical properties as well as in relation to other categories of images. Moreover, we will study how a modification of these properties will affect the perception of individual characteristics in face images.

  • 4. Facial Expressions: The role of spatial frequencies for information selection and attention; O. Langner, K. Rothermund

    O. Langner, Prof. Dr. K. Rothermund

    Facial emotional expressions differ in their information profiles, such that salient visual cues that are most characteristic for different expressions may be coded in different spatial frequency (SF) bands. Like other affective stimuli, emotional faces have repeatedly been shown to attract attention systematically, albeit with mixed reports regarding whether positive, negative, or self-relevant expressions are the most salient. Given the differences in SF-profiles between expressions, the emergence of attentional biases may rest both on the availability of the discriminative SF-bands for a particular expression and on the current tuning of the visual system to these SF-bands. Stable interindividual differences have also been demonstrated with regard to the processing of both faces and SFs: socially anxious individuals exhibit robust attentional biases for negative facial expressions and are also characterized by a bias towards processing the low-SF part of images (Langner et al., 2009). Another relevant social variable regarding the interplay between emotional attention and the processing of specific SF-information is an observer’s age. Higher age is related to reduced sensitivity for a range of SFs and possibly to discrimination deficits for particular facial expressions. This project investigates (a) how the presence or absence of specific SF-profile information relates to the attentional selection of facial expressions in general, (b) how people adapt to different SF-profiles when discriminating facial expressions, and (c) whether interindividual differences regarding attentional biases in anxiety or specific emotion discrimination deficits in higher age can be linked to changes in the processing of different SF-bands.

  • 5. Voice Perception: Basic Parameters; S.R. Schweinberger

    Prof. Dr. S.R. Schweinberger

    The human voice carries a wealth of social information including emotion, gender, age or person identity, yet relatively little research has been devoted to the processes mediating auditory perception of people via their voices. In the first funding period, we initially explored the role of attention for explicit and implicit memory for famous voices. We then conducted a substantial series of experiments on adaptation-induced aftereffects in voice perception. Building on this successful research, on further directly relevant work conducted in the first funding period, and on the methodological expertise acquired by the researchers in the project, we will pursue four main issues in voice perception. (1) Exploiting the fact that the new voice morphing software TANDEM-STRAIGHT permits independent morphing across each of five acoustic parameters (F0, formant frequencies, spectrum level information, aperiodicity, and time), we will investigate the differential contribution of these acoustic parameters to the perception of speaker gender and age. (2) In an attempt to delineate the individual contributions of basic low-level information to adaptation, we will use single-parameter-modified adaptor voices to create aftereffects in the perception of speaker gender and age. (3) Because systematic research using larger samples of personally familiar voices is almost non-existent, we will test twelfth-grade secondary school pupils as a homogeneous group to create a unique database that will allow us to assess the relative contribution of acoustic parameters, speech type, perceived voice characteristics (such as the rated distinctiveness of a voice), and personal contact to the accuracy of individual voice recognition. This sample will also allow us to probe gender differences (on both the speaker and listener level) and to assess own-voice recognition. (4) Finally, we plan to continue earlier work on voice averaging using full sentences to test a prototype account of familiar voice representation, and we will perform an EEG study investigating induced oscillatory responses as potential correlates of voice familiarity. Overall, we expect this project to continue to substantially improve our understanding of the basic acoustic, perceptual and neuronal processes involved in human voice perception.

  • 6. Determinants of Voice Learning; R. Zäske, J.M. Kaufmann, S.R. Schweinberger

    Dr. R. Zäske, Dr. J.M. Kaufmann, Prof. Dr. S.R. Schweinberger

    Recognizing people from their voices is a routine performance in social interactions that critically depends on the degree of familiarity with a speaker (Yarmey et al., 2001). It has been suggested that the processing of unfamiliar and familiar voices involves partially distinct cortical areas (von Kriegstein & Giraud, 2004) and differs qualitatively (Kreiman & van Lancker Sidtis, 2011). However, the neural processes mediating the transition from unfamiliar to familiar voices, and the conditions under which voices are learned, remain largely unexplored. While forensic research began studying voice learning and recognition as early as the 1930s to improve the reliability of earwitness testimony for once-heard “unfamiliar” voices, this branch of research continues to rely almost exclusively on behavioural measures. By contrast, more recent neuroscientific research is strongly inspired by cognitive models of face perception. With respect to learning, these studies tend to look at short-term implicit effects of priming and adaptation. Accordingly, current models of person perception are void of learning mechanisms that are associated with explicit speaker recognition (Belin et al., 2004; Campanella & Belin, 2007). Thus, the applicability of these models to everyday face and voice recognition is limited.

    Based on the notion that voice learning may be affected by characteristics of (1) the stimulus material, (2) speaker and listener attributes as well as (3) specific task demands, we will study effects of dynamic information in faces, distinctiveness and accents, speaker and listener age as well as selective attention on voice learning. To this end, we will relate behavioural measures of learning and recognition to electrophysiological and functional magnetic resonance imaging data which provide high temporal and spatial resolution, respectively. Taken together, we expect that the present studies will significantly contribute to our understanding of how voice representations are formed in person memory.

  • 7. Interactions of visual and auditory information in social perception related to gender and ethnicity; M.C. Steffens, T. Rakić and A.P. Simpson

    M.C. Steffens, T. Rakić, A.P. Simpson

    Research in this project tests social perception, categorization, and impression formation related to gender and ethnicity, using complex and ecologically valid stimuli that go beyond the presentation of labels or photographs alone. The first aim is to follow up on key findings from the first funding period: a series of experiments, supplemented by ERP recordings, will test the conditions under which expectancy violations determine impressions of speakers with dialects or foreign accents. The second aim is based on the idea that changeable information may be diagnostic, and thus used for social categorization, when crossed with ethnicity and gender information (e.g., wearing headscarves or ethnically-associated hats; powerful/powerless speech). The third aim represents an extension of our current work to a new area. In a cooperation between psychology and phonetics, we will critically examine the finding that information about individuals’ sexual orientation is manifest in phonetic speech characteristics. The relevant speech markers in German speech will be extracted (which will extend existing findings from Anglo-Saxon speakers); they will be related to variables pertaining to speakers’ gender role orientation and respective social-group identification, and to the perception of sexual orientation in voices and voice-face stimuli. The ultimate aim of this project in relation to the entire Research Unit is to contribute to the elaboration of person perception models with regard to the early integration of social-category information from different modalities.

  • 8. Cooperation between people: Facial and Interactional Signals as Coordination Devices; T. Kessler, F.J. Neyer

    Prof. Dr. Thomas Kessler, Prof. Dr. F.J. Neyer

    Human groups are characterized by both cooperative and competitive behaviours. Cooperation always bears the risk of individual defection or cheating, but people seem to be sensitive to cheating, may have better memory for cheaters, and deal with cheaters in particular ways. While previous studies on memory for cheaters have focused on interpersonal contexts, we propose to extend research on the detection of and memory for cheaters to an intergroup context. We also expect substantial individual differences in cheater detection that emerge particularly in ingroup contexts. A first series of studies will attempt to replicate and extend existing studies on memory for cheaters in ingroup and outgroup contexts. We expect that participants will exhibit enhanced memory for ingroup but not for outgroup cheaters, since outgroup members tend to be processed in a more categorical and depersonalized way. In a second line of research, we will disentangle the effects of the group membership of the cheater and of the victim of cheating. In a final line of research, we will examine the influence of coordination (synchronous or mutual behaviour) on the detection of cheating and its influence on group formation. These three lines of research will refine our understanding of detecting, memorizing, and dealing with cheaters, with a particular focus on the maintenance of cooperation within one’s group.

  • 9. Automatic brain activation to faces and voices in social phobia before and after psychotherapy; T. Straube and W.H.R. Miltner

    Prof. Dr. T. Straube, Prof. emer. W.H.R. Miltner

    Patients suffering from social phobia show information processing biases and also exhibit increased brain responses during the processing of socially threatening stimuli (such as angry faces or voices). However, it is unknown to what extent automatic brain responses to social threat signals in social phobia depend on cognitive resources, threat relevance, and the modality and intensity of emotional social stimuli. It is also unclear whether brain responses can be modified by successful interventions, such as cognitive-behavioural therapy (CBT). Based on stimuli, methods and results from the first funding period, the current project extends the research questions into the applied clinical domain. Our aim is to investigate brain activation during automatic processing of emotional facial expressions and prosody in social phobia before and after CBT. We will use parallel event-related fMRI and EEG recordings, and experimentally vary emotional expression (angry, happy, neutral), emotional intensity (low, high), attentional load (low, high) and sensory modality (face, voice), to answer the following questions. First, are there rapid threat-specific brain responses to faces and voices in social phobia? Second, what is the role of attention for emotion-specific activation patterns? Third, do overlapping or similar brain mechanisms mediate the processing of emotional information from voices and faces in social phobia? Fourth, (how) are phobia-related automatic brain responses modified by successful CBT? And fifth, can brain responses during the automatic processing of social stimuli predict treatment outcomes?

  • 10. Age and Aging in Face Perception and Memory; H. Wiese

    Dr. H. Wiese

    Humans are often considered to be experts in face recognition, but such expertise is not comparable across all classes of faces. For instance, young adults show more accurate memory for own-age faces, whereas a corresponding own-age bias (OAB) has not consistently been observed in elderly participants (Wiese, Schweinberger, & Hansen, 2008). During the first funding period, several experiments were conducted to describe and understand the OAB and its ERP correlates in more detail. For instance, we found that young adult participants show similar recognition memory for young and young middle-aged (up to approximately 45 years) faces, but decreased recognition for older faces, arguing against an interpretation in terms of a social “in-group” bias. Moreover, no OAB for other-race faces was detected, suggesting that belonging to multiple “out-groups” simultaneously does not result in additive disadvantages. In other experiments, an OAB in elderly participants was found to depend on the amount of their contact with own-age compared to younger persons, with those participants who exhibited a predominance of own-age contact also showing respective memory effects.

    The research proposed for the second funding period will focus more closely on changes in face processing and memory, and corresponding neural correlates, with increasing participant age. Accordingly, effects of aging will be investigated on (i) perceptual face processing (using categorical adaptation and the composite face effect, which assess holistic face processing), (ii) the acquisition of new representations of faces (by studying face recognition memory and learning), and (iii) the access to semantic and name representations (using semantic priming paradigms and learning studies focusing on face-name and face-occupation associations). ERP correlates of these respective processing levels (such as N170, N250, and N400, respectively) will also be compared between young and elderly participants. The project contributes to the overall mission of the Research Unit by adding novel and theoretically relevant information about age-related changes at all processing levels suggested by current models of person perception.

Research Projects of the 1st Funding Period (2009-2012)

  1. The role of temporal context in face recognition, G. Kovács
  2. Face learning, J.M. Kaufmann
  3. Voice perception, S.R. Schweinberger
  4. Interactions of visual and auditory information in gender and ethnicity perception, M.C. Steffens
  5. Neural mechanisms of processing emotional information from faces and voices under attentional load, T. Straube, W.H.R. Miltner
  6. Effects of age and ageing on face memory and perception, H. Wiese

Publications of the Person Perception Research Unit