Our Research Seminars
Our Research Seminars are regular talks featuring current research by investigators from our department, our research units, and our partners.
We understand speech more efficiently when we can see the speaker's lips moving in addition to hearing the acoustic signal. The so-called McGurk illusion demonstrates that the visual signal has an involuntary influence on the perception of the spoken sound. Brain imaging methods have shown that attending to a speaking face activates areas in the auditory cortex even when acoustic stimulation is absent. It therefore seems that, in speech perception, the visual signal can directly modulate acoustic processing. Beyond speech perception, voices and faces are also naturally important sources of information for the recognition of people. Following a successful pilot study in our laboratory, this project investigated, for the first time, the integration processes involved in person recognition. The main aims were: (1) to explore the required conditions and the mechanisms of this phenomenon, (2) to investigate the neuronal correlates of the integration processes for person recognition, and (3) to compare the audiovisual integration processes involved in person recognition and speech recognition. The project aims at a better understanding of person recognition under everyday conditions, where dynamic audiovisual processing regularly occurs. The results so far indicate that audiovisual face-voice integration is an important factor in the recognition of people, depends on familiarity with a speaker, is sensitive to the temporal synchronization of facial and vocal articulation, and can occur in a bidirectional manner. Moreover, event-related brain recordings suggest multiple loci of audiovisual integration: perceiving time-synchronized speaking faces triggers early (~50-80 ms) audiovisual processing, whereas audiovisual speaker identity is computed only ~200 ms later.
Neuronal adaptation can be regarded as a mechanism by which perceptual processing is constantly re-calibrated as a result of specific characteristics of incoming stimuli. Adaptation has been demonstrated in the form of perceptual illusions or aftereffects. The first written record of this is ascribed to Aristotle, who observed that, following prolonged fixation of the downward motion of a waterfall, a static visual scene appears to move upward. In this "waterfall illusion" or, more generally, the "motion aftereffect", a stationary stimulus appears to move in the opposite direction to that of a previously fixated continuous visual motion. Perceptual adaptation is thought to result from selective habituation after prolonged firing of neuronal populations that code specific stimulus attributes, a phenomenon sometimes referred to as the "psychologist's microelectrode", as it can provide valuable insight into the neural fine-tuning to specific stimulus attributes in visual perception. However, while adaptation to simple stimulus attributes such as motion or colour has been known for hundreds of years, a striking recent discovery is that adaptation is also of central importance for how humans perceive complex visual stimuli. Adaptation to male faces, for example, has been found to bias the subsequent perception of androgynous faces towards female gender. Similar adaptation effects have been observed for one of the most important visual social signals: human eye gaze. Jenkins et al. (2006) found that adaptation to gaze in one direction virtually eliminated participants' ability to perceive smaller gaze deviations in the same direction. With this project, we aim at a deeper understanding of high-level adaptation effects.
We are interested in several aspects of high-level adaptation effects, such as the role of similarity between adaptation and test stimuli, the longevity of adaptation effects, and the neural correlates of adaptation as investigated with EEG.
Research on perceptual priming has furthered our understanding of the nature of the representations mediating the recognition and categorisation of everyday stimuli such as words, objects, or faces. Despite intensive research, however, the neural correlates underlying perceptual priming are poorly understood. The fundamental question we ask here is how perceptual representation systems identify information at both abstract (e.g., "a pen") and stimulus-specific (e.g., "a particular image of a pen") levels. We combine priming - a well-established experimental paradigm - with state-of-the-art cognitive neuroscience methods in order to investigate several current questions in this field.
To pursue these questions, we use a novel cross-disciplinary approach, combining our expertise in event-related potentials (ERPs) and face and name processing, transcranial magnetic stimulation (TMS) and word processing, and hemispheric differences and priming paradigms. Overall, we aim at a more complete understanding of the neurocognitive mechanisms that drive the abstract and specific representation systems which mediate the recognition and categorization of everyday stimuli.
While the perception of faces from static portraits has been investigated in many studies, little research has been devoted to processes mediating auditory recognition of people via their voices. This is despite the fact that the voice is by far the most important auditory stimulus that supports person identification, and that it carries a wealth of further social information including emotion, gender, or age. This project is intended to fill a major gap in the research on auditory person perception, by addressing three key aspects of voice perception. First, using a design that incorporates both recognition memory and priming approaches, we explore the role of attention for explicit and implicit voice memory. Second, using novel voice morphing technology, we recently presented the first behavioural evidence that adaptation to non-linguistic information in voices elicits systematic auditory aftereffects in the perception of gender (Schweinberger et al., 2008). Here we will build on this new line of research, and will study behavioural and neurocognitive correlates of auditory adaptation to two other important social signals conveyed by voices: person identity and age. The studies on voice identity adaptation can be expected to have far-reaching theoretical implications with respect to the question of whether individual voices are represented in a prototype-referenced manner, similar to what has been suggested for the representation of facial identity. Building on findings from the visual modality that different visual adaptation effects depend on attention and conscious perception to very different degrees, we study the combined effects of attention and voice adaptation.
With increasing progress in developing basic research paradigms, we have now also begun a number of more applied projects. In one, we look at individual differences in voice recognition abilities and their potential links with autistic behavioral tendencies. In another, we target voice recognition in prosopagnosic individuals. In a clinical project, we have begun to use parameter-specific voice morphing technology to investigate the perception of social signals in the voice by cochlear implant users.
DFG Grant Schw 511/10-1 and 10-2
DFG Grant ZA 745/1-1 and 1-2
Failures to retrieve familiar personal names are among the most frequently reported everyday memory errors. Failures to correctly retrieve semantic information (e.g. occupation, place of residence, etc.) for familiar people are comparatively less frequent. In particular, situations in which a familiar face can be successfully named even though no semantic information can be accessed appear to be extremely rare or nonexistent. In the model of face recognition by Bruce and Young (1986), it was proposed that access to semantic information and names of familiar people occurs in a sequential manner, such that access to semantic information is mandatory before a name can be retrieved. In the context of this topic, we have collected experimental as well as electrophysiological evidence which has challenged this view to some extent, and which has been interpreted to suggest that access to semantic information and names occurs in parallel, involving different brain systems. Another controversy has been about whether semantic information for people is organized in a categorical (i.e., driven by semantic category membership) or purely associative (i.e., driven by co-occurrence) manner. Recent data from the lab have provided evidence that both category membership and co-occurrence of people contribute independently to semantic person memory.
Familiar faces can be easily recognized even from poor-quality images and across a large range of viewing conditions. By contrast, it is surprisingly difficult to recognize or even to match unfamiliar faces across different images. While this suggests qualitative differences between the processing of familiar and unfamiliar faces, little is known about how new representations of faces are formed during learning. Recent research suggests that mental representations of faces may essentially code an average across the perceptual instances encountered during familiarization. At the same time, natural variability in visual encounters with familiar faces also appears to play an important role in the acquisition of robust representations of well-known faces, in a way that remains to be precisely understood. Building on previous work from our group that identified event-related brain potential (ERP) correlates of face recognition, we use behavioural and ERP experiments to improve our understanding of face learning. The project aims at understanding how non-visual cues such as voices and semantic information contribute to face recognition, and investigates how these links develop during face learning. Importantly, the project also utilises sophisticated methods of image manipulation (such as selective photorealistic caricaturing in either shape or reflectance information) to determine the relative role of different kinds of visual information for face learning and recognition. We also currently use these techniques to devise and explore training programmes for individuals with poor face recognition skills.
BBSRC, British Academy
DFG Grant: KA 2997/ 2-1
The efficient analysis and representation of person-related information is one of the most challenging and important tasks of human social perception. In particular, efficient processing is achieved by person categorisation (e.g. old vs. young, male vs. female, own vs. other ethnic group, etc.). However, it remains controversial whether relevant categories (and, where applicable, the associated stereotypical behaviour) are activated automatically during perception. Alternatively, category activation may be determined by controlling factors, such as attention, processing strategies, or goals. The current project investigates this prominent question by means of priming while event-related potentials (ERPs) are recorded. Various ERP components (e.g. N170, N250r, N400) are analysed to examine perceptual and semantic categorisation processes for faces. In particular, we examine the extent to which these priming effects are modulated by selective attention as well as by the categorisation task at hand. Further neuroscientific studies investigate the recently reported 'own age bias', or 'other age effect': the observation that faces from age groups other than the viewer's own are recognized less effectively. Finally, the role of face familiarisation for categorisation will be examined. The project aims at an enhanced understanding of the cognitive and neural bases of person perception and categorisation.
DFG-Projekt Schw 511/8-1
Although research in our department has a clear focus on interpersonal perception and interactions between humans, technological progress has already changed the reality of social interactions for many people. Accordingly, the importance of interactions between humans and machines (e.g., in the form of smartphones, computers, or robots) is increasing quickly, and the many potential applications to psychology come with both opportunities and challenges. For instance, the extent to which service robots can make a positive contribution to care services for the elderly (already a partial reality in countries such as Japan) remains controversial. Another example is virtual reality applications, which are beginning to play a role in clinical interventions, such as in cases of specific affective disorders. Such applications can also be used for training purposes to enhance cognitive and social abilities, and may be complemented by techniques of bio- or neurofeedback that are currently being validated as effective treatments for disorders such as ADHD or autism. Moreover, there is currently intense research into the conditions for the so-called sense of agency (the subjective impression that one's own action has been the cause of an external response), which may be a crucial factor for smooth and pleasant interactions between humans and machines. In one project, we investigate the role of several variables (latency, (multi-)sensory nature, and affective valence of machine-generated responses to human actions) for the sense of agency. In a second, related project, we specifically target human-robot interactions, and investigate the degree to which both human variables (e.g., age, personality, anxiety levels) and robot variables (e.g., degree of humanoid appearance, size, motion parameters) can reduce anxiety and promote smooth interactions between humans and robots.
BMBF-Network 3D-LivingLab; Project Response Latencies and Multisensory Feedback, FKZ: 03ZZ0439E (2017-2019)
BMBF-Network 3D-IMiR; Project PAMRI, FKZ: 03ZZ0459B (2017-2019)
Misinformation in social networks is considered to have manifold adverse effects on the individuals using them, such as the erosion of trust in politicians and traditional media, or reduced adherence to pro-environmental behaviour and to public health measures during a pandemic. A better understanding of how misinformation diffuses through a social network, and of which countermeasures can be effective, therefore seems necessary. However, relatively little is known about the effects that specific countermeasures available to social media platforms (e.g., fact-checking or post deletion) can have on the spread of misinformation. In recently published work, our group has provided first, preliminary evidence on the effectiveness and sufficiency of such countermeasures in containing the spread of misinformation in social networks. Future work on this subject will uncover the role of memory in misinformation consolidation and will reveal how effectively specific countermeasures can contain the spread of misinformation in social networks.
Kauk, J., Kreysa, H., Voigt, A., & Schweinberger, S.R. (2022). #flattenthecurve: Wie begrenzen wir die Welle von Falschinformationen und Verschwörungserzählungen in digitalen sozialen Netzwerken? In F. Hessel, P. Chakkarath, & M. Luy (Eds.), Verschwörungsdenken: Zwischen Populärkultur und politischer Mobilisierung (pp. 259-279). Gießen: Psychosozial-Verlag.
Kauk, J., Kreysa, H., & Schweinberger, S.R. (2021). Understanding and countering the spread of conspiracy theories in social networks: Evidence from epidemiological models of Twitter data. PLoS One, 16(8), e0256179.
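The epidemiological approach mentioned above treats misinformation like an infection spreading through a population of users, with countermeasures raising the rate at which users stop sharing. The following Python sketch is our own toy illustration under these assumptions; it is not the published model from Kauk et al. (2021), and all parameter values are hypothetical.

```python
# Minimal SIR-style sketch of misinformation spread in a social network.
# Illustrative toy model only; all parameter values are hypothetical.

def simulate(beta, gamma, days, s0=0.999, i0=0.001, dt=0.01):
    """Integrate S-I-R dynamics with simple Euler steps.
    S: users not yet exposed, I: users actively sharing the misinformation,
    R: users who have stopped sharing (countermeasures such as fact-checking
    or post deletion are modelled as a higher removal rate gamma)."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # exposure through shared posts
        new_rec = gamma * i * dt      # sharers stop sharing
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# A countermeasure that raises the removal rate shrinks the final
# proportion of users ever exposed to the misinformation:
_, _, reached_baseline = simulate(beta=0.5, gamma=0.1, days=100)
_, _, reached_checked = simulate(beta=0.5, gamma=0.3, days=100)
```

In this toy model, tripling the removal rate (e.g., through faster fact-checking) substantially reduces the final fraction of users who ever share the misinformation, which is the intuition behind testing platform countermeasures in compartmental models.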
Logo of the DogStudies Lab (Graphic: Nora Tippmann)
Animal minds can inform us about the factors driving the evolution of cognition. For a number of reasons, the domestic dog (Canis familiaris) is a very interesting model for investigating questions regarding the evolution of cognitive abilities. The fact that dogs have been living with humans for at least 15,000 years may have led to the selection of certain social cognitive skills by humans, or even to the co-evolution of dogs' abilities with those of humans.
DogStudies addresses domestication and dog-human interactions. In this project, we investigate the social cognitive skills of family and working dogs and their relationship with humans. For example, we are interested in the communicative and cooperative skills of dogs, and in how odor perception and cognition are linked. We also investigate the relationship between dogs and humans and the way dogs are kept, used, and perceived in different cultures around the world. The results of this project contribute to a better understanding not only of dog cognition and the dog-human relationship, but also of the relationship between cultural evolution and domestication, i.e. how cultural and evolutionary processes mutually influence each other.
Human ageing is typically accompanied by some degree of cognitive slowing. In addition, problems in person memory are among the frequent complaints of older adults. At the same time, age does not invariably affect all aspects of performance, and older adults can even outperform younger people in specific tasks. Neurophysiological research demonstrates that high-performing older adults show compensatory activity in brain areas that are not activated in younger or low-performing older participants, possibly via bilateral hemispheric activation. Thus, increased bilateral hemispheric activation may contribute to 'successful ageing'. Event-related brain potentials (ERPs) are a highly sensitive means of investigating age-related changes in neurocognitive processing. In the early phase of this project, we studied face and word processing in older adults. We focussed on bilateral vs. unilateral hemispheric activation differences, and on explicit and implicit person-related memory. In more recent work, we have also investigated older adults' interaction with technology and robots, and age-related aspects of hearing and voice perception.
DFG Grant: WI 3219/ 4-1
Selected Relevant Publications
Based on the group's long-standing experience in research on cognitive ageing and its neuronal correlates, as well as with individuals experiencing impairments of social communication, this project develops and evaluates assessment tools, perceptual and cognitive training programs, and tailor-made interventions to improve various aspects of social interaction. Our approach is based on current neurocognitive models of social perception and interaction. Selected sub-projects use current technology to synthesize naturalistic facial and vocal stimuli with parameter-specific morphing methods. This technology allows us to create stimuli with augmented ("caricatured") social signals, which have been demonstrated to be efficient in improving social perception. Individual aspects of this research programme include (1) an assessment of emotion perception abilities in hearing-impaired individuals with a cochlear implant, (2) the development and evaluation of a training program for improving nonverbal vocal communication in older adults, (3) a systematic assessment of the potential of mu-rhythm neurofeedback training to improve socio-emotional communication and its cortical correlates in adolescents and young adults with autism, (4) the development of improved methods for assessing central auditory processing disorders (CAPD; German: AVWS), a frequent but incompletely understood cause of learning problems in school children, and (5) the development of new diagnostic tools. The subprojects are all characterized by the use of state-of-the-art digital technology to assess and improve social interaction abilities.
For more information on the interdisciplinary and participatory research on autism spectrum disorder (ASD) in Jena, please see the Social Potential in Autism research unit (link), which is also coordinated by this department.
To date, the cognitive and neuronal correlates of person perception and human interaction are surprisingly poorly understood. One example is face recognition and face learning skills: individual differences in these skills have only recently become a focus of research. In this project, we (1) study face learning in people with good and poor face recognition skills, and specifically address the question of whether poor performers might benefit disproportionately from an enhancement of a face's idiosyncratic shape or texture (by means of selective spatial caricaturing). We also (2) study the neural correlates of individual differences in face recognition skills. In specific experiments, we (3) investigate individual differences in processing the second-order spatial configuration of facial features, using a metric manipulation of feature placement. In broader studies with larger groups of participants, we (4) assess relationships between face and voice perception skills and more general skills relevant to social cognition and interaction (such as perspective-taking or theory of mind), as well as with personality characteristics (such as the Big Five, or autistic traits).
DFG-grant KA2997/3-1
Our "social brain" and its machinery may be severely compromised by brain lesions following stroke or traumatic injury, but disorders of person perception may also arise as a consequence of interindividual variability. Standard neuropsychological tests typically do not include tests of face perception, and human introspection about one's own ability to recognize faces or voices is known to be very unreliable, such that many patients may not spontaneously report such difficulties. Our work in this area includes the study of patients with brain lesions. Prosopagnosia is not a unitary condition, but can result from breakdown at various functional and neuroanatomical levels of face processing. Several detailed case studies with dense prosopagnosia show how such impairments can exist even when other aspects of visual object recognition are surprisingly well preserved. Moreover, "covert recognition" of unrecognized faces can be demonstrated in many of these patients. Systematic studies with larger groups of unselected patients from a neurological rehabilitation clinic also demonstrate that clinically relevant disorders in the perception of faces, voices, or names are more frequent than previously assumed. More recently, "developmental" or "congenital" prosopagnosia, in which face recognition is very poor in the absence of known neurological disorders, has drawn strong interest. We emphasize that all these cases require careful functional diagnosis, similar to what has become state-of-the-art in acquired prosopagnosia, but that this should also be complemented by appropriate and standardized self-report instruments. In parallel, an analogous case can be made for developmental phonagnosia, which has been documented more recently. Together, systematic research on developmental disorders in person recognition will further increase our understanding of the human system for face and person perception.
DFG-Projekt Schw 511/6-1
While functional hemispheric asymmetries in information processing have been known for some time, more recent research has focussed on the specific ways in which the two cerebral hemispheres collaborate in the processing of complex stimuli. Interhemispheric cooperation may be indicated by enhanced performance when stimuli are presented tachistoscopically to both visual fields/hemispheres relative to one visual field alone. Such a "bilateral gain" has been reported for words but not pseudowords in lexical decision tasks, and has been attributed to the operation of interhemispheric cell assemblies that exist only for meaningful words with acquired cortical representations. Similarly, a bilateral gain has been reported for famous but not unfamiliar faces in face recognition tasks. In this line of research we further investigate prerequisites of interhemispheric cooperation in face perception. Particular interest is given to the role of face learning. Of further interest is the question whether interhemispheric cooperation is equally important for other person related information such as emotional expressions and personal names. Behavioural and ERP methods are used to study interhemispheric cooperation and its underlying neural correlates.
Current models of face perception propose independent brain systems allowing for parallel analysis of identity and expression. Our aim is to examine whether or not there is some cross-talk between mechanisms by which we perceive different facial signals. Specifically, we will investigate both the timing and possible processing hierarchies in perception of identity and expression, and for this we combine approaches from face perception research with new electrophysiological techniques to reveal the timing of mental events. By studying how the brain processes identity and expression from faces, this project motivates a strong conceptual linkage between two areas of social cognition which as yet have been largely treated as separate.
BBSRC (UK)
JAVMEPS (Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech). An audiovisual database of emotional voice and face stimuli. It includes 2256 stimulus files with recordings from 12 speakers, 4 bisyllabic pseudowords, and 6 naturalistically induced basic emotions in auditory-only, visual-only, and congruent audiovisual (AV) conditions. It further comprises original voices, caricatures, and anticaricatures, as well as time-synchronised congruent and incongruent AV emotions. JAVMEPS is a useful open resource for research into auditory emotion perception, especially when adaptive testing or calibration of task difficulty is required.
Go to Publication: https://doi.org/10.3758/s13428-023-02249-4
Jena Eyewitness Research Stimuli (JERS; Kruse & Schweinberger, 2023). A database of mock crime videos in 2D and virtual reality formats, with corresponding 2D and 3D lineup images. This validated stimulus database is freely accessible to the scientific community interested in eyewitness memory, and supports the enhanced ecological validity that virtual reality can provide in eyewitness research.
Go to Publication: https://doi.org/10.1371/journal.pone.0295033
The JVLMT (Jena Voice Learning and Memory Test). A standardized test for assessing the ability to learn and recognize voices. The test is based on item-response theory and is applicable across languages. The format is similar to that of the Cambridge Face Memory Test (CFMT), and the test takes approximately 22 minutes to complete.
Go to Publication: https://doi.org/10.3758/s13428-022-01818-3
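A test constructed with item-response theory, like the JVLMT, models the probability of a correct response as a function of a listener's latent ability and of item parameters. The following Python sketch shows the standard two-parameter logistic (2PL) item response function as a minimal illustration; the parameter values are hypothetical and do not come from the JVLMT itself.

```python
import math

# Two-parameter logistic (2PL) item response function: the probability
# that a person with latent ability theta answers an item correctly,
# given the item's discrimination a and difficulty b.
# Illustrative only; the parameter values below are hypothetical.

def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A more able listener has a higher success probability on the same item:
p_low = p_correct(theta=-1.0, a=1.5, b=0.0)   # below-average ability
p_high = p_correct(theta=1.0, a=1.5, b=0.0)   # above-average ability
```

Because each item is characterized by its own difficulty and discrimination, ability estimates from such a model are comparable even when different participants, or versions of the test in different languages, use different item subsets.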
The Compassion of Others Scale (COS-7, 2023). A psychometrically tested 7-item scale for measuring compassion in time-constrained research settings (Schlosser, Klimecki et al., 2023).
Go to Publication: https://doi.org/10.1007/s12144-020-01344-5
The Jena Speaker Set (JESS, 2020). A free database of voice stimuli from unfamiliar young and older adult speakers, with 61 young and 58 older female and male speakers uttering various sentences, syllables, read text, semi-spontaneous speech, and vowels. Ample annotated information is available per speaker, making this database a valuable resource for secondary research by the scientific community.
Go to Publication: https://doi.org/10.3758/s13428-019-01296-0