Stockholm University

Petri Laukka, Professor

Publications

A selection from the Stockholm University publication database

  • Blended Emotions can be Accurately Recognized from Dynamic Facial and Vocal Expressions

    2023. Alexandra Israelsson, Anja Seiger, Petri Laukka. Journal of Nonverbal Behavior 47 (3), 267-284

    Article

    People frequently report feeling more than one emotion at the same time (i.e., blended emotions), but studies on nonverbal communication of such complex states remain scarce. Actors (N = 18) expressed blended emotions consisting of all pairwise combinations of anger, disgust, fear, happiness, and sadness – using facial gestures, body movement, and vocal sounds – with the intention that both emotions should be equally prominent in the resulting expression. Accuracy of blended emotion recognition was assessed in two preregistered studies using a combined forced-choice and rating scale task. For each recording, participants were instructed to choose the two scales (out of 5 available scales: anger, disgust, fear, happiness, and sadness) that best described their perception of the emotional content and to judge how clearly each of the two chosen emotions was perceived. Study 1 (N = 38) showed that all emotion combinations were accurately recognized from multimodal (facial/bodily/vocal) expressions, with significantly higher ratings on scales corresponding to intended vs. non-intended emotions. Study 2 (N = 51) showed that all emotion combinations were also accurately perceived when the recordings were presented in unimodal visual (facial/bodily) and auditory (vocal) conditions, although accuracy was lower in the auditory condition. To summarize, results suggest that blended emotions, including combinations of both same-valence and other-valence emotions, can be accurately recognized from dynamic facial/bodily and vocal expressions. The validated recordings of blended emotion expressions are freely available for research purposes.
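    The scoring logic of this combined forced-choice and rating-scale task can be sketched in a few lines. The sketch below is a minimal illustration under assumed data shapes, not the authors' analysis code: a ratings array holds each participant's scale ratings per recording, and a paired t-test compares mean ratings on intended versus non-intended scales.

    ```python
    import numpy as np
    from scipy import stats

    # Stand-in random ratings with the assumed shape: 38 participants x
    # 40 recordings x 5 emotion scales. In real data, the two chosen scales
    # would carry clarity ratings (e.g., 1-9) and the rest would be 0.
    rng = np.random.default_rng(0)
    ratings = rng.integers(0, 10, size=(38, 40, 5)).astype(float)

    # Boolean mask marking the two intended emotions for each recording.
    intended = np.zeros((40, 5), dtype=bool)
    for rec in range(40):
        intended[rec, rng.choice(5, size=2, replace=False)] = True

    # Per-participant mean rating on intended vs. non-intended scales.
    mean_intended = ratings[:, intended].mean(axis=1)
    mean_other = ratings[:, ~intended].mean(axis=1)

    # Accurate recognition: intended scales rated significantly higher.
    t, p = stats.ttest_rel(mean_intended, mean_other)
    print(f"intended = {mean_intended.mean():.2f}, "
          f"non-intended = {mean_other.mean():.2f}, t = {t:.2f}, p = {p:.3g}")
    ```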

  • Cognition, prior aggression, and psychopathic traits in relation to impaired multimodal emotion recognition in psychotic spectrum disorders

    2023. Lennart Högman (et al.). Frontiers in Psychiatry 14

    Article

    Background: Psychopathic traits have been associated with impaired emotion recognition in criminal, clinical, and community samples. A recent study, however, suggested that cognitive impairment reduced the relationship between psychopathy and emotion recognition. We therefore investigated whether reasoning ability and psychomotor speed affect emotion recognition more than self-rated psychopathy on the Triarchic Psychopathy Measure (TriPM) does, in individuals with psychotic spectrum disorders (PSD) with and without a history of aggression, as well as in healthy individuals.

    Methods: Eighty individuals with PSD (schizophrenia, schizoaffective disorder, delusional disorder, other psychoses, psychotic bipolar disorder) and a documented history of aggression (PSD+Agg) were compared with 54 individuals with PSD without prior aggression (PSD-Agg) and with 86 healthy individuals on the Emotion Recognition Assessment in Multiple Modalities (ERAM test). Individuals were psychiatrically stable and in remission from possible substance use disorders. Scaled scores on matrix reasoning, averages of dominant-hand psychomotor speed, and self-rated TriPM scores were obtained.

    Results: Low reasoning ability, low psychomotor speed, patient status, and prior aggression were all associated with total accuracy on the ERAM test. PSD groups performed worse than the healthy group. Whole-group correlations were found between TriPM total and subscale scores and ERAM performance, but no associations with TriPM scores remained within each group or in general linear models accounting for reasoning ability, psychomotor speed, understanding of emotion words, and prior aggression.

    Conclusion: Self-rated psychopathy was not independently linked to emotion recognition in PSD groups when considering prior aggression, patient status, reasoning ability, psychomotor speed and emotion word understanding. 
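    The key analysis here is a general linear model testing whether TriPM scores predict emotion recognition once covariates are included. A minimal sketch of how such a model could be specified with statsmodels follows; all column names and the simulated data are hypothetical stand-ins, not the study's actual variables.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Stand-in data: one row per participant (hypothetical column names).
    rng = np.random.default_rng(0)
    n = 220
    df = pd.DataFrame({
        "eram_total": rng.normal(0.6, 0.1, n),  # ERAM accuracy
        "tripm": rng.normal(50, 10, n),         # self-rated psychopathy
        "group": rng.choice(["PSD+Agg", "PSD-Agg", "healthy"], n),
        "reasoning": rng.normal(10, 3, n),      # matrix reasoning (scaled)
        "psychomotor": rng.normal(10, 3, n),    # psychomotor speed
        "emotion_words": rng.normal(0, 1, n),   # emotion word understanding
    })

    # Does TriPM predict ERAM accuracy once group status and the cognitive
    # covariates are accounted for?
    model = smf.ols(
        "eram_total ~ tripm + C(group) + reasoning + psychomotor + emotion_words",
        data=df,
    ).fit()
    print(model.summary())
    ```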

  • The Voice of Eyewitness Accuracy

    2023. Philip U. Gustafsson, Petri Laukka, Torun Lindholm. ICPS 2023 Brussels, 41-41

    Conference

    In two studies, we examined vocal characteristics of accuracy. Participants watched a staged-crime film and were interviewed as eyewitnesses. A mega-analysis showed that correct responses were uttered with 1) a higher pitch, 2) greater energy in the first formant region, 3) higher speech rate and 4) shorter pauses.

  • Vocal characteristics of accuracy in eyewitness testimony

    2023. Philip U. Gustafsson, Petri Laukka, Torun Lindholm. Speech Communication 146, 82-92

    Article

    In two studies, we examined if correct and incorrect testimony statements were produced with vocally distinct characteristics. Participants watched a staged crime film and were interviewed as eyewitnesses. Witness responses were recorded and then analysed along 16 vocal dimensions. Results from Study 1 showed six vocal characteristics of accuracy, which included dimensions of frequency, energy, spectral balance and temporality. Study 2 attempted to replicate Study 1, and also examined effects of emotion on the vocal characteristic-accuracy relationship. Although the results from Study 1 were not directly replicated in Study 2, a mega-analysis of the two datasets showed four distinct vocal characteristics of accuracy; correct responses were uttered with a higher pitch (F0 [M]), greater energy in the first formant region (F1 [amp]), higher speech rate (VoicedSegPerSec) and shorter pauses (UnvoicedSegM). Taken together, this study advances previous knowledge by showing that accuracy is not only indicated by what we say, but also by how we say it.
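    The four feature labels reported above (F0 [M], F1 [amp], VoicedSegPerSec, UnvoicedSegM) closely match functionals from the eGeMAPS acoustic parameter set. Assuming that correspondence holds, features like these can be extracted with the opensmile Python package; the file name is hypothetical, and the exact mapping of the paper's labels onto eGeMAPS descriptors is an assumption.

    ```python
    import opensmile

    # Extract eGeMAPS functionals from one witness response recording.
    smile = opensmile.Smile(
        feature_set=opensmile.FeatureSet.eGeMAPSv02,
        feature_level=opensmile.FeatureLevel.Functionals,
    )
    features = smile.process_file("witness_response.wav")  # hypothetical file

    # Descriptors plausibly corresponding to the reported features.
    cols = [
        "F0semitoneFrom27.5Hz_sma3nz_amean",  # mean pitch (F0 [M])
        "F1amplitudeLogRelF0_sma3nz_amean",   # energy in first formant region
        "VoicedSegmentsPerSec",               # speech rate
        "MeanUnvoicedSegmentLength",          # pause length
    ]
    print(features[cols].T)
    ```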

  • Comparing supervised and unsupervised approaches to multimodal emotion recognition

    2021. Marcos Fernández Carbonell, Magnus Boman, Petri Laukka. PeerJ Computer Science 7

    Article

    We investigated emotion classification from brief video recordings from the GEMEP database wherein actors portrayed 18 emotions. Vocal features consisted of acoustic parameters related to frequency, intensity, spectral distribution, and durations. Facial features consisted of facial action units. We first performed a series of person-independent supervised classification experiments. Best performance (AUC = 0.88) was obtained by merging the output from the best unimodal vocal (Elastic Net, AUC = 0.82) and facial (Random Forest, AUC = 0.80) classifiers using a late fusion approach and the product rule method. All 18 emotions were recognized with above-chance recall, although recognition rates varied widely across emotions (e.g., high for amusement, anger, and disgust; and low for shame). Multimodal feature patterns for each emotion are described in terms of the vocal and facial features that contributed most to classifier performance. Next, a series of exploratory unsupervised classification experiments was performed to gain more insight into how emotion expressions are organized. Solutions from traditional clustering techniques were interpreted using decision trees in order to explore which features underlie clustering. Another approach utilized various dimensionality reduction techniques paired with inspection of data visualizations. Unsupervised methods did not cluster stimuli in terms of emotion categories, but several explanatory patterns were observed. Some could be interpreted in terms of valence and arousal, but actor- and gender-specific aspects also contributed to clustering. Identifying explanatory patterns holds great potential as a meta-heuristic when unsupervised methods are used in complex classification tasks.
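    The late-fusion scheme is straightforward to express in code: train the two unimodal classifiers, multiply their predicted class probabilities (the product rule), renormalize, and score the fused probabilities. The sketch below uses scikit-learn with random stand-in data; elastic-net-penalized logistic regression is one common reading of an "Elastic Net" classifier, and the feature dimensions are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-ins for vocal (acoustic) and facial (action unit)
    # features of the same clips; 18 emotion classes as in GEMEP.
    rng = np.random.default_rng(0)
    X_voice = rng.normal(size=(1260, 88))
    X_face = rng.normal(size=(1260, 17))
    y = rng.integers(0, 18, size=1260)

    train, test = train_test_split(np.arange(len(y)), test_size=0.3,
                                   stratify=y, random_state=0)

    # Unimodal classifiers: elastic-net logistic regression for voice,
    # random forest for facial action units.
    voice_clf = LogisticRegression(penalty="elasticnet", solver="saga",
                                   l1_ratio=0.5, max_iter=5000)
    face_clf = RandomForestClassifier(n_estimators=300, random_state=0)
    voice_clf.fit(X_voice[train], y[train])
    face_clf.fit(X_face[train], y[train])

    # Late fusion via the product rule: multiply unimodal class probabilities
    # (columns align because both classifiers saw the same sorted labels).
    proba = (voice_clf.predict_proba(X_voice[test])
             * face_clf.predict_proba(X_face[test]))
    proba /= proba.sum(axis=1, keepdims=True)

    print("fused AUC:", roc_auc_score(y[test], proba, multi_class="ovr"))
    ```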

  • Effects of aging on emotion recognition from dynamic multimodal expressions and vocalizations

    2021. Diana S. Cortes (et al.). Scientific Reports 11 (1)

    Article

    Age-related differences in emotion recognition have predominantly been investigated using static pictures of facial expressions, and positive emotions beyond happiness have rarely been included. The current study instead used dynamic facial and vocal stimuli, and included a wider than usual range of positive emotions. In Task 1, younger and older adults were tested for their abilities to recognize 12 emotions from brief video recordings presented in visual, auditory, and multimodal blocks. Task 2 assessed recognition of 18 emotions conveyed by non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Results from both tasks showed that younger adults had significantly higher overall recognition rates than older adults. In Task 1, significant group differences (younger > older) were only observed for the auditory block (across all emotions), and for expressions of anger, irritation, and relief (across all presentation blocks). In Task 2, significant group differences were observed for 6 out of 9 positive, and 8 out of 9 negative emotions. Overall, results indicate that recognition of both positive and negative emotions show age-related differences. This suggests that the age-related positivity effect in emotion recognition may become less evident when dynamic emotional stimuli are used and happiness is not the only positive emotion under study.

  • Investigating individual differences in emotion recognition ability using the ERAM test

    2021. Petri Laukka (et al.). Acta Psychologica 220

    Article

    Individuals vary in emotion recognition ability (ERA), but the causes and correlates of this variability are not well understood. Previous studies have largely focused on unimodal facial or vocal expressions and a small number of emotion categories, which may not reflect how emotions are expressed in everyday interactions. We investigated individual differences in ERA using a brief test containing dynamic multimodal (facial and vocal) expressions of 5 positive and 7 negative emotions (the ERAM test). Study 1 (N = 593) showed that ERA was positively correlated with emotional understanding, empathy, and openness, and negatively correlated with alexithymia. Women also had higher ERA than men. Study 2 was conducted online and replicated the recognition rates from Study 1 (which was conducted in the lab) in a different sample (N = 106). Study 2 also showed that participants who had higher ERA were more accurate in their meta-cognitive judgments about their own accuracy. Recognition rates for visual, auditory, and audio-visual expressions were substantially correlated in both studies. Results provide further clues about the underlying structure of ERA and its links to broader affective processes. The ERAM test can be used for both lab and online research, and is freely available for academic research.

  • Spontaneous vocal expressions from everyday life convey discrete emotions to listeners

    2021. Patrik N. Juslin (et al.). Emotion 21 (6), 1281-1301

    Article

    Emotional expression is crucial for social interaction. Yet researchers disagree about whether nonverbal expressions truly reflect felt emotions and whether they convey discrete emotions to perceivers in everyday life. In the present study, 384 clips of vocal expression recorded in a field setting were rated by the speakers themselves and by naïve listeners with regard to their emotional contents. Results suggested that most expressions in everyday life are reflective of felt emotions in speakers. Seventy-three percent of the voice clips involved moderate to high emotion intensity. Speaker–listener agreement concerning expressed emotions was 5 times higher than would be expected from chance alone, and agreement was significantly higher for voice clips with high emotion intensity than for clips with low intensity. Acoustic analysis of the clips revealed emotion-specific patterns of voice cues. “Mixed emotions” occurred in 41% of the clips. Such expressions were typically interpreted by listeners as conveying one or the other of the two felt emotions. Mixed emotions were rarely recognized as such. The results are discussed regarding their implications for the domain of emotional expression in general, and vocal expression in particular.
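    The reported "5 times higher than chance" agreement corresponds to a simple ratio of observed to expected agreement. A minimal sketch, assuming uniform guessing as the chance model (the study may have used a different baseline), with simulated labels rather than the study's data:

    ```python
    import numpy as np

    def agreement_ratio(speaker, listener, n_categories):
        """Observed speaker-listener agreement divided by chance agreement."""
        observed = np.mean(np.asarray(speaker) == np.asarray(listener))
        chance = 1.0 / n_categories  # hit rate expected from uniform guessing
        return observed / chance

    # Hypothetical example: 384 clips, 12 emotion categories, listeners
    # matching the speaker's label on roughly 45% of clips.
    rng = np.random.default_rng(1)
    speaker = rng.integers(0, 12, size=384)
    listener = np.where(rng.random(384) < 0.4,
                        speaker, rng.integers(0, 12, size=384))
    print(f"{agreement_ratio(speaker, listener, 12):.1f}x chance")
    ```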

  • Training Emotion Recognition Accuracy

    2021. Lillian Döllinger (et al.). Frontiers in Psychology 12

    Article

    Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs—one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task about patients' emotional cues. Post measurement took place approximately a week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that multimodal training was significantly more effective in improving multimodal ERA compared to micro expression training or the control training; and the micro expression training was significantly more effective in improving micro expression ERA compared to the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the outcome measure about recognizing patients' emotion cues. There were no transfer effects of the training programs, meaning that participants only improved significantly for the specific facet of ERA that they had trained on. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.
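    The Aligned Rank Transform used in the analysis strips out all effects except the one being tested, ranks the aligned values, and then runs an ordinary ANOVA on the ranks, interpreting only the effect of interest. Below is a minimal sketch for one main effect in a fully between-subjects two-factor design; the study's actual mixed design with repeated measures would additionally require subject-level random effects (as implemented in, e.g., R's ARTool), and all variable names are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    def art_main_effect(df, dv, effect, other):
        """Aligned Rank Transform test of `effect` in a two-factor design."""
        grand = df[dv].mean()
        cell_mean = df.groupby([effect, other])[dv].transform("mean")
        residual = df[dv] - cell_mean
        # Align: keep only the residual plus the estimated effect of interest.
        effect_est = df.groupby(effect)[dv].transform("mean") - grand
        df = df.assign(aligned_rank=(residual + effect_est).rank())
        # Full-factorial ANOVA on the ranks; interpret only the C(effect) row.
        model = ols(f"aligned_rank ~ C({effect}) * C({other})", data=df).fit()
        return sm.stats.anova_lm(model, typ=2)

    # Stand-in data: ERA gain scores by training condition and gender.
    rng = np.random.default_rng(0)
    data = pd.DataFrame({
        "gain": rng.normal(0, 1, 67),
        "training": rng.choice(["multimodal", "micro", "control"], 67),
        "gender": rng.choice(["women", "men"], 67),
    })
    print(art_main_effect(data, "gain", "training", "gender"))
    ```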

  • What Do We Hear in the Voice?

    2021. Hillary Anger Elfenbein (et al.). Personality and Social Psychology Bulletin

    Article

    The current study investigated what can be understood from another person's tone of voice. Participants from five English-speaking nations (Australia, India, Kenya, Singapore, and the United States) listened to vocal expressions of nine positive and nine negative affective states recorded by actors from their own nation. In response, they wrote open-ended judgments of what they believed the actor was trying to express. Responses cut across the chronological emotion process and included descriptions of situations, cognitive appraisals, feeling states, physiological arousal, expressive behaviors, emotion regulation, and attempts at social influence. Accuracy in terms of emotion categories was overall modest, whereas accuracy in terms of valence and arousal was more substantial. Coding participants' 57,380 responses yielded a taxonomy of 56 categories, which included affective states as well as person descriptors, communication behaviors, and abnormal states. Open-ended responses thus reveal a wide range of ways in which people spontaneously perceive the intent behind emotional speech prosody.

  • Cross-Cultural Emotion Recognition and In-Group Advantage in Vocal Expression

    2020. Petri Laukka, Hillary Anger Elfenbein. Emotion Review 13 (1), 3-11

    Article

    Most research on cross-cultural emotion recognition has focused on facial expressions. To integrate the body of evidence on vocal expression, we present a meta-analysis of 37 cross-cultural studies of emotion recognition from speech prosody and nonlinguistic vocalizations, including expressers from 26 cultural groups and perceivers from 44 different cultures. Results showed that a wide variety of positive and negative emotions could be recognized with above-chance accuracy in cross-cultural conditions. However, there was also evidence for in-group advantage with higher accuracy in within- versus cross-cultural conditions. The distance between expresser and perceiver culture, measured via Hofstede's cultural dimensions, was negatively correlated with recognition accuracy and positively correlated with in-group advantage. Results are discussed in relation to the dialect theory of emotion.
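    The cultural-distance predictor can be illustrated as a distance over Hofstede's dimension scores, correlated with recognition accuracy across study conditions. A sketch with simulated numbers (the meta-analysis used the actual published dimension scores, and may have standardized or weighted the dimensions differently):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical Hofstede dimension scores (0-100) for the expresser and
    # perceiver culture of each cross-cultural study condition.
    rng = np.random.default_rng(2)
    n = 37
    expresser = rng.uniform(0, 100, size=(n, 6))  # 6 Hofstede dimensions
    perceiver = rng.uniform(0, 100, size=(n, 6))
    accuracy = rng.uniform(0.3, 0.8, size=n)      # recognition accuracy

    # Cultural distance: Euclidean distance across the dimension profiles.
    distance = np.linalg.norm(expresser - perceiver, axis=1)

    # The meta-analytic claim: greater distance, lower accuracy.
    r, p = stats.pearsonr(distance, accuracy)
    print(f"r = {r:.2f}, p = {p:.3f}")
    ```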

  • Oxytocin may facilitate neural recruitment in medial prefrontal cortex and superior temporal gyrus during emotion recognition in young but not older adults

    2020. Diana S. Cortes (et al.). 2020 Cognitive Aging Conference, 22-23

    Conference

    Normal adult aging is associated with decline in some socioemotional abilities, such as the ability to recognize emotions in others, and age-related neurobiological processes may contribute to these deficits. There is increasing evidence that the neuropeptide oxytocin plays a key role in social cognition, including emotion recognition. The mechanisms through which oxytocin promotes emotion recognition are not yet well understood, particularly in aging. In a randomized, double-blind, placebo-controlled within-subjects design, we investigated the extent to which a single dose of 40 IU of intranasal oxytocin facilitates emotion recognition in 40 younger (M = 24.90 yrs., SD = 2.97, 48% women) and 40 older (M = 69.70 yrs., SD = 2.99, 55% women) men and women. During two fMRI sessions, participants viewed dynamic positive and negative emotional displays. Preliminary analyses show that younger participants recognized positive and negative emotions more accurately than older participants (p < .001), with this behavioral effect not modulated by oxytocin. In the brain data, however, we found an age x treatment interaction in medial prefrontal cortex (xyz [14, 14, 6], p = .007) and superior temporal gyrus (xyz [53, 9, 2], p = .031). In particular, oxytocin (vs. placebo) reduced activity in these regions for older participants, while it enhanced activity in these regions for younger participants. In line with previous research, these findings support the notion that the effects of oxytocin vary by context and individual factors (e.g., social proficiency, age).

