Date: June 5, 2024
The Department of Psychology, Faculty of Letters, will hold the following lecture as the 64th Psychology Laboratory Seminar; all are welcome to attend. The talk is intended for graduate students and above, and will be given in English.

The 64th Psychology Laboratory Seminar
Date and time: Wednesday, June 5, 2024, from 4:00 p.m.
Venue: Lecture Room 1, Law and Letters Building No. 2 (法文2号館1番大教室), Faculty of Letters


Speaker: Prof. David Whitney (University of California, Berkeley)
Title: The aperture problem of emotion perception
Abstract:
Understanding emotion is of paramount importance for humans. Although most work on emotion recognition focuses on face perception, my lab has taken a different approach and shown that there is a fundamental aperture problem in emotion perception. We developed a novel inferential emotion tracking (IET) task to measure observers’ abilities to track emotion information in natural movies when faces were completely masked and only the background context was visible (Chen & Whitney, PNAS, 2019). We found that observers use the spatial and temporal context to perceive emotion and that relying on face information alone is misleading, highlighting an aperture problem in emotion perception (Chen & Whitney, Emotion, 2020). Our finding that the use of context in emotion perception is not delayed relative to faces (Chen & Whitney, Cognition, 2021) indicates parallel pathways for face-based and context-based emotion recognition. The brain solves the emotion aperture problem by integrating different cues (i.e., background context and face information) but does so in a heuristic or naïve Bayesian manner which, surprisingly, does not weight cue reliability (Ortega & Whitney, submitted). Some atypical populations, including those with autism, do not integrate background context information successfully and, because of this, are not as sensitive or accurate in emotion recognition tasks (Ortega et al., Sci. Reports, 2023). Indeed, individual differences reveal that observers who are better able to encode and incorporate the background context are more successful at accurately tracking the emotions of others (Ortega & Whitney, VSS, 2023). The aperture problem extends beyond emotion perception to all visuo-social understanding; as an example, we have used it to measure the trustworthiness of faces in movies (Ortega et al., submitted).
Collectively, our research demonstrates a fundamental aperture problem in social and emotional perception, and it reveals why computer vision models of emotion and trustworthiness fail so spectacularly when tested with naturally dynamic scenes: they overemphasize facial expression information at the expense of dynamic background context. To address this shortcoming, we created the largest psychophysical dataset of continuous emotion tracking in natural movies (Ren et al., IEEE/CVF, 2024), which serves as a benchmark for improving computer vision models of emotion perception, diagnostic testing in atypical populations, and computational models of emotion recognition.
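The distinction the abstract draws between reliability-weighted (Bayesian-optimal) cue combination and a heuristic that ignores cue reliability can be illustrated with a minimal numerical sketch. All numbers below are hypothetical and purely for illustration; they are not taken from the studies cited.

```python
# Two hypothetical cues estimating the same quantity (e.g., emotional valence),
# each with its own noise level (variance). Reliability = 1 / variance.
face_est, face_var = 0.8, 0.04   # face cue: more reliable in this example
ctx_est, ctx_var = 0.2, 0.16     # context cue: less reliable here

# Bayesian-optimal integration: weight each cue by its relative reliability.
w_face = (1 / face_var) / (1 / face_var + 1 / ctx_var)
optimal = w_face * face_est + (1 - w_face) * ctx_est

# Heuristic integration that ignores cue reliability: a simple unweighted average.
heuristic = (face_est + ctx_est) / 2

print(f"optimal: {optimal:.2f}, heuristic: {heuristic:.2f}")
```

The optimal estimate is pulled toward the more reliable cue, whereas the unweighted average treats both cues the same regardless of their noise, which is the kind of reliability-blind integration the abstract describes.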


Contact: Psychology Laboratory, shinri -at- L.u-tokyo.ac.jp (replace -at- with @)