1 Full Chinese Name, Full English Name and English Abbreviation of the Technical Committee
Full Chinese Name: 情感计算与理解专业委员会
Full English Name: Technical Committee on Affective Computing and Understanding
English Abbreviation: TCACU
2 Brief Introduction to the Technical Committee
The Technical Committee on Affective Computing and Understanding of the China Society for Image and Graphics (CSIG) aims to advance the understanding of how emotions are induced, to study the relationships between physiological signals, behavioral cues, multimedia big data and emotions, and to bridge the semantic gap between low-level features and high-level emotions. Drawing on brain science, neuroscience and cognitive science, it explores the mechanisms of emotional feedback, expression and generation, investigates how humans are able to recognize emotions from only small amounts of data, and works toward emotional robots that serve people, machine emotional intelligence comparable to that of humans, and more harmonious human-computer interaction.
3 Information of the Person in Charge of the Technical Committee
Director:
Yao Hongxun, Harbin Institute of Technology
Deputy Directors:
Xue Xiangyang, Fudan University
Zheng Wenming, Southeast University
Jia Jia, Tsinghua University
Secretary-General:
Zhao Sicheng, Tsinghua University
4 Contact Person and Contact Information of the Technical Committee
Contact Person: Zhao Sicheng
Contact Email: schzhao@yeah.net
5 Representative Achievements
1) Key Technologies for Image and Video Quality Assessment
Core Researcher: Li Leida
Award: First Prize of the Shaanxi Provincial Natural Science Award in 2020; industry-academia-research collaboration with Tencent, OPPO, and other companies.
Image quality assessment predicts perceived quality by building quantitative models that are consistent with human perception. Drawing on the perceptual characteristics of the human visual system and visual psychological mechanisms, the team has proposed a series of image and video distortion assessment models and image aesthetics evaluation models; the evaluation process requires no reference information and is interpretable. These models have important application prospects in fields such as imaging optimization, image quality monitoring, smartphones, photography, photo editing, and image retrieval. Relevant results have been published in more than 50 papers in CAS Tier 1 journals and at CCF A-class conferences.
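As a rough illustration of the no-reference (blind) idea described above, the sketch below combines two hand-crafted perceptual cues, sharpness and contrast, into a single quality score without using a reference image. The cues, weights, and function names are illustrative assumptions, not the published assessment models.

```python
# A minimal no-reference quality-score sketch (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def blind_quality_score(img: np.ndarray) -> float:
    """Combine a sharpness cue and a contrast cue into one score.

    Higher values indicate a sharper, better-exposed image; real IQA models
    learn such cues from human opinion scores rather than hand-tuning them.
    """
    img = img.astype(np.float64)
    # Sharpness cue: variance of the Laplacian drops sharply under blur.
    sharpness = laplace(img).var()
    # Contrast cue: global standard deviation of pixel intensities.
    contrast = img.std()
    # Simple fusion; the weights are placeholders, not learned parameters.
    return 0.7 * np.log1p(sharpness) + 0.3 * np.log1p(contrast)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((256, 256))
    blurred = gaussian_filter(clean, sigma=2.0)
    # The blurred copy should receive a noticeably lower score.
    print(blind_quality_score(clean), blind_quality_score(blurred))
```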
2) A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild
Core Researchers: Liu Yuanyuan, Shan Shiguang, et al.
MAFW, a large-scale multi-modal compound affective database collected in natural scenes, is proposed. It is the world's first multi-modal database annotated with both compound emotions and emotion-related caption annotations. MAFW contains 10,045 video clips drawn from movies, reality shows, talk shows, news, variety shows and other sources; it covers three modalities (visual, audio, and text) and provides three forms of annotation: single-expression labels, multi-expression labels, and bilingual emotional descriptive texts. The database has been released at https://mafw-database.github.io/MAFW/. In addition, the accompanying paper proposes an EM algorithm-based label reliability analysis method and a Transformer-based multi-modal emotional segment representation learning method, which exploits the relationships among the facial changes of different emotions to recognize compound emotions. Extensive experiments on MAFW show that the proposed method outperforms other state-of-the-art methods on both single-label and multi-label multi-modal emotion recognition.
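As a rough illustration of what Transformer-based multi-modal fusion for multi-label (compound) expression recognition can look like, the PyTorch sketch below projects visual, audio and text features into a shared token space, fuses them with a Transformer encoder, and emits one logit per emotion class. The feature dimensions, class count and fusion layout are assumptions made for illustration, not the MAFW authors' exact architecture.

```python
# A minimal multi-modal fusion sketch for multi-label emotion recognition.
import torch
import torch.nn as nn

class MultiModalEmotionFusion(nn.Module):
    def __init__(self, vis_dim=512, aud_dim=128, txt_dim=768,
                 d_model=256, num_classes=11):
        super().__init__()
        # Project each modality into a shared token space.
        self.proj = nn.ModuleDict({
            "visual": nn.Linear(vis_dim, d_model),
            "audio": nn.Linear(aud_dim, d_model),
            "text": nn.Linear(txt_dim, d_model),
        })
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)  # one logit per emotion

    def forward(self, visual, audio, text):
        # Each input: (batch, seq_len_m, dim_m); concatenate modality tokens.
        tokens = torch.cat([self.proj["visual"](visual),
                            self.proj["audio"](audio),
                            self.proj["text"](text)], dim=1)
        fused = self.encoder(tokens)   # cross-modal self-attention
        pooled = fused.mean(dim=1)     # simple mean pooling over all tokens
        return self.head(pooled)       # multi-label logits

if __name__ == "__main__":
    model = MultiModalEmotionFusion()
    logits = model(torch.randn(2, 16, 512),   # 16 visual frame features
                   torch.randn(2, 8, 128),    # 8 audio segment features
                   torch.randn(2, 32, 768))   # 32 subword text features
    # Multi-label training would apply BCEWithLogitsLoss to these logits.
    print(logits.shape)  # torch.Size([2, 11])
```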