I am a Researcher at the International Digital Economy Academy (IDEA), currently working on talking head generation, face tracking, and video content generation. If you are seeking any form of academic cooperation, please feel free to email me at liuyunfei@idea.edu.cn.

I received my Ph.D. from Beihang University, advised by Prof. Feng Lu. Previously, I received my B.Sc. degree in Computer Science from Beijing Institute of Technology in 2017.

My research interests lie at the intersection of machine learning, deep learning, pattern recognition, and statistical modeling/inference, with applications in computer vision, computational photography, low-level vision, human-computer interaction, and AR/MR.

I serve as a reviewer for international conferences and journals, e.g., CVPR, ICCV, NeurIPS, ACM MM, TPAMI, IJCV, PR, and TVCG.

📢 We are hiring interns at IDEA (based in Shenzhen). Feel free to contact me if you are interested.


🔥 News

  • [February 2024]: 🎉 Two CVPR papers have been accepted.
  • [January 2024]: 🎉 Our GPAvatar has been accepted by ICLR 2024.
  • [December 2023]: 🎉 Our PnP-GA+ has been accepted by TPAMI.
  • [July 2023]: 🎉 Two ICCV papers have been accepted.
  • [June 2023]: 🎉 One TPAMI paper has been published.
  • [April 2023]: 🎉 One CVMJ paper has been accepted.
  • [March 2022]: 🎉 Two CVPR papers have been accepted.

📝 Publications

CVPR 2024

DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation

Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, Qifeng Chen

Project

  • We propose DiffSHEG, a diffusion-based approach for speech-driven holistic 3D expression and gesture generation of arbitrary length.
  • Our diffusion-based co-speech motion generation transformer enables uni-directional information flow from expression to gesture, facilitating improved matching of the joint expression-gesture distribution.
ICCV 2023

MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions

Yunfei Liu, Lijian Lin, Fei Yu, Changyin Zhou, Yu Li

Project

  • We propose a unified system for multi-person, diverse, and high-fidelity talking portrait video generation.
  • Extensive evaluations demonstrate that the proposed system produces more natural and realistic video portraits compared to previous methods.
TPAMI 2023

First- and Third-Person Video Co-Analysis by Learning Spatial-Temporal Joint Attention

Huangyue Yu, Minjie Cai, Yunfei Liu, Feng Lu

Project | IF=17.730

  • We propose a multi-branch deep network that simultaneously extracts cross-view joint attention and a shared representation from static frames with spatial constraints, in a self-supervised manner.
  • We demonstrate how the learnt joint information can benefit various applications.
CVPR 2022

GazeOnce: Real-Time Multi-Person Gaze Estimation

Mingfang Zhang, Yunfei Liu, Feng Lu

Project

  • GazeOnce is the first one-stage, end-to-end gaze estimation method.
  • This unified framework not only runs faster but also achieves lower gaze estimation error than other SOTA methods.
ICCV 2021

Generalizing Gaze Estimation with Outlier-guided Collaborative Adaptation

Yunfei Liu*, Ruicong Liu*, Haofei Wang, Feng Lu

Project

  • PnP-GA is an ensemble of networks that learn collaboratively with the guidance of outliers.
  • Existing gaze estimation networks can be directly plugged into PnP-GA and generalize the algorithms to new domains.
CVPR 2020

Unsupervised Learning for Intrinsic Image Decomposition from a Single Image

Yunfei Liu, Yu Li, Shaodi You, Feng Lu

Project

  • USI3D is the first intrinsic image decomposition method that learns from uncorrelated image sets.
  • Academic impact: this work is used by many low-level vision projects, such as Relighting4D, IntrinsicHarmony, and DIB-R++. See also the discussions on Zhihu.
ECCV 2020

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

Yunfei Liu, Xingju Ma, James Bailey, Feng Lu

Project | Citations: 300+

  • We present a new type of backdoor attack based on the natural reflection phenomenon.
  • Academic impact: this work is referenced by many backdoor attack/defense works, such as NAD, and ranks first on Google Scholar.

🎖 Honors and Awards

ICCV 2019 Workshop

Winner of the Hyperspectral City Challenge 1.0 (Rank: 1).

Yunfei Liu

Project

  • We adopt multi-channel, multi-spectral data to guide semantic segmentation for cityscapes.
CADC 2015

Vertical Take-Off and Landing (VTOL) track, First Prize

Yunfei Liu, Changjing Wang, Chao Ma, Haonan Zheng, Yongzhen Pan.

Project

  • China Aeromodelling Design Challenge, Vertical Take-Off and Landing (VTOL) track, First Prize (Rank: 6/70).
  • 2021.10: National Scholarship.
  • 2021.11: Principal Scholarship.

📖 Educations

  • 2017.09 - 2022.11, Ph.D., Beihang University, School of Computer Science and Engineering, State Key Laboratory of Virtual Reality Technology and Systems.
  • 2013.08 - 2017.06, B.Sc., Beijing Institute of Technology, School of Computer Science and Technology.

💬 Invited Talks

  • 2021.06, Visual intelligence for enhanced perception, Huawei internal talk
  • 2021.06, Digital Image Processing, Beihang international class
  • 2020.06, Deep learning interpretability, Meituan internal talk

💻 Internships

  • 2022.03 - 2022.10, IDEA, Vistring Lab, Shenzhen, China.
  • 2016.07 - 2017.05, DJI, Visual Perception Group, Shenzhen, China.
Last updated in August 2023.