I am a Researcher at the International Digital Economy Academy (IDEA), currently working on talking head generation, face tracking, and video content generation. If you are seeking any form of academic cooperation, please feel free to email me at liuyunfei@idea.edu.cn.
I received my Ph.D. from Beihang University, advised by Prof. Feng Lu. Before that, I received my BSc degree in Computer Science from Beijing Institute of Technology in 2017.
My research interests lie at the intersection of machine learning, deep learning, pattern recognition, and statistical modeling/inference, with applications in computer vision, computational photography, low-level vision, human-computer interaction, and AR/MR.
I serve as a reviewer for international conferences and journals, e.g., CVPR, ICCV, NeurIPS, ACM MM, TPAMI, IJCV, PR, and TVCG.
📢 We are hiring interns at IDEA (based in Shenzhen). Feel free to contact me if you are interested.
🔥 News
- [August, 2024]: I will serve as a Program Committee member for AAAI 2025.
- [July, 2024]: One paper is accepted to ECCV 2024.
- [February, 2024]: 🎉 Two CVPR papers have been accepted.
- [January, 2024]: 🎉 Our GPAvatar has been accepted to ICLR 2024.
- [December, 2023]: 🎉 Our PnP-GA+ has been accepted by TPAMI.
- [July, 2023]: 🎉 Two ICCV papers have been accepted.
- [June, 2023]: 🎉 One TPAMI paper has been published.
- [April, 2023]: 🎉 One CVMJ paper has been accepted.
- [March, 2022]: 🎉 Two CVPR papers have been accepted.
📝 Publications
DiffSHEG: Speech-driven Holistic 3D Expression and Gesture Generation
Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, Qifeng Chen
- We propose DiffSHEG, a Diffusion-based approach for Speech-driven Holistic 3D Expression and Gesture generation with arbitrary length.
- Our diffusion-based co-speech motion generation transformer enables uni-directional information flow from expression to gesture, facilitating improved matching of joint expression-gesture distributions.
MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions
Yunfei Liu, Lijian Lin, Fei Yu, Changyin Zhou, Yu Li
- We propose a unified system for multi-person, diverse, and high-fidelity talking portrait video generation.
- Extensive evaluations demonstrate that the proposed system produces more natural and realistic video portraits compared to previous methods.
First- and Third-Person Video Co-Analysis by Learning Spatial-Temporal Joint Attention
Huangyue Yu, Minjie Cai, Yunfei Liu, Feng Lu
Project | IF=17.730
- We propose a multi-branch deep network, which extracts cross-view joint attention and shared representation from static frames with spatial constraints, in a self-supervised and simultaneous manner.
- We demonstrate how the learnt joint information can benefit various applications.
GazeOnce: Real-Time Multi-Person Gaze Estimation
Mingfang Zhang, Yunfei Liu, Feng Lu
- GazeOnce is the first one-stage, end-to-end multi-person gaze estimation method.
- This unified framework not only runs faster but also achieves lower gaze estimation error than other SOTA methods.
Generalizing Gaze Estimation with Outlier-guided Collaborative Adaptation
Yunfei Liu*, Ruicong Liu*, Haofei Wang, Feng Lu
- PnP-GA is an ensemble of networks that learn collaboratively with the guidance of outliers.
- Existing gaze estimation networks can be directly plugged into PnP-GA and generalize the algorithms to new domains.
Unsupervised Learning for Intrinsic Image Decomposition from a Single Image
Yunfei Liu, Yu Li, Shaodi You, Feng Lu
- USI3D is the first intrinsic image decomposition method that learns from uncorrelated image sets.
- Academic Impact: This work has been adopted by many low-level vision projects, such as Relighting4D, IntrinsicHarmony, and DIB-R++. It has also been discussed on Zhihu.
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
[Project | Citations: 300+]
- We present a new type of backdoor attack that exploits the natural reflection phenomenon.
- Academic Impact: This work has been adopted by many backdoor attack/defense works, such as NAD, and ranks first on Google Scholar.
- [ECCV 2024] AddMe: Zero-shot Group-photo Synthesis by Inserting People into Scenes, Dongxu Yue, Maomao Li, Yunfei Liu, Ailing Zeng, Tianyu Yang, Qin Guo, Yu Li.
- [CVPR 2024] A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization Inversion for Zero-Shot Video Editing, Maomao Li, Yu Li, Tianyu Yang, Yunfei Liu, Dongxu Yue, Zhihui Lin, Dong Xu.
- [ICLR 2024] GPAvatar: Generalizable and Precise Head Avatar from Image(s), Xuangeng Chu, Yu Li, Ailing Zeng, Tianyu Yang, Lijian Lin, Yunfei Liu, Tatsuya Harada.
- [PRCV 2024] Visibility Enhancement for Low-light Hazy Scenarios, Chaoqun Zhuang, Yunfei Liu, Sijia Wen, Feng Lu.
- [ICCV 2023] Accurate 3D Face Reconstruction with Facial Component Tokens, Tianke Zhang, Xuangeng Chu, Yunfei Liu, Lijian Lin, Zhendong Yang, et al.
- [CVMJ 2023] Discriminative Feature Encoding for Intrinsic Image Decomposition, Zhongji Wang, Yunfei Liu, Feng Lu.
- [CVPR 2022] Generalizing Gaze Estimation with Rotation Consistency, Yiwei Bao, Yunfei Liu, Haofei Wang, Feng Lu.
- [IEEE VR 2022] Reconstructing 3D Virtual Face with Eye Gaze from a Single Image, Jiadong Liang, Yunfei Liu, Feng Lu. [Oral]
- [TOMM 2022] Semantic Guided Single Image Reflection Removal, Yunfei Liu, Yu Li, Shaodi You, Feng Lu. GitHub.
- [arXiv 2022] Jitter Does Matter: Adapting Gaze Estimation to New Domains, Mingjie Xu, Haofei Wang, Yunfei Liu, Feng Lu.
- [ISMAR 2021] 3D Photography with One-shot Portrait Relighting, Yunfei Liu, Sijia Wen, Feng Lu.
- [ISMAR 2021] Edge-Guided Near-Eye Image Analysis for Head Mounted Displays, Zhimin Wang*, Yuxin Zhao*, Yunfei Liu, Feng Lu. [Oral] GitHub, Demo video.
- [BMVC 2021] Separating Content and Style for Unsupervised Image-to-Image Translation, Yunfei Liu, Haofei Wang, Yang Yue, Feng Lu. GitHub.
- [arXiv 2021] Unsupervised Two-Stage Anomaly Detection, Yunfei Liu, Chaoqun Zhuang, Feng Lu. GitHub.
- [arXiv 2021] Cloud Sphere: A 3D Shape Representation via Progressive Deformation, Zongji Wang, Yunfei Liu, Feng Lu.
- [arXiv 2021] Vulnerability of Appearance-based Gaze Estimation, Mingjie Xu, Haofei Wang, Yunfei Liu, Feng Lu.
- [AAAI 2020] Separate In Latent Space: Unsupervised Single Image Layer Separation, Yunfei Liu, Feng Lu. GitHub. [Oral]
- [ICPR 2020] Adaptive Feature Fusion Network for Gaze Tracking in Mobile Tablets, Yiwei Bao, Yihua Cheng, Yunfei Liu, Feng Lu. GitHub.
- [ACM MM 2019] What I See Is What You See: Joint Attention Learning for First and Third Person Video Co-analysis, Huangyue Yu, Minjie Cai, Yunfei Liu, Feng Lu.
🎖 Honors and Awards
- Shenzhen Artificial Intelligence Natural Science Award, 2023.
- Shenzhen Pengcheng Special Talent Award, 2023.
- Winner of Hyperspectral City Challenge 1.0, Rank: 1. Yunfei Liu. Project
  - This work adopts multi-channel, multi-spectrum data to guide semantic segmentation for cityscapes.
- China Aeromodelling Design Challenge, Vertical Take-Off and Landing (VTOL) track, First Prize, Rank: 6/70. Yunfei Liu, Changjing Wang, Chao Ma, Haonan Zheng, Yongzhen Pan. Project
- 2021.10 National Scholarship.
- 2021.11 Principal Scholarship.
📖 Educations
- 2017.09 - 2022.11, Beihang University, School of Computer Science and Engineering, State Key Laboratory of Virtual Reality Technology and Systems.
- 2013.08 - 2017.06, Beijing Institute of Technology, School of Computer Science and Technology.
💬 Invited Talks
- 2021.06, Visual intelligence for enhanced perception, Huawei internal talk
- 2021.06, Digital Image Processing, Beihang international class
- 2020.06, Deep learning interpretability, Meituan internal talk
💻 Internships
- 2022.03 - 2022.10, IDEA, Vistring Lab, Shenzhen, China.
- 2016.07 - 2017.05, DJI, Visual Perception Group, Shenzhen, China.