🌇 Hi there👋🏻, I am Yunfei (☁️🪽), a senior researcher at International Digital Economy Academy (IDEA).
I currently lead research efforts on talking head generation, face tracking, human video generation, human-centric 3DGS, and video content generation. If you are seeking any form of academic cooperation, please feel free to email me at liuyunfei@idea.edu.cn.
I received my Ph.D. from Beihang University, advised by Prof. Feng Lu. Before that, I received my B.Sc. in Computer Science from Beijing Institute of Technology in 2017.
My research aims to build multimodal, highly expressive, lifelike, and immersive interactive agents, covering perception, understanding, reconstruction, and generation of humans and the world. Specifically:
- 👀 Human–environment interaction perception: PnP-GA, GazeOnce, WISWYS.
- 😏 2D/3D head and body reconstruction, animation, and generation: TEASER, GUAVA, GPAvatar, MODA, HRAvatar, DiffSHEG, TokenFace.
- 🎞 Image/Video editing and generation: Qffusion, STEM-inv, AddMe.
Previously, I also worked on network interpretability, AR/VR, and industrial anomaly detection: Refool, EGNIA, 3DEG, UTAD.
I serve as a reviewer for international conferences and journals, e.g., CVPR, ICCV, NeurIPS, ICLR, ICML, ACM MM, TPAMI, IJCV, PR, and TVCG.
👏 We are currently looking for self-motivated interns to explore cutting-edge techniques such as Gaussian Splatting and diffusion/flow-matching models (DM/FM). Feel free to contact me if you are interested.