Hello! 👋 I am a Postdoctoral AI Research Scientist at Meta (Codec Avatars Lab), where I am fortunate to work with Shunsuke Saito. Previously, I obtained my Ph.D. at KAIST advised by Tae-Kyun (T-K) Kim and Minhyuk Sung. In 2024, I visited Meta Reality Labs as a research scientist intern, hosted by Jason Saragih.
The goal of my research is to enable artificial intelligence to visually understand and simulate ourselves and our interactions with the environment. Lately, my focus has been on reconstructing and generating hand interactions, but my research interests are not limited to that. 🙂
jihyunlee [at] meta.com
Integrated Master's & Ph.D. in Computer Science, KAIST
Sep. 2020 - Aug. 2025
MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics
NeurIPS 2025
ORIGEN: Zero-Shot 3D Orientation Grounding in Text-to-Image Generation
NeurIPS 2025
Generative Modeling of Shape-Dependent Self-Contact Human Poses
ICCV 2025
Affordance-Guided Diffusion Prior for 3D Hand Reconstruction
ICCVW 2025
REWIND: Real-Time Egocentric Whole-Body Motion Diffusion with Exemplar-Based Identity Conditioning
CVPR 2025
Dense Hand-Object(HO) GraspNet with Full Grasping Taxonomy and Dynamics
ECCV 2024
InterHandGen: Two-Hand Interaction Generation via Cascaded Reverse Diffusion
CVPR 2024
FourierHandFlow: Neural 4D Hand Representation Using Fourier Query Flow
NeurIPS 2023
Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes
CVPR 2023
Contrastive Knowledge Distillation for Anomaly Detection in Multi-Illumination/Focus Display Images
MVA 2023 (oral)
Written as a report on an industrial research project with Samsung Display
Pop-Out Motion: 3D-Aware Image Deformation via Learning Shape Laplacian
CVPR 2022