Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction

CVPR 2024

¹State Key Laboratory of CAD&CG, Zhejiang University   ²ByteDance Inc.

Results on the D-NeRF Synthetic Dataset

Results on the HyperNeRF Dataset

Results on the NeRF-DS Dataset

Compared to HyperNeRF, the camera poses in the NeRF-DS dataset are more accurate.
This provides robust evidence of the efficacy of our approach.

Results on View-Synthesis Tasks, Part 1

Results on View-Synthesis Tasks, Part 2

Real-Time Viewer

Pipeline of Deformable 3D Gaussians

Abstract

Implicit neural representations have opened up new avenues for dynamic scene reconstruction and rendering. Nonetheless, state-of-the-art dynamic neural rendering methods rely heavily on these implicit representations, which frequently struggle to capture the intricate details of objects in the scene. Furthermore, implicit methods generally fail to achieve real-time rendering in dynamic scenes, limiting their use in a wide range of tasks. To address these issues, we propose a deformable 3D Gaussian splatting method that reconstructs scenes using explicit 3D Gaussians and learns the Gaussians in canonical space with a deformation field to model monocular dynamic scenes. We also introduce a smoothing training mechanism with no extra overhead that mitigates the impact of inaccurate poses in real-world datasets on the smoothness of time-interpolation tasks. Through differentiable Gaussian rasterization, the deformable 3D Gaussians achieve not only higher rendering quality but also real-time rendering speed. Experiments show that our method significantly outperforms existing methods in both rendering quality and speed, making it well-suited for tasks such as novel-view synthesis, time synthesis, and real-time rendering. We plan to release our code and data soon.
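
The abstract outlines two components concrete enough to sketch: a deformation field that maps a canonical Gaussian center and a timestamp to offsets in position, rotation, and scale, and a smoothing mechanism that perturbs the time input during training. The following is a minimal PyTorch sketch under those assumptions; the names (DeformationField, smoothed_time, positional_encoding), layer widths, encoding frequencies, and the hyperparameters beta and tau are illustrative choices, not the authors' released implementation.

import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int) -> torch.Tensor:
    """NeRF-style sinusoidal encoding applied to each input dimension."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)
    angles = x[..., None] * freqs                      # (..., D, F)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(start_dim=-2)


class DeformationField(nn.Module):
    """MLP mapping (canonical center, time) -> offsets for position,
    rotation (quaternion), and scale of each Gaussian."""

    def __init__(self, pos_freqs: int = 10, time_freqs: int = 6, width: int = 256):
        super().__init__()
        in_dim = 3 * 2 * pos_freqs + 2 * time_freqs   # encoded xyz + encoded t
        self.pos_freqs, self.time_freqs = pos_freqs, time_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(inplace=True),
            nn.Linear(width, width), nn.ReLU(inplace=True),
            nn.Linear(width, 3 + 4 + 3),              # dx (3), dr (4), ds (3)
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor):
        # Stop gradients into the canonical centers (an assumption here), so
        # the deformation field does not fight the Gaussian optimization.
        h = torch.cat([
            positional_encoding(xyz.detach(), self.pos_freqs),
            positional_encoding(t, self.time_freqs),
        ], dim=-1)
        d = self.mlp(h)
        return d[..., :3], d[..., 3:7], d[..., 7:]


def smoothed_time(t: torch.Tensor, step: int,
                  tau: int = 20_000, beta: float = 0.1) -> torch.Tensor:
    """Smoothing mechanism (sketch): add linearly decaying Gaussian noise to
    the time input so early training is less sensitive to inaccurate poses."""
    anneal = max(0.0, 1.0 - step / tau)
    return t + anneal * beta * torch.randn_like(t)


# Illustrative usage: query per-Gaussian offsets at a normalized timestamp.
field = DeformationField()
xyz = torch.rand(1024, 3)                             # canonical Gaussian centers
t = torch.full((1024, 1), 0.5)                        # normalized time in [0, 1]
dx, dr, ds = field(xyz, smoothed_time(t, step=5_000))

The offsets would then be added to the canonical Gaussian parameters before rasterization; the differentiable rasterizer itself is unchanged from standard 3D Gaussian splatting.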

BibTeX

@article{yang2023deformable3dgs,
    title={Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction},
    author={Yang, Ziyi and Gao, Xinyu and Zhou, Wen and Jiao, Shaohui and Zhang, Yuqing and Jin, Xiaogang},
    journal={arXiv preprint arXiv:2309.13101},
    year={2023}
}