HybridPC: a hybrid implicit-explicit framework for zero-shot point cloud completion
Yongwei MIAO, Yijun LI, Ran FAN, Zhenghui HU, Fuchang LIU
Front. Comput. Sci., 2027, Vol. 21, Issue 4: 2104703
Point cloud completion is a fundamental task in 3D perception and vision. Existing methods typically rely on supervised learning over limited 3D data, and therefore generalize poorly and recover shapes suboptimally when structures are complex or missing regions are large. To overcome these limitations, we propose HybridPC, a novel zero-shot point cloud completion framework that achieves high-fidelity 3D reconstruction without any 3D supervision or task-specific training. HybridPC leverages powerful 2D diffusion priors and a progressive implicit-explicit architecture to handle severe incompleteness and complex geometry. The framework comprises three key stages: 1) Edge-aware neural field initialization: ControlNet-guided Stable Diffusion synthesizes multi-view images conditioned on text prompts and orthographic edge projections of the incomplete point cloud, providing strong shape constraints for initializing a coarse neural radiance field (NeRF) via Score Distillation Sampling (SDS). 2) Multi-view diffusion collaborative completion: A pre-trained multi-view diffusion model enforces cross-view consistency, collaboratively completing the entire NeRF with globally coherent geometry. To reconcile gradient conflicts between ControlNet and the multi-view diffusion model during joint SDS optimization, a PCGrad-based multi-objective optimization strategy balances the structural and semantic guidance, yielding higher-fidelity shape completion. 3) Geometry-aware tetrahedral refinement: The implicit field is converted into a tetrahedral mesh using DMTet, which is further refined via implicit SDS-based normal optimization and explicit geometric constraints on the mesh surface, ensuring structural fidelity to the partial input. Extensive experiments on the ShapeNetPart and Redwood datasets demonstrate that HybridPC outperforms existing supervised and zero-shot methods in both qualitative and quantitative comparisons.
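The edge conditioning in stage 1 can be illustrated with a toy occupancy projection: drop one coordinate axis, rasterize the remaining two into a grid, and keep only silhouette pixels. This is a minimal sketch; `orthographic_edge_map`, the grid resolution, and the 4-neighbor silhouette rule are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def orthographic_edge_map(points, axis=2, res=64):
    """Orthographically project a point cloud along `axis` onto a
    res x res occupancy grid, then keep only silhouette pixels
    (occupied cells with at least one empty 4-neighbor)."""
    uv = np.delete(points, axis, axis=1)                  # drop the projection axis
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    ij = np.rint((uv - lo) / np.maximum(hi - lo, 1e-8) * (res - 1)).astype(int)
    occ = np.zeros((res, res), dtype=bool)
    occ[ij[:, 0], ij[:, 1]] = True
    pad = np.pad(occ, 1)                                  # empty one-pixel border
    interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    return occ & ~interior                                # silhouette edges only
```

A solid slab of points thus projects to its outline, the kind of sparse structural cue that can condition a ControlNet.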
Specifically, HybridPC preserves the input structure more faithfully, completes missing regions more accurately, and generalizes more strongly, with particularly significant gains on real-world scans from the Redwood dataset. Our results demonstrate the strong potential of coupling 2D diffusion priors with 3D geometric modeling for scalable, training-free point cloud completion.
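The PCGrad-style multi-objective strategy in stage 2 resolves conflicts by projecting each task gradient onto the normal plane of any gradient it opposes (negative cosine similarity) before summing. A minimal numpy sketch, assuming two guidance gradients (e.g. structural vs. semantic); the function name and setup are illustrative:

```python
import numpy as np

def pcgrad(grads):
    """PCGrad-style gradient surgery: for each task gradient, subtract
    the component that opposes any other task's gradient, then sum."""
    projected = [g.astype(float).copy() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = g_i @ g_j
            if dot < 0.0:                       # conflict detected
                g_i -= dot / (g_j @ g_j) * g_j  # project onto normal plane
    return np.sum(projected, axis=0)
```

Non-conflicting gradients pass through unchanged, so the surgery only intervenes where structural and semantic guidance would otherwise cancel each other.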
point cloud completion / zero-shot / neural radiance field (NeRF) / stable diffusion / differentiable tetrahedral mesh
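Stages 1 and 3 both optimize geometry through Score Distillation Sampling: noise the rendered image, let a frozen diffusion denoiser predict that noise, and push the residual back through the renderer while skipping the denoiser's Jacobian. A toy sketch, assuming a linear map stands in for the differentiable renderer and an oracle denoiser for the frozen diffusion prior; all names here are illustrative:

```python
import numpy as np

def sds_grad(A, theta, denoiser, t, eps, w=1.0):
    """SDS gradient, toy version: residual (eps_hat - eps) is pushed
    through the renderer only, never through the denoiser."""
    x = A @ theta                                  # "render" (linear toy)
    x_t = np.sqrt(1 - t) * x + np.sqrt(t) * eps    # simplified forward noising
    residual = denoiser(x_t, t) - eps              # eps_hat - eps
    return w * A.T @ residual                      # dx/dtheta = A for a linear map

# Oracle denoiser that "prefers" a fixed target image
target = np.array([1.0, 2.0])
def denoiser(x_t, t):
    return (x_t - np.sqrt(1 - t) * target) / np.sqrt(t)

A = np.eye(2)
theta = np.zeros(2)
eps = np.ones(2)                                   # fixed noise for determinism
for _ in range(100):
    theta -= 0.5 * sds_grad(A, theta, denoiser, t=0.5, eps=eps)
```

Gradient descent on `theta` drives the rendering toward the image the prior prefers, which is the mechanism HybridPC uses to shape both the NeRF and the refined mesh normals.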
Higher Education Press