Deep Points Consolidation


ACM Transactions on Graphics

(Proceedings of SIGGRAPH Asia 2015)

Shihao Wu1        Hui Huang2*        Minglun Gong3        Matthias Zwicker1        Daniel Cohen-Or4

1University of Bern        2Shenzhen VisuCA Key Lab / SIAT        3Memorial University        4Tel Aviv University


  

Figure 1: The deep points representation (left) is a set of line sections, each with one end (red) on the surface (middle) and the other (blue) on the meso-skeleton (right).

Abstract 
In this paper, we present a consolidation method that is based on a new representation of 3D point sets. The key idea is to augment each surface point into a deep point by associating it with an inner point that resides on the meso-skeleton, which consists of a mixture of skeletal curves and sheets. The deep points representation is a result of a joint optimization applied to both ends of the deep points. The optimization objective is to fairly distribute the end points across the surface and the meso-skeleton, such that the deep point orientations agree with the surface normals. The optimization converges where the inner points form a coherent meso-skeleton, and the surface points are consolidated with the missing regions completed. The strength of this new representation stems from the fact that it is comprised of both local and non-local geometric information. We demonstrate the advantages of the deep points consolidation technique by employing it to consolidate and complete noisy point-sampled geometry with large missing parts. 

API and data: available for download below.
[To reference our software or data in a publication, please include the BibTeX below and a link to this website.]

Video@YouTube


Video@Youku 



Overview 

Figure 2: Deep points consolidation. Given the input point cloud (a) and its initial consolidation result (b), our approach creates deep points by sinking the inner points to form a meso-skeleton (c) and moving the outer points along the surface to complete missing areas (d). The final representation consists of a set of coherent vectors that connect the surface with the meso-skeleton.


Results



Figure 4: The input point cloud (a) contains noise and large missing regions. Applying Poisson surface reconstruction [Kazhdan and Hoppe 2013] to either the input (a) or the WLOP consolidation [Huang et al. 2009] result (c) does not yield satisfactory models; see (b) and (d), respectively. The surface points shown in (e) are consolidated and completed by our dpoints technique, which leads to a much better Poisson surface reconstruction (f). In (c) and (e), the errors of the surface point normals estimated by local PCA are evaluated against the ground truth and color coded (blue means higher error).


Figure 5: A comparison among the Poisson surface reconstructions [Kazhdan and Hoppe 2013] obtained using input points directly (a), ROSA skeleton [Tagliasacchi et al. 2009] (b), L1-medial skeleton [Huang et al. 2013b] (c), and our dpoints consolidation (d). 

Figure 6: Results on standard benchmark 3D scans (a), downloaded from the SHREC 2015 dataset [NIST 2015]. The direct Poisson reconstruction results (b) incorrectly fuse multiple parts together. Using the consolidated dpoints (c & d), the thin and adjacent structures are better preserved.


Comparison 

Figure 7: Handling objects (a) with complicated thin and non-tubular structures. Directly applying Poisson reconstruction to the WLOP result (b) fails to produce satisfactory results (d). Our reconstruction results (e), based on the dpoints consolidation (c), better preserve the thin and non-tubular structures while maintaining the correct connectivity between parts.









Figure 10: Post-processing for reconstructing fine geometric details and sharp features. Due to downsampling, the Poisson reconstruction (d) of the dpoints (c) cannot preserve fine details and sharp features as well as the original shapes (a, b). A post-processing EAR [Huang et al. 2013a] step (e) effectively recovers them (f) by inserting and projecting additional dpoints.



Figure 11: Quantitative evaluation on reconstruction accuracy using virtual scans of a ground truth synthetic model. When a single scan (a) is used, the direct Poisson reconstruction result (inset in (b)) does not resemble the model (shown in (b)). In comparison, the Poisson reconstructed model (inset in (d)) based on dpoints (c) is visually much more accurate. The reconstruction errors, measured using the distances between vertices on the ground truth model and their closest points on the reconstructed surface, are visualized in (b) and (d). The error distributions under clean and noise-corrupted scans are plotted in (e) and (f), respectively. 

Acknowledgments 
We thank the reviewers for their valuable feedback. This work was supported in part by NSFC (61522213, 61379090), 973 Program (2014CB360503), Guangdong Science and Technology Program (2015A030312015, 2014B050502009, 2014TX01X033), Shenzhen VisuCA Key Lab (CXB201104220029A), NSERC (293127) and BSF (2012376). 

BibTex 
@ARTICLE{Dpoints15,
  title = {Deep Points Consolidation},
  author = {Shihao Wu and Hui Huang and Minglun Gong and Matthias Zwicker and Daniel Cohen-Or},
  journal = {ACM Transactions on Graphics (Proc. of SIGGRAPH Asia 2015)},
  volume = {34},
  number = {6},
  pages = {},
  year = {2015},
}

Copyright © 1998-2018 Visual Computing Research Center