Mixed reality head mounted displays for enhanced indoor point cloud segmentation with virtual seeds

Juan C. Navares-Vázquez, Pedro Arias, Lucía Díaz-Vilariño, Jesús Balado

Resilient Cities and Structures ›› 2024, Vol. 3 ›› Issue (3): 43-52. DOI: 10.1016/j.rcns.2024.06.005

Research article

Abstract

Mixed Reality (MR) Head Mounted Displays (HMDs) offer a hitherto underutilized set of advantages over conventional 3D scanners. These benefits, inherent to MR-HMDs albeit not originally intended for such applications, encompass freedom of hand movement, hand tracking capabilities, and real-time mesh visualization. This study leverages these attributes to enhance the indoor scanning process. The primary innovation lies in the conceptualization of manually positioned MR virtual seeds for indoor point cloud segmentation via a region-growing approach. The proposed methodology is implemented on the HoloLens 2 platform. An application is designed to enable the remote placement of virtual tags based on the user's visual focus on the MR-HMD display. This non-intrusive interface is further enriched with expedited tag saving and deletion functionalities, as well as augmented tag visualization by overlaying tags on real-world objects. To assess the practicality of the proposed method, a comprehensive real-world case study spanning an area of 330 m² is conducted. The survey demonstrates notable efficiency: 20 virtual tags are swiftly deployed, each requiring a mere 2 s for precise positioning. Subsequently, these virtual tags are employed as seeds in a region-growing algorithm for point cloud segmentation. The accuracy of virtual tag positioning is high, with an average error of 2.4 ± 1.8 cm. Importantly, the user experience is significantly enhanced, leading to improved seed positioning and, consequently, more accurate final segmentation results.
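The seeded region-growing step the abstract describes can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name, the synthetic two-plane scene, and the radius and angle thresholds are all assumptions chosen for the sketch. A virtual tag would supply the seed index by snapping the tag's 3D position to the nearest scanned point.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, seed_idx, radius=0.1, angle_thresh_deg=10.0):
    """Grow a region from a seed point: a neighbour joins the region when its
    normal deviates from the current point's normal by less than the angular
    threshold. Returns the sorted indices of the segmented region."""
    tree = cKDTree(points)
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    region = {seed_idx}
    frontier = [seed_idx]
    while frontier:
        i = frontier.pop()
        for j in tree.query_ball_point(points[i], radius):
            if j not in region and abs(np.dot(normals[i], normals[j])) >= cos_thresh:
                region.add(j)
                frontier.append(j)
    return sorted(region)

# Toy scene: a horizontal "floor" and a vertical "wall" meeting at x = 1.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), np.zeros(500)])
wall = np.column_stack([np.ones(500), rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)])
points = np.vstack([floor, wall])
normals = np.vstack([np.tile([0.0, 0.0, 1.0], (500, 1)),
                     np.tile([1.0, 0.0, 0.0], (500, 1))])

seed = 0  # stand-in for a virtual tag dropped on the floor
segment = region_grow(points, normals, seed, radius=0.15)
print(len(segment))  # the region spreads over floor points but not the wall
```

In the paper's pipeline the thresholds would be tuned to the HoloLens 2 mesh density; the sketch only shows why a well-placed seed matters, since the grown region inherits the surface the seed lands on.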

Keywords

Augmented reality / eXtended Reality / Handheld mobile laser scanning / Region growing / Semantic segmentation

Cite this article

Juan C. Navares-Vázquez, Pedro Arias, Lucía Díaz-Vilariño, Jesús Balado. Mixed reality head mounted displays for enhanced indoor point cloud segmentation with virtual seeds. Resilient Cities and Structures, 2024, 3(3): 43-52. DOI: 10.1016/j.rcns.2024.06.005


Relevance for resilience

This work carries substantial relevance to the resilience of urban environments and structures, notably in the context of city planning and building management. By harnessing the capabilities of Mixed Reality Head Mounted Displays (MR-HMDs) for indoor scanning and point cloud segmentation, this research paves the way for more efficient and accurate methods of capturing critical structural information. In the realm of urban resilience, the ability to assess the state of buildings and infrastructure rapidly and precisely is paramount. MR-HMD technology enables swift inspections and monitoring, crucial in the aftermath of natural disasters or unforeseen events, facilitating quicker response times and decision-making processes. Moreover, the non-intrusive nature of the MR-HMD interface, coupled with user-friendly tag placement, ensures minimal disruption to the survey process, a crucial factor in urban settings where time is of the essence during emergency assessments. Furthermore, the potential applications extend to the maintenance and management of urban structures. As cities grapple with the challenges of ageing infrastructure and the need for sustainable urban development, having an efficient and accurate tool for surveying and monitoring becomes indispensable.

CRediT authorship contribution statement

Juan C. Navares-Vázquez: Methodology, Software. Pedro Arias: Funding acquisition, Project administration. Lucía Díaz-Vilariño: Funding acquisition, Project administration, Resources. Jesús Balado: Conceptualization, Supervision, Writing - original draft.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was partially supported by grants RYC2022-038100-I and RYC2020-029193-I, funded by MCIN/AEI/10.13039/501100011033 and FSE ‘El FSE invierte en tu futuro’. The paper is a result of the project PID2021-123475OA-I00, funded by MCIN/AEI/10.13039/501100011033/FEDER, UE. This paper was carried out in the framework of the SUM4Re project (Creating materials banks from digital urban mining), which has received funding from the Horizon Europe research and innovation programme under grant agreement no. 101129961. Funded by the European Union. Views and opinions expressed are those of the authors only and do not necessarily reflect those of the EU or HADEA. Neither the EU nor the granting authority can be held responsible for them.

