Location monitoring approach of underground pipelines using time-sequential images

Haoruo Xu, Lei He, Yuyang Chu, Junchen He, Huaiguang Xiao, Chengmeng Shao

Underground Space, 2024, Vol. 15, Issue 2: 59-75. DOI: 10.1016/j.undsp.2023.08.003

Research article

Abstract

Location monitoring of underground pipelines is of great significance: it supports the effective management and maintenance of the pipelines and facilitates the planning of nearby projects, preventing damage to the pipelines. However, data on the locations of underground pipelines are currently seriously lacking. This paper proposes an image-based approach for monitoring the locations of underground pipelines by combining deep learning and visual-based reconstruction. The proposed approach builds a monitoring model of underground pipelines and characterizes their locations through their centroid curves. Its advantages are: (1) simplicity: it only requires time-sequential images of the inner walls of underground pipelines; (2) clarity: the location model and the location curve of underground pipelines can be provided quickly; (3) robustness: it copes with common problems in underground pipelines, such as light variations and small viewing angles. The result is a lightweight approach for monitoring the locations of underground pipelines. Its effectiveness has been validated through laboratory simulation experiments, demonstrating accuracy at the millimeter level.
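To illustrate the centroid-curve idea described in the abstract, the following is a minimal sketch, not the authors' implementation, of how a pipeline centroid curve could be extracted once a point cloud of the pipe's inner wall has been reconstructed from time-sequential images. The function and parameters (centroid_curve, n_slices, the synthetic test pipe) are hypothetical: the cloud is sliced along its principal axis and the points in each slice are averaged.

```python
# Hypothetical sketch: estimate a pipeline centroid curve from a reconstructed
# point cloud of the pipe's inner wall (not the paper's code).
import numpy as np

def centroid_curve(points: np.ndarray, n_slices: int = 50) -> np.ndarray:
    """points: (N, 3) reconstructed inner-wall points.
    Returns an (n_slices, 3) polyline approximating the centroid curve."""
    # Principal axis via SVD/PCA: the direction of largest spread
    # approximates the pipeline's longitudinal axis.
    mean = points.mean(axis=0)
    centered = points - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                                  # unit vector along the pipe

    # Project each point onto the axis and bin the projections into slices.
    t = centered @ axis
    edges = np.linspace(t.min(), t.max() + 1e-9, n_slices + 1)
    centroids = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t < hi)
        if mask.any():
            centroids.append(points[mask].mean(axis=0))  # slice centroid
    return np.asarray(centroids)

# Synthetic bent pipe standing in for a reconstructed model
# (0.1 m radius, axis deflecting slowly in y):
s = np.linspace(0.0, 6.0, 2000)
theta = np.random.uniform(0.0, 2.0 * np.pi, s.size)
wall = np.c_[s, 0.05 * np.sin(s), np.zeros_like(s)]
wall += 0.1 * np.c_[np.zeros_like(s), np.cos(theta), np.sin(theta)]
curve = centroid_curve(wall)
print(curve.shape)  # ~50-point polyline that recovers the deflecting axis
```

In the paper's setting the point cloud would come from the visual-based reconstruction of the inner-wall images; the slicing resolution trades smoothness of the recovered curve against sensitivity to local deviations of the pipeline.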

Keywords

Underground pipelines / Location monitoring / Time-sequential images / Visual-based reconstruction / Deep learning

Cite this article

Haoruo Xu, Lei He, Yuyang Chu, Junchen He, Huaiguang Xiao, Chengmeng Shao. Location monitoring approach of underground pipelines using time-sequential images. Underground Space, 2024, 15(2): 59-75. DOI: 10.1016/j.undsp.2023.08.003

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

The project was supported by the Fundamental Research Funds for the Central Universities (Grant No. 2242023K5006).

