An adaptive compensation strategy for sensors based on the degree of degradation

Yanbin Li , Wei Zhang , Zhiguo Zhang , Xiaogang Shi , Ziruo Li , Mingming Zhang , Wenzheng Chi

Biomimetic Intelligence and Robotics ›› 2025, Vol. 5 ›› Issue (4): 100235. DOI: 10.1016/j.birob.2025.100235

Research Article

Abstract

Simultaneous Localization and Mapping (SLAM) is widely used to solve the localization problem of unmanned devices such as robots. However, in degraded environments, the accuracy of SLAM drops sharply due to the lack of constraining features. In this article, we propose a deep learning-based adaptive compensation strategy for sensors. First, we create a dataset dedicated to training a degradation detection model, which contains coordinate data of particle swarms with different distributional features, and endow the model with degradation detection capability through supervised learning. Second, we design a lightweight network model with short computation time and good accuracy for real-time degradation detection. Finally, we design an adaptive compensation strategy for sensors based on the degree of degradation, in which the SLAM system assigns different weights to the sensor information according to the degree of degradation reported by the model, adjusting the contribution of each sensor during pose optimization. Simulation and real-world experiments demonstrate that the robustness of the improved SLAM in degraded environments is significantly enhanced, and the accuracy of localization and mapping is improved.
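The abstract's pipeline pairs a lightweight degradation detector, trained on particle-swarm coordinates, with a degradation-weighted fusion step in pose optimization. The Python sketch below is an illustration of that idea, not the authors' released code: the spread features, the multilayer-perceptron layer sizes, and the convex blending rule that stands in for the paper's sensor re-weighting are all our assumptions.

```python
# Illustrative sketch (not the paper's implementation): summarize a particle
# swarm by spread statistics, map them through a small MLP to a degradation
# degree in [0, 1], and use that degree to re-weight two pose sources.
import numpy as np
import torch
import torch.nn as nn

def swarm_features(particles: np.ndarray) -> torch.Tensor:
    """Summarize an (N, 2) particle swarm by its spread statistics.

    In a degenerate corridor the swarm stretches along the unconstrained
    direction, so covariance eigenvalues are informative features.
    """
    cov = np.cov(particles.T)                      # 2x2 covariance of x, y
    eigvals = np.sort(np.linalg.eigvalsh(cov))     # ascending eigenvalues
    anisotropy = eigvals[1] / (eigvals[0] + 1e-9)  # elongation of the swarm
    feats = np.array([eigvals[0], eigvals[1], anisotropy], dtype=np.float32)
    return torch.from_numpy(feats)

# Lightweight MLP: 3 spread features -> degradation degree in [0, 1].
# Layer sizes are placeholders; the paper's lightweight model may differ.
detector = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

def fuse_pose(lidar_pose: np.ndarray, odom_pose: np.ndarray,
              degradation: float) -> np.ndarray:
    """Blend scan-matching and odometry poses by the degradation degree.

    A simple convex combination stands in for the paper's weighting of
    sensor terms in pose optimization: the more degraded the scan match,
    the more weight shifts to the complementary sensor.
    """
    return (1.0 - degradation) * lidar_pose + degradation * odom_pose

# Usage on a synthetic elongated swarm, as arises in a featureless corridor
# (untrained weights yield an arbitrary degree; training would calibrate it).
particles = np.random.randn(500, 2) * np.array([5.0, 0.1])
with torch.no_grad():
    degree = detector(swarm_features(particles)).item()
pose = fuse_pose(np.array([1.0, 0.0]), np.array([1.2, 0.05]), degree)
```

In a corridor-like degenerate scene the swarm elongates along the unconstrained axis, so a trained detector would push the degree toward 1 there, shifting weight away from the scan-matching estimate toward the complementary sensor.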

Keywords

Anti-degradation / Multilayer perceptron / Lidar SLAM / Particle filter

Cite this article

Yanbin Li, Wei Zhang, Zhiguo Zhang, Xiaogang Shi, Ziruo Li, Mingming Zhang, Wenzheng Chi. An adaptive compensation strategy for sensors based on the degree of degradation. Biomimetic Intelligence and Robotics, 2025, 5(4): 100235. DOI: 10.1016/j.birob.2025.100235


CRediT authorship contribution statement

Yanbin Li: Writing - review & editing, Writing - original draft, Visualization, Validation, Supervision, Software, Resources, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Wei Zhang: Writing - original draft, Funding acquisition, Conceptualization. Zhiguo Zhang: Supervision, Funding acquisition. Xiaogang Shi: Project administration, Methodology. Ziruo Li: Software, Data curation. Mingming Zhang: Software, Methodology. Wenzheng Chi: Writing - review & editing, Writing - original draft, Methodology.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (62273246) and the Science and Technology Research Foundation of State Grid Co., Ltd. (5700-202318270A-1-1-ZN).

