Methodology and workflow for road lane recognition based on millimeter-wave radar point clouds
Yunqian Xu
Smart Construction and Sustainable Cities ›› 2025, Vol. 3 ›› Issue (1) : 20
Accurate road lane detection is critical for intelligent transportation, but existing camera- and LiDAR-based methods face challenges: LiDAR is expensive, and cameras are sensitive to lighting and weather conditions. This study proposes a method based on millimeter-wave radar data, which is cost-effective and robust under varied conditions. The work applies an optical flow algorithm to compute point correspondences in radar point clouds, generates lane line bitmaps, and fits polygonal lane regions. The approach effectively handles nonlinear lanes and noisy radar data. Experiments with data from multiple radar manufacturers, collected at different intersections and in different traffic scenarios, demonstrate strong robustness and reliability. The results show that the method is practical for real-time traffic management, providing a reliable alternative to traditional sensors.
Millimeter-wave radar technology / Lane detection and tracking / Optical flow analysis / Urban planning / Traffic flow optimization
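To make the pipeline in the abstract concrete, below is a minimal, hypothetical sketch of two of its stages: accumulating radar point detections into a lane-occupancy bitmap, and fitting a polygonal lane region to the occupied cells. The optical-flow correspondence step described in the paper is not reproduced here; accumulation over frames stands in for it, polygon fitting uses a standard convex hull (monotone chain) rather than the paper's exact fitting method, and all grid sizes, thresholds, and frame data are illustrative assumptions.

```python
# Hypothetical sketch of lane-region extraction from radar point clouds.
# Stage 1: accumulate per-frame detections into an occupancy bitmap.
# Stage 2: threshold out rarely-hit cells (noise suppression).
# Stage 3: fit a polygonal lane region via a convex hull (monotone chain).
# This is NOT the paper's implementation; parameters are illustrative.

def accumulate_bitmap(frames, width=100, height=100):
    """Count how often each grid cell is hit by a radar point."""
    grid = [[0] * width for _ in range(height)]
    for points in frames:  # each frame: list of (x, y) in grid coordinates
        for x, y in points:
            if 0 <= x < width and 0 <= y < height:
                grid[int(y)][int(x)] += 1
    return grid

def occupied_cells(grid, min_hits=2):
    """Keep cells seen at least min_hits times across frames."""
    return [(x, y) for y, row in enumerate(grid)
            for x, c in enumerate(row) if c >= min_hits]

def convex_hull(points):
    """Monotone-chain convex hull; returns vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Toy usage: two frames of repeated detections bounding a lane patch.
frames = [[(10, 10), (20, 25), (30, 10)],
          [(10, 10), (20, 25), (30, 10)]]
grid = accumulate_bitmap(frames)
cells = occupied_cells(grid, min_hits=2)
polygon = convex_hull(cells)  # triangle over the three stable cells
```

In practice the real method would replace the simple hit-count accumulation with optical-flow-derived point correspondences, which lets moving-vehicle trajectories trace lane shapes even when individual frames are sparse or noisy.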