Consecutive-frame latent space normal estimation under sparse point clouds for 4D millimeter-wave radar

Yangxu WU , Xinfang YUAN , Ping CHEN

Journal of Measurement Science and Instrumentation, 2024, Vol. 15(2): 276-284. DOI: 10.62756/jmsi.1674-8042.2024028

Test and detection technology · Research article

Abstract

To address the sparsity of point cloud data and the low spatial-alignment accuracy exhibited by millimeter-wave frequency-modulated continuous-wave (FMCW) radar in outdoor motion scenarios, a lightweight spatial-alignment model was proposed, tailored to point cloud processing across consecutive frames captured by millimeter-wave radar in outdoor motion scenes. Leveraging spatio-temporal graph neural networks (ST-GNNs), the method accurately estimated the hidden spatial normals of adjacent multi-frame point clouds, eliminating the need for position sensors. By transforming the radar point cloud of each frame into a unified observation coordinate system, it enabled multi-frame fusion of 4D point clouds and ensured precise scene alignment. Experimental results demonstrated that the proposed approach not only accurately estimated the spatial attitude of 4D point clouds but also effectively corrected and fused the coordinates of each frame, achieving precise coordinate alignment under motion and vibration. Furthermore, the algorithm significantly increased point cloud imaging density, improved image accuracy and readability, and imaged both static and dynamic targets, providing robust support for the application of millimeter-wave radar in outdoor motion scenes.
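The alignment step described above — rotating each frame's point cloud so that its estimated latent normal coincides with a common reference normal before fusing the frames — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the ST-GNN normal estimator is assumed to run upstream and supply one unit normal per frame, and `rotation_aligning` / `fuse_frames` are hypothetical names introduced here for clarity.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues' formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)                      # rotation axis (scaled by sin of angle)
    c = float(np.dot(a, b))                 # cosine of rotation angle
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])      # skew-symmetric cross-product matrix
    if np.isclose(c, -1.0):
        # Antiparallel vectors: rotate pi about any axis orthogonal to a.
        axis = np.eye(3)[np.argmin(np.abs(a))]
        u = np.cross(a, axis)
        u /= np.linalg.norm(u)
        Ku = np.array([[0.0, -u[2], u[1]],
                       [u[2], 0.0, -u[0]],
                       [-u[1], u[0], 0.0]])
        return np.eye(3) + 2.0 * Ku @ Ku
    return np.eye(3) + K + K @ K / (1.0 + c)

def fuse_frames(frames, normals):
    """Rotate each (N_i, 3) frame so its estimated normal matches frame 0's, then stack."""
    ref = normals[0]
    fused = [np.asarray(frames[0], dtype=float)]
    for pts, n in zip(frames[1:], normals[1:]):
        R = rotation_aligning(n, ref)       # per-frame correction into the reference frame
        fused.append(np.asarray(pts, dtype=float) @ R.T)
    return np.vstack(fused)                 # denser, spatially aligned 4D cloud
```

In practice the paper's pipeline would also need a translational component; this sketch shows only the rotational normal-alignment idea, which is the part the latent normal estimate directly drives.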

Keywords

millimeter-wave radar / 4D point cloud / spatial alignment / latent space normal estimation / spatio-temporal graph neural networks

Cite this article

Yangxu WU, Xinfang YUAN, Ping CHEN. Consecutive-frame latent space normal estimation under sparse point clouds for 4D millimeter-wave radar. Journal of Measurement Science and Instrumentation, 2024, 15(2): 276-284. DOI: 10.62756/jmsi.1674-8042.2024028


