DtCFS-Net: A Dual-threshold Coding Feature Sampling Network Method for Maritime Targets Visual Saliency Detection and Application

Bo Shi, Tianyu Cao, Haifan Su, Xuanzhi Zhu, Hong Zhao, Qiqi Ge

Journal of Marine Science and Application: 1–17. DOI: 10.1007/s11804-025-00760-y

Research Article


Abstract

Sea surface image detection is crucial for accurately identifying maritime targets in complex environments, supporting marine environmental perception and engineering applications. This study proposes a dual-threshold coding feature sampling network (DtCFS-Net) inspired by human visual perception. By constructing an image saliency matrix based on visual attention parameters and integrating saturation, hue, and contour features, the proposed model employs layered encoding, adaptive Gaussian filtering, and coefficient transformation to enhance multiscale saliency detection. A dual-threshold function is introduced to suppress non-maximum high-frequency noise and refine contour extraction. Experimental results show that DtCFS-Net outperforms existing saliency-based feature extraction algorithms, achieving 63.6% and 18.8% improvements in correlation coefficient and normalized scan-path saliency, respectively, along with superior performance in area under the curve (AUC), shuffled AUC, peak signal-to-noise ratio, and structural similarity index measure. When integrated into a backbone network, the method effectively reduces missed detections and false alarms. Compared with YOLOv10, the proposed model improves mAP50 and mAP50–95 by 4.71% and 1.23%, respectively. These findings underscore its potential in marine engineering, where the integration of multimodal sensor data can enhance navigation, positioning, and path planning for unmanned vessels.

Keywords

Maritime target detection / Visual saliency enhancement / DtCFS-Net / USV navigation application
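The abstract reports gains in correlation coefficient (CC) and normalized scan-path saliency (NSS). These are standard saliency-evaluation metrics; a sketch under their usual definitions (not the paper's evaluation code) is:

```python
import numpy as np


def correlation_coefficient(sal, gt):
    """Pearson CC between a predicted saliency map and a ground-truth density map."""
    s = (sal - sal.mean()) / (sal.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((s * g).mean())


def normalized_scanpath_saliency(sal, fixations):
    """NSS: mean z-scored saliency value at binary fixation locations."""
    z = (sal - sal.mean()) / (sal.std() + 1e-12)
    return float(z[fixations.astype(bool)].mean())
```

CC measures linear agreement between the full maps, while NSS rewards high predicted saliency exactly at human fixation points, so the two metrics probe complementary aspects of a saliency model.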

Cite this article

Download citation ▾
Bo Shi, Tianyu Cao, Haifan Su, Xuanzhi Zhu, Hong Zhao, Qiqi Ge. DtCFS-Net: A Dual-threshold Coding Feature Sampling Network Method for Maritime Targets Visual Saliency Detection and Application. Journal of Marine Science and Application: 1–17. DOI: 10.1007/s11804-025-00760-y



RIGHTS & PERMISSIONS

Harbin Engineering University and Springer-Verlag GmbH Germany, part of Springer Nature
