GLF-Segformer: an improved Segformer model integrating local and global information for skin cancer image segmentation

Xiangyu Deng, Yapeng Zheng

Optoelectronics Letters ›› 2026, Vol. 22 ›› Issue (4): 236-242. DOI: 10.1007/s11801-026-4282-2
Original Paper

Abstract

Accurate segmentation of skin cancer in dermoscopy images is crucial for clinical treatment, yet the pervasive interfering noise in such images makes accurate segmentation challenging. To address this, this paper proposes GLF-Segformer, an improved Segformer model. The model adds a polarized self-attention (PSA) module and an R-convolution and attention fusion module (R-CAFM) to the Segformer encoder, enhancing its ability to capture local information and enabling effective fusion of local and global information. The decoder employs a novel two-stage hybrid up-sampling scheme to reduce information loss. In addition, a new hybrid loss function is designed to further improve segmentation accuracy at complex boundaries. Experimental results show that GLF-Segformer achieves mean intersection over union (mIoU) scores of 90.73% and 89.85% on the ISIC2017 and ISIC2018 standard datasets, respectively, outperforming the comparison algorithms.
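The abstract does not detail the hybrid loss function, which is specified only in the full text. A common region-plus-pixel combination for skin-lesion segmentation pairs a Dice term with a focal term; the sketch below illustrates that general pattern, with the function names, the weighting parameter `alpha`, and the focusing parameter `gamma` being illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps: 0 = perfect overlap, 1 = none."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: down-weights easy pixels so hard (often boundary)
    pixels dominate the gradient."""
    pred = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, pred, 1.0 - pred)  # prob. of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted sum of a pixel-wise (focal) and a region-wise (Dice) term.
    alpha is a hypothetical balancing weight."""
    return alpha * focal_loss(pred, target) + (1.0 - alpha) * dice_loss(pred, target)
```

The Dice term directly optimizes region overlap (closely related to mIoU), while the focal term concentrates learning on ambiguous pixels, which is why such combinations are often credited with sharper behavior at complex boundaries.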


Cite this article

Xiangyu Deng, Yapeng Zheng. GLF-Segformer: an improved Segformer model integrating local and global information for skin cancer image segmentation. Optoelectronics Letters, 2026, 22(4): 236-242. DOI: 10.1007/s11801-026-4282-2



RIGHTS & PERMISSIONS

Tianjin University of Technology
