EAE-Net: effective and efficient X-ray joint detection

Zhichao Wu, Mingxuan Wan, Haohao Bai, Jianxiong Ma, Xinlong Ma

Optoelectronics Letters, 2024, Vol. 20, Issue 10: 629-635. DOI: 10.1007/s11801-024-3129-y

Abstract

The detection and localization of bone joint regions in medical X-ray images are essential for contemporary medical diagnostics. Traditional methods rely on subjective interpretation by physicians, leading to variability and potential errors. Advances in general-purpose object detection have made automated bone joint detection feasible, but applying these algorithms to X-ray images is hampered by the domain gap between natural images and radiographs. To overcome these challenges, a novel framework called effective and efficient network (EAE-Net) is proposed. It incorporates a context augment module (CAM) to leverage global structural information and a ghost bottleneck module (GBM) to reduce redundant features. EAE-Net delivers strong detection performance while striking a balance between accuracy and speed. This advancement improves efficiency, enabling clinicians to focus on the critical aspects of diagnosis and treatment.
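
To illustrate the feature-redundancy idea behind a ghost bottleneck module, the sketch below shows a generic ghost-style block in PyTorch: a small standard convolution produces intrinsic feature maps, a cheap depthwise convolution generates additional "ghost" maps, and the two are concatenated. The class names, channel ratio, and layer arrangement are illustrative assumptions for exposition, not the authors' exact GBM implementation.

# Minimal PyTorch sketch of a ghost-style bottleneck (illustrative only;
# hyperparameters and structure are assumptions, not the published GBM design).
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Generate 'ghost' feature maps with cheap depthwise convolutions."""

    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio          # intrinsic features from a normal conv
        ghost_ch = out_ch - init_ch        # cheap features from a depthwise conv
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),   # depthwise: one filter per channel
            nn.BatchNorm2d(ghost_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        primary = self.primary(x)
        ghost = self.cheap(primary)
        return torch.cat([primary, ghost], dim=1)    # concatenate intrinsic + ghost maps


class GhostBottleneck(nn.Module):
    """Two stacked ghost modules with a residual shortcut (stride-1 case)."""

    def __init__(self, channels, hidden):
        super().__init__()
        self.block = nn.Sequential(
            GhostModule(channels, hidden),
            GhostModule(hidden, channels),
        )

    def forward(self, x):
        return x + self.block(x)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)                # dummy backbone feature map
    print(GhostBottleneck(64, 128)(feat).shape)      # torch.Size([1, 64, 80, 80])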

Cite this article

Zhichao Wu, Mingxuan Wan, Haohao Bai, Jianxiong Ma, Xinlong Ma. EAE-Net: effective and efficient X-ray joint detection. Optoelectronics Letters, 2024, 20(10): 629‒635 https://doi.org/10.1007/s11801-024-3129-y

