Deep Learning in Digital Breast Tomosynthesis: Current Status, Challenges, and Future Trends

Ruoyun Wang, Fanxuan Chen, Haoman Chen, Chenxing Lin, Jincen Shuai, Yutong Wu, Lixiang Ma, Xiaoqu Hu, Min Wu, Jin Wang, Qi Zhao, Jianwei Shuai, Jingye Pan

MedComm ›› 2025, Vol. 6 ›› Issue (6) : e70247

DOI: 10.1002/mco2.70247
REVIEW



Abstract

The high-resolution three-dimensional (3D) images generated by digital breast tomosynthesis (DBT) in breast cancer screening offer new possibilities for early diagnosis, which is increasingly important as breast cancer incidence rises. However, DBT also presents challenges: reduced performance in dense breasts, higher false-positive rates, slightly higher radiation doses, and longer reading times. Deep learning (DL) has been shown to improve both the processing efficiency and the diagnostic accuracy of DBT images. This article reviews the application and outlook of DL in DBT-based breast cancer screening. First, the fundamentals and challenges of DBT technology are introduced. The applications of DL in DBT are then grouped into three categories: diagnostic classification of breast diseases, lesion segmentation and detection, and medical image generation. Current public mammography databases are also summarized in detail. Finally, the paper analyzes the main challenges in applying DL techniques to DBT, such as the lack of public datasets and difficulties in model training, and proposes directions for future research, including large language models, multisource domain transfer, and data augmentation, to encourage innovative applications of DL in medical imaging.
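The abstract names data augmentation as one direction for coping with the scarcity of public DBT datasets. As a rough, hypothetical illustration (not from the review itself), a NumPy-only sketch of simple geometric augmentation — expanding one 2D DBT slice into its flip and 90-degree-rotation variants — might look like:

```python
import numpy as np

def augment_slice(img: np.ndarray) -> list[np.ndarray]:
    """Return simple geometric variants of a 2D image slice:
    the original, horizontal/vertical flips, and 90-degree rotations.
    A toy stand-in for the richer augmentation pipelines used in practice."""
    return [
        img,
        np.fliplr(img),    # horizontal flip
        np.flipud(img),    # vertical flip
        np.rot90(img, 1),  # rotate 90 degrees counter-clockwise
        np.rot90(img, 2),  # rotate 180 degrees
        np.rot90(img, 3),  # rotate 270 degrees
    ]

# Example: augment a toy 4x4 "slice" (a real DBT slice would be far larger)
slice_ = np.arange(16, dtype=np.float32).reshape(4, 4)
augmented = augment_slice(slice_)
print(len(augmented))  # 6 variants per input slice
```

Real pipelines typically add random crops, intensity jitter, and elastic deformations, and must take care that label-relevant structures (e.g., lesion locations) are transformed consistently with the image.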

Keywords

early breast cancer screening / digital breast tomosynthesis / deep learning / public database / medical image analysis

Cite this article

Ruoyun Wang, Fanxuan Chen, Haoman Chen, Chenxing Lin, Jincen Shuai, Yutong Wu, Lixiang Ma, Xiaoqu Hu, Min Wu, Jin Wang, Qi Zhao, Jianwei Shuai, Jingye Pan. Deep Learning in Digital Breast Tomosynthesis: Current Status, Challenges, and Future Trends. MedComm, 2025, 6(6): e70247. DOI: 10.1002/mco2.70247


References

[1]

S. Lei, R. Zheng, S. Zhang, et al., “Global Patterns of Breast Cancer Incidence and Mortality: A Population-based Cancer Registry Data Analysis From 2000 to 2020,” Cancer Communications (London) 41, no. 11 (2021): 1183-1194.

[2]

A. Jemal, F. Bray, M. M. Center, J. Ferlay, E. Ward, and D. Forman, “Global Cancer Statistics,” CA: A Cancer Journal for Clinicians 61, no. 2 (2011): 69-90.

[3]

F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global Cancer Statistics 2018: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries,” CA: A Cancer Journal for Clinicians 68, no. 6 (2018): 394-424.

[4]

J. Huang, P. S. Chan, V. Lok, et al., “Global Incidence and Mortality of Breast Cancer: A Trend Analysis,” Aging (Albany NY) 13, no. 4 (2021): 5748-5803.

[5]

F. Bray, M. Laversanne, H. Sung, et al., “Global Cancer Statistics 2022: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries,” CA: A Cancer Journal for Clinicians 74, no. 3 (2024): 229-263.

[6]

R. L. Siegel, K. D. Miller, H. E. Fuchs, and A. Jemal, “Cancer Statistics, 2022,” CA: A Cancer Journal for Clinicians 72, no. 1 (2022): 7-33.

[7]

S. Chen, Z. Cao, K. Prettner, et al., “Estimates and Projections of the Global Economic Cost of 29 Cancers in 204 Countries and Territories from 2020 to 2050,” JAMA Oncology 9, no. 4 (2023): 465-472.

[8]

N. Zacharakis, H. Chinnasamy, M. Black, et al., “Immune Recognition of Somatic Mutations Leading to Complete Durable Regression in Metastatic Breast Cancer,” Nature Medicine 24, no. 6 (2018): 724-730.

[9]

L. Chen, L. Yang, L. Yao, et al., “Characterization of PIK3CA and PIK3R1 Somatic Mutations in Chinese Breast Cancer Patients,” Nature Communications 9, no. 1 (2018): 1357.

[10]

L. Garcia-Martinez, Y. Zhang, Y. Nakata, H. L. Chan, and L. Morey, “Epigenetic Mechanisms in Breast Cancer Therapy and Resistance,” Nature Communications 12, no. 1 (2021): 1786.

[11]

Breast Cancer Association C, L. Dorling, S. Carvalho, et al., Breast Cancer Association C, “Breast Cancer Risk Genes—Association Analysis in More Than 113,000 Women,” New England Journal of Medicine 384, no. 5 (2021): 428-439.

[12]

S. S. Buys, J. F. Sandbach, A. Gammon, et al., “A Study of Over 35,000 Women With Breast Cancer Tested With a 25-gene Panel of Hereditary Cancer Genes,” Cancer 123, no. 10 (2017): 1721-1730.

[13]

C. Hu, S. N. Hart, R. Gnanaolivu, et al., “A Population-Based Study of Genes Previously Implicated in Breast Cancer,” New England Journal of Medicine 384, no. 5 (2021): 440-451.

[14]

J. Sun, H. Meng, L. Yao, et al., “Germline Mutations in Cancer Susceptibility Genes in a Large Series of Unselected Breast Cancer Patients,” Clinical Cancer Research 23, no. 20 (2017): 6113-6119.

[15]

J. D. Yager and N. E. Davidson, “Estrogen Carcinogenesis in Breast Cancer,” New England Journal of Medicine 354, no. 3 (2006): 270-282.

[16]

American College of Obstetricians and Gynecologists. “Hereditary Cancer Syndromes and Risk Assessment: ACOG COMMITTEE OPINION SUMMARY, Number 793,” Obstetrics and Gynecology 2019; 134(6): 1366-1367.

[17]

B. Xia, Q. Sheng, K. Nakanishi, et al., “Control of BRCA2 Cellular and Clinical Functions by a Nuclear Partner, PALB2,” Molecular Cell 22, no. 6 (2006): 719-729.

[18]

A. McTiernan, “Behavioral Risk Factors in Breast Cancer: Can Risk be Modified?,” The Oncologist 8, no. 4 (2003): 326-334.

[19]

C. Coleman, “Early Detection and Screening for Breast cancer,” Seminars in Oncology Nursing. (Elsevier, 2017): 141-155.

[20]

H. Qiu, S. Cao, and R. Xu, “Cancer Incidence, Mortality, and Burden in China: A Time-trend Analysis and Comparison With the United States and United Kingdom Based on the Global Epidemiological Data Released in 2020,” Cancer Communications (London) 41, no. 10 (2021): 1037-1048.

[21]

I. Sechopoulos, J. Teuwen, and R. Mann, “Artificial Intelligence for Breast Cancer Detection in Mammography and Digital Breast Tomosynthesis: State of the Art,” Seminars in Cancer Biology 72 (2021): 214-225.

[22]

A. Chong, S. P. Weinstein, E. S. McDonald, and E. F. Conant, “Digital Breast Tomosynthesis: Concepts and Clinical Practice,” Radiology 292, no. 1 (2019): 1-14.

[23]

L. R. Cochon, C. S. Giess, and R. Khorasani, “Comparing Diagnostic Performance of Digital Breast Tomosynthesis and Full-Field Digital Mammography,” Journal of the American College of Radiology 17, no. 8 (2020): 999-1003.

[24]

E. O. Cohen, O. O. Weaver, H. H. Tso, K. E. Gerlach, and J. W. T. Leung, “Breast Cancer Screening via Digital Mammography, Synthetic Mammography, and Tomosynthesis,” American Journal of Preventive Medicine 58, no. 3 (2020): 470-472.

[25]

X. Qian, J. Pei, H. Zheng, et al., “Prospective Assessment of Breast Cancer Risk From Multimodal Multiview Ultrasound Images via Clinically Applicable Deep Learning,” Nature Biomedical Engineering 5, no. 6 (2021): 522-532.

[26]

T. Hovda, A. S. Holen, K. Lang, et al., “Interval and Consecutive Round Breast Cancer After Digital Breast Tomosynthesis and Synthetic 2D Mammography versus Standard 2D Digital Mammography in BreastScreen Norway,” Radiology 294, no. 2 (2020): 256-264.

[27]

E. R. Myers, P. Moorman, J. M. Gierisch, et al., “Benefits and Harms of Breast Cancer Screening: A Systematic Review,” Jama 314, no. 15 (2015): 1615-1634.

[28]

R. Murakami, N. Uchiyama, H. Tani, T. Yoshida, and S. Kumita, “Comparative Analysis Between Synthetic Mammography Reconstructed From Digital Breast Tomosynthesis and Full-field Digital Mammography for Breast Cancer Detection and Visibility,” European Journal of Radiology Open 7 (2020): 100207.

[29]

S. Weigel, W. Heindel, H. W. Hense, et al., “Breast Density and Breast Cancer Screening With Digital Breast Tomosynthesis: A TOSYMA Trial Subanalysis,” Radiology 306, no. 2 (2023): e221006.

[30]

W. Wang, L. Zhang, J. Sun, Q. Zhao, and J. Shuai, “Predicting the Potential human lncRNA-miRNA Interactions Based on Graph Convolution Network With Conditional Random Field,” Briefings in Bioinformatics 23, no. 6 (2022): bbac463.

[31]

J. Lei, P. Yang, L. Zhang, Y. Wang, and K. Yang, “Diagnostic Accuracy of Digital Breast Tomosynthesis versus Digital Mammography for Benign and Malignant Lesions in Breasts: A Meta-analysis,” European Radio 24 (2014): 595-602.

[32]

P. Skaane, R. Gullien, H. Bjorndal, et al., “Digital Breast Tomosynthesis (DBT): Initial Experience in a Clinical Setting,” Acta Radiologica 53, no. 5 (2012): 524-529.

[33]

Y. Choi, O. H. Woo, H. S. Shin, K. R. Cho, B. K. Seo, and G. Y. Choi, “Quantitative Analysis of Radiation Dosage and Image Quality Between Digital Breast Tomosynthesis (DBT) With Two-dimensional Synthetic Mammography and Full-field Digital Mammography (FFDM),” Clinical Imaging 55 (2019): 12-17.

[34]

D. B. Kopans, “Time for Change in Digital Breast Tomosynthesis Research,” Radiology 302, no. 2 (2022): 293-294.

[35]

A. M. Mota, J. Mendes, and N. Matela, “Digital Breast Tomosynthesis: Towards Dose Reduction Through Image Quality Improvement,” Journal of Imaging 9, no. 6 (2023): 119.

[36]

R. G. Roth, A. D. Maidment, S. P. Weinstein, S. O. Roth, and E. F. Conant, “Digital Breast Tomosynthesis: Lessons Learned From Early Clinical Implementation,” Radiographics 34, no. 4 (2014): E89-E102.

[37]

E. Dhamija, M. Gulati, S. V. S. Deo, A. Gogia, and S. Hari, “Digital Breast Tomosynthesis: An Overview,” Indian Journal of Surgical Oncology 12, no. 2 (2021): 315-329.

[38]

A. Rodriguez-Ruiz, J. Teuwen, S. Vreemann, et al., “New Reconstruction Algorithm for Digital Breast Tomosynthesis: Better Image Quality for Humans and Computers,” Acta Radiologica 59, no. 9 (2018): 1051-1059.

[39]

R. Zeng, A. Badano, and K. J. Myers, “Optimization of Digital Breast Tomosynthesis (DBT) Acquisition Parameters for human Observers: Effect of Reconstruction Algorithms,” Physics in Medicine and Biology 62, no. 7 (2017): 2598-2611.

[40]

M. Ertas, I. Yildirim, M. Kamasak, and A. Akan, “Digital Breast Tomosynthesis Image Reconstruction Using 2D and 3D Total Variation Minimization,” Biomedical Engineering Online [Electronic Resource] 12 (2013): 112.

[41]

H. R. Peppard, B. E. Nicholson, C. M. Rochman, J. K. Merchant, R. C. Mayo, and J. A. Harvey, “Digital Breast Tomosynthesis in the Diagnostic Setting: Indications and Clinical Applications,” Radiographics 35, no. 4 (2015): 975-990.

[42]

S. Vedantham, L. Shi, K. E. Michaelsen, et al., “Digital Breast Tomosynthesis Guided Near Infrared Spectroscopy: Volumetric Estimates of Fibroglandular Fraction and Breast Density From Tomosynthesis Reconstructions,” Biomedical Physics & Engineering Express 1, no. 4 (2015): 045202.

[43]

V. Magni, A. Cozzi, S. Schiaffino, A. Colarieti, and F. Sardanelli, “Artificial Intelligence for Digital Breast Tomosynthesis: Impact on Diagnostic Performance, Reading Times, and Workload in the Era of Personalized Screening,” European Journal of Radiology 158 (2023): 110631.

[44]

F. Shaheen, B. Verma, and M. Asafuddoula, “Impact of Automatic Feature Extraction in Deep Learning Architecture,” In: 2016 International conference on digital image computing: techniques and applications (DICTA). IEEE; 2016: 1-8.

[45]

G. Farias, S. Dormido-Canto, J. Vega, et al., “Automatic Feature Extraction in Large Fusion Databases by Using Deep Learning Approach,” Fusion Engineering and Design 112 (2016): 979-983.

[46]

M. M. Adnan, M. S. M. Rahim, A. Rehman, Z. Mehmood, T. Saba, and R. A. Naqvi, “Automatic Image Annotation Based on Deep Learning Models: A Systematic Review and Future Challenges,” IEEE Access 9 (2021): 50253-50264.

[47]

A. Gordo, J. Almazan, J. Revaud, and D. Larlus, “End-to-end Learning of Deep Visual Representations for Image Retrieval,” International Journal of Computer Vision 124, no. 2 (2017): 237-254.

[48]

Z. Wang, L. Zhang, X. Shu, Q. Lv, and Z. Yi, “An End-to-end Mammogram Diagnosis: A New Multi-instance and Multiscale Method Based on Single-image Feature,” IEEE Transactions on Cognitive and Developmental Systems 13, no. 3 (2020): 535-545.

[49]

L. Wang, Q. He, X. Wang, et al., “Multi-criterion Decision Making-based Multi-channel Hierarchical Fusion of Digital Breast Tomosynthesis and Digital Mammography for Breast Mass Discrimination,” Knowledge-Based Systems 228 (2021): 107303.

[50]

C. Yu and J. Wang, “Data Mining and Mathematical Models in Cancer Prognosis and Prediction,” Medical Review (2021) 2, no. 3 (2022): 285-307.

[51]

R. A. Welikala, P. Remagnino, J. H. Lim, et al., “Automated Detection and Classification of Oral Lesions Using Deep Learning for Early Detection of Oral Cancer,” IEEE Access 8 (2020): 132677-132693.

[52]

D. Crosby, S. Bhatia, K. M. Brindle, et al., “Early Detection of Cancer,” Science 375, no. 6586 (2022): eaay9040.

[53]

A. Alsadoon, G. Al-Naymat, A. H. Osman, B. Alsinglawi, M. Maabreh, and M. R. Islam, “DFCV: A Framework for Evaluation Deep Learning in Early Detection and Classification of Lung Cancer,” Multimedia Tools and Applications 82, no. 28 (2023): 44387-44430.

[54]

R. Ricciardi, G. Mettivier, M. Staffa, et al., “A Deep Learning Classifier for Digital Breast Tomosynthesis,” Physical Medicine 83 (2021): 184-193.

[55]

D. Fornvik, S. Borgquist, M. Larsson, S. Zackrisson, and I. Skarping, “Deep Learning Analysis of Serial Digital Breast Tomosynthesis Images in a Prospective Cohort of Breast Cancer Patients Who Received Neoadjuvant Chemotherapy,” European Journal of Radiology 178 (2024): 111624.

[56]

W. Lee, H. Lee, H. Lee, E. K. Park, H. Nam, and T. Kooi, “Transformer-based Deep Neural Network for Breast Cancer Classification on Digital Breast Tomosynthesis Images,” Radiology: Artificial Intelligence 5, no. 3 (2023): e220159.

[57]

C. D. Lehman, R. D. Wellman, D. S. Buist, et al., “Diagnostic Accuracy of Digital Screening Mammography with and without Computer-Aided Detection,” JAMA Internal Medicine 175, no. 11 (2015): 1828-1837.

[58]

K. J. Geras, R. M. Mann, and L. Moy, “Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives,” Radiology 293, no. 2 (2019): 246-259.

[59]

J. H. Yoon, F. Strand, P. A. T. Baltzer, et al., “Standalone AI for Breast Cancer Detection at Screening Digital Mammography and Digital Breast Tomosynthesis: A Systematic Review and Meta-Analysis,” Radiology 307, no. 5 (2023): e222639.

[60]

J. Bai, R. Posner, T. Wang, C. Yang, and S. Nabavi, “Applying Deep Learning in Digital Breast Tomosynthesis for Automatic Breast Cancer Detection: A Review,” Medical Image Analysis 71 (2021): 102049.

[61]

J. Zhang, J. Wu, X. S. Zhou, F. Shi, and D. Shen, “Recent Advancements in Artificial Intelligence for Breast Cancer: Image Augmentation, Segmentation, Diagnosis, and Prognosis Approaches,” Seminars in Cancer Biology 96 (2023): 11-25.

[62]

M. J. Yaffe, “Detectors for Digital Mammography,” Digital Mammography (2010): 13-31.

[63]

I. K. Maitra, S. Nag, and S. K. Bandyopadhyay, “Technique for Preprocessing of Digital Mammogram,” Computer Methods and Programs in Biomedicine 107, no. 2 (2012): 175-188.

[64]

E. F. Conant, E. F. Beaber, B. L. Sprague, et al., “Breast Cancer Screening Using Tomosynthesis in Combination With Digital Mammography Compared to Digital Mammography Alone: A Cohort Study Within the PROSPR Consortium,” Breast Cancer Research and Treatment 156, no. 1 (2016): 109-116.

[65]

S. K. Yang, W. K. Moon, N. Cho, et al., “Screening Mammography-detected Cancers: Sensitivity of a Computer-aided Detection System Applied to Full-field Digital Mammograms,” Radiology 244, no. 1 (2007): 104-111.

[66]

J. S. The, K. J. Schilling, J. W. Hoffmeister, E. Friedmann, R. McGinnis, and R. G. Holcomb, “Detection of Breast Cancer With Full-field Digital Mammography and Computer-aided Detection,” Ajr American Journal of Roentgenology 192, no. 2 (2009): 337-340.

[67]

S. P. Poplack, A. N. Tosteson, M. R. Grove, W. A. Wells, and P. A. Carney, “Mammography in 53,803 Women From the New Hampshire Mammography Network,” Radiology 217, no. 3 (2000): 832-840.

[68]

H. Sung, J. Ferlay, R. L. Siegel, et al., “Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries,” CA: A Cancer Journal for Clinicians 71, no. 3 (2021): 209-249.

[69]

P. Skaane, S. Sebuodegard, A. I. Bandos, et al., “Performance of Breast Cancer Screening Using Digital Breast Tomosynthesis: Results From the Prospective Population-based Oslo Tomosynthesis Screening Trial,” Breast Cancer Research and Treatment 169, no. 3 (2018): 489-496.

[70]

R. E. Hendrick, “Radiation Doses and Risks in Breast Screening,” Journal of Breast Imaging 2, no. 3 (2020): 188-200.

[71]

I. Sechopoulos, “A Review of Breast Tomosynthesis. Part II. Image Reconstruction, Processing and Analysis, and Advanced Applications,” Medical Physics 40, no. 1 (2013): 014302.

[72]

Y. Z. Tang, A. Al-Arnawoot, and A. Alabousi, “The Impact of Slice Thickness on Diagnostic Accuracy in Digital Breast Tomosynthesis,” Canadian Association of Radiologists Journal 73, no. 3 (2022): 535-541.

[73]

K. Nakashima, T. Uematsu, T. Itoh, et al., “Comparison of Visibility of Circumscribed Masses on Digital Breast Tomosynthesis (DBT) and 2D Mammography: Are Circumscribed Masses Better Visualized and Assured of Being Benign on DBT?,” European Radiology 27, no. 2 (2017): 570-577.

[74]

J. Krammer, S. Zolotarev, I. Hillman, et al., “Evaluation of a New Image Reconstruction Method for Digital Breast Tomosynthesis: Effects on the Visibility of Breast Lesions and Breast Density,” Bjr 92, no. 1103 (2019): 20190345.

[75]

J. M. Park, E. A. Franken, M. Garg, L. L. Fajardo, and L. T. Niklason, “Breast Tomosynthesis: Present Considerations and Future Applications,” Radiographics 27, no. suppl_1 (2007): S231-S240.

[76]

R. E. Sharpe, S. Venkataraman, J. Phillips, et al., “Increased Cancer Detection Rate and Variations in the Recall Rate Resulting From Implementation of 3D Digital Breast Tomosynthesis Into a Population-based Screening Program,” Radiology 278, no. 3 (2016): 698-706.

[77]

J. H. Yoon, E. K. Kim, G. R. Kim, et al., “Comparing Recall Rates Following Implementation of Digital Breast Tomosynthesis to Synthetic 2D Images and Digital Mammography on Women With Breast-conserving Surgery,” European Radiology 30, no. 11 (2020): 6072-6079.

[78]

H. J. Teertstra, C. E. Loo, M. A. van den Bosch, et al., “Breast Tomosynthesis in Clinical Practice: Initial Results,” European Radiology 20, no. 1 (2010): 16-24.

[79]

A. Vourtsis and W. A. Berg, “Breast Density Implications and Supplemental Screening,” European Radiology 29, no. 4 (2019): 1762-1777.

[80]

X.-A. Phi, A. Tagliafico, N. Houssami, M. J. Greuter, and G. H. de Bock, “Digital Breast Tomosynthesis for Breast Cancer Screening and Diagnosis in Women With Dense Breasts-a Systematic Review and Meta-analysis,” BMC Cancer 18 (2018): 1-9.

[81]

I. Hadadi, W. Rae, J. Clarke, M. McEntee, and E. Ekpo, “Breast Cancer Detection: Comparison of Digital Mammography and Digital Breast Tomosynthesis Across Non-dense and Dense Breasts,” Radiography (London) 27, no. 4 (2021): 1027-1032.

[82]

G. Gennaro, S. Del Genio, G. Manco, and F. Caumo, “Phantom-based Analysis of Variations in Automatic Exposure Control Across Three Mammography Systems: Implications for Radiation Dose and Image Quality in Mammography, DBT, and CEM,” European Radiology Experimental 8, no. 1 (2024): 49.

[83]

R. M. Ali, A. England, A. K. Tootell, and P. Hogg, “Radiation Dose From Digital Breast Tomosynthesis Screening-A Comparison With Full Field Digital Mammography,” Journal of Medical Imaging and Radiation Sciences 51, no. 4 (2020): 599-603.

[84]

N. Houssami, D. Bernardi, and G. Gennaro, “Radiation Dose With Digital Breast Tomosynthesis Compared to Digital Mammography: Per-view Analysis,” European Radio 28 (2018): 573-581.

[85]

B. Barufaldi, H. Schiabel, and A. D. A. Maidment, “Design and Implementation of a Radiation Dose Tracking and Reporting System for Mammography and Digital Breast Tomosynthesis,” Physical Medicine 58 (2019): 131-140.

[86]

Y. Shoshan, R. Bakalo, F. Gilboa-Solomon, et al., “Artificial Intelligence for Reducing Workload in Breast Cancer Screening With Digital Breast Tomosynthesis,” Radiology 303, no. 1 (2022): 69-77.

[87]

G. J. Partridge, I. Darker, J. J. James, et al., “How Long Does It Take to Read a Mammogram? Investigating the Reading Time of Digital Breast Tomosynthesis and Digital Mammography,” European Journal of Radiology 177 (2024): 111535.

[88]

S. A. Abdullah Suhaimi, A. Mohamed, M. Ahmad, and K. K. Chelliah, “Effects of Reduced Compression in Digital Breast Tomosynthesis on Pain, Anxiety, and Image Quality,” Malaysian Journal of Medical Sciences 22, no. 6 (2015): 40-46.

[89]

J. X. Hu, C. F. Zhao, S. L. Wang, et al., “Acute Pancreatitis: A Review of Diagnosis, Severity Prediction and Prognosis Assessment From Imaging Technology, Scoring System and Artificial Intelligence,” World Journal of Gastroenterology 29, no. 37 (2023): 5268-5291.

[90]

T. Lefevre and L. Tournois, “Artificial Intelligence and Diagnostics in Medicine and Forensic Science,” Diagnostics (Basel) 13, no. 23 (2023): 3554.

[91]

M. Moor, O. Banerjee, Z. S. H. Abad, et al., “Foundation Models for Generalist Medical Artificial Intelligence,” Nature 616, no. 7956 (2023): 259-265.

[92]

H. Chi, H. Chen, R. Wang, et al., “Proposing New Early Detection Indicators for Pancreatic Cancer: Combining Machine Learning and Neural Networks for Serum miRNA-based Diagnostic Model,” Frontiers in Oncology 13 (2023): 1244578.

[93]

R. Li, Y. Guo, Z. Zhao, et al., “MRI-based Two-stage Deep Learning Model for Automatic Detection and Segmentation of Brain Metastases,” European Radiology 33, no. 5 (2023): 3521-3531.

[94]

D. T. Hoang, G. Dinstag, E. D. Shulman, et al., “A Deep-learning Framework to Predict Cancer Treatment Response From Histopathology Images Through Imputed Transcriptomics,” Nature Cancer 5, no. 9 (2024): 1305-1317.

[95]

E. J. Hwang, W. G. Jeong, P. M. David, M. Arentz, M. Ruhwald, and S. H. Yoon, “AI for Detection of Tuberculosis: Implications for Global Health,” Radiology: Artificial Intelligence 6, no. 2 (2024): e230327.

[96]

Y. Ren, X. Liu, J. Ge, et al., “Ipsilateral Lesion Detection Refinement for Tomosynthesis,” Ieee Transactions on Medical Imaging 42, no. 10 (2023): 3080-3090.

[97]

E. P. V. Le, Y. Wang, Y. Huang, S. Hickman, and F. J. Gilbert, “Artificial Intelligence in Breast Imaging,” Clinical Radiology 74, no. 5 (2019): 357-366.

[98]

T. Uematsu, K. Nakashima, T. L. Harada, H. Nasu, and T. Igarashi, “Comparisons Between Artificial Intelligence Computer-aided Detection Synthesized Mammograms and Digital Mammograms When Used Alone and in Combination With Tomosynthesis Images in a Virtual Screening Setting,” Japanese Journal of Radiology 41, no. 1 (2023): 63-70.

[99]

E. Yagis, A. G. S. De Herrera, and L. Citi, “Generalization Performance of Deep Learning Models in Neurodegenerative Disease Classification,” In: 2019 IEEE international conference on bioinformatics and biomedicine (BIBM). IEEE; 2019: 1692-1698.

[100]

S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos, “Image Segmentation Using Deep Learning: A Survey,” Ieee Transactions on Pattern Analysis and Machine Intelligence 44, no. 7 (2021): 3523-3542.

[101]

C. Chen, C. Qin, H. Qiu, et al., “Deep Learning for Cardiac Image Segmentation: A Review,” Frontiers in Cardiovascular Medicine 7 (2020): 25.

[102]

Z. Q. Zhao, P. Zheng, S. T. Xu, and X. Wu, “Object Detection with Deep Learning: A Review,” IEEE Transactions on Neural Networks and Learning Systems 30, no. 11 (2019): 3212-3232.

[103]

J. Fan, X. Cao, Z. Xue, P.-T. Yap, and D. Shen, “Adversarial Similarity Network for Evaluating Image Alignment in Deep Learning Based Registration,” In: Medical Image Computing and Computer Assisted Intervention-MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I. Springer; 2018: 739-746.

[104]

J. Teuwen, N. Moriakov, C. Fedon, et al., “Deep Learning Reconstruction of Digital Breast Tomosynthesis Images for Accurate Breast Density and Patient-specific Radiation Dose Estimation,” Medical Image Analysis 71 (2021): 102061.

[105]

T. Gomi, Y. Kijima, T. Kobayashi, and Y. Koibuchi, “Evaluation of a Generative Adversarial Network to Improve Image Quality and Reduce Radiation-Dose During Digital Breast Tomosynthesis,” Diagnostics (Basel) 12, no. 2 (2022): 495.

[106]

J. Reifman and E. E. Feldman, “Multilayer Perceptron for Nonlinear Programming,” Computers & Operations Research 29, no. 9 (2002): 1237-1250.

[107]

T. Kim and T. Adali, “Fully Complex Multi-layer Perceptron Network for Nonlinear Signal Processing,” Journal of Signal Processing Systems 32 (2002): 29-43.

[108]

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based Learning Applied to Document Recognition,” Proceedings of the Ieee 86, no. 11 (1998): 2278-2324.

[109]

P.-S. Zhu, Y.-R. Zhang, J.-Y. Ren, et al., “Ultrasound-based Deep Learning Using the VGGNet Model for the Differentiation of Benign and Malignant Thyroid Nodules: A Meta-analysis,” Frontiers in Oncology 12 (2022): 944859.

[110]

N. Zakaria and Y. M. M. Hassim, “Improved Image Classification Task Using Enhanced Visual Geometry Group of Convolution Neural Networks,” International Journal on Informatics Visualization 7, no. 4 (2023): 2498-2505.

[111]

N. Veni and J. Manjula, “High-performance Visual Geometric Group Deep Learning Architectures for MRI Brain Tumor Classification,” The Journal of Supercomputing 78, no. 10 (2022): 12753-12764.

[112]

N. Zakaria and Y. M. Mohmad Hassim, “A Review Study of the Visual Geometry Group Approaches for Image Classification,” Journal of Applied Science, Technology and Computing 1, no. 1 (2024): 14-28.

[113]

L. Balagourouchetty, J. K. Pragatheeswaran, B. Pottakkat, and R. Govindarajalou, “GoogLeNet-Based Ensemble FCNet Classifier for Focal Liver Lesion Diagnosis,” IEEE Journal of Biomedical and Health Informatics 24, no. 6 (2020): 1686-1694.

[114]

S. M. Sam, K. Kamardin, N. N. A. Sjarif, and N. Mohamed, “Offline Signature Verification Using Deep Learning Convolutional Neural Network (CNN) Architectures GoogLeNet Inception-v1 and Inception-v3,” Procedia Computer Science 161 (2019): 475-483.

[115]

W. Xu, Y. L. Fu, and D. Zhu, “ResNet and Its Application to Medical Image Processing: Research Progress and Challenges,” Computer Methods and Programs in Biomedicine 240 (2023): 107660.

[116]

L. Borawar and K. R. ResNet, Solving Vanishing Gradient in Deep Networks. In: Proceedings of International Conference on Recent Trends in Computing: ICRTC 2022. Springer; 2023: 235-247.

[117]

Z. Kurt, S. Isik, Z. Kaya, Y. Anagun, N. Koca, and S. Cicek, “Evaluation of EfficientNet Models for COVID-19 Detection Using Lung Parenchyma,” Neural Computing and Applications 35, no. 16 (2023): 12121-12132.

[118]

H. O. Ahmed and A. K. Nandi, “High Performance Breast Cancer Diagnosis From Mammograms Using Mixture of Experts With EfficientNet Features (MoEffNet),” IEEE Access 12 (2024): 133703-133725.

[119]

Q. Abbas, Y. Daadaa, U. Rashid, M. Z. Sajid, and M. E. A. Ibrahim, “HDR-EfficientNet: A Classification of Hypertensive and Diabetic Retinopathy Using Optimize EfficientNet Architecture,” Diagnostics (Basel) 13, no. 20 (2023): 3236.

[120]

N. Li, Y. Chen, W. Li, Z. Ding, D. Zhao, and N. S. BViT, “Broad Attention-based Vision Transformer,” IEEE Transactions on Neural Networks and Learning Systems 35, no. 9 (2023): 12772-12783.

[121]

J. Maurício, I. Domingues, and J. Bernardino, “Comparing Vision Transformers and Convolutional Neural Networks for Image Classification: A Literature Review,” Applied Sciences 13, no. 9 (2023): 5521.

[122]

O. N. Manzari, H. Ahmadabadi, H. Kashiani, S. B. Shokouhi, and A. Ayatollahi, “MedViT: A Robust Vision Transformer for Generalized Medical Image Classification,” Computers in Biology and Medicine 157 (2023): 106791.

[123]

L. Zhang, X. Wang, D. Yang, et al., “Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation,” Ieee Transactions on Medical Imaging 39, no. 7 (2020): 2531-2540.

[124]

F. J. Gilbert, L. Tucker, and K. C. Young, “Digital Breast Tomosynthesis (DBT): A Review of the Evidence for Use as a Screening Tool,” Clinical Radiology 71, no. 2 (2016): 141-150.

[125]

C. Mandoul, C. Verheyden, I. Millet, et al., “Breast Tomosynthesis: What Do We Know and Where Do We Stand?,” Diagn Interv Imaging 100, no. 10 (2019): 537-551.

[126]

P. S. Sujlana, M. Mahesh, S. Vedantham, S. C. Harvey, L. A. Mullen, and R. W. Woods, “Digital Breast Tomosynthesis: Image Acquisition Principles and Artifacts,” Clinical Imaging 55 (2019): 188-195.

[127]

C. I. Lee and C. D. Lehman, “Digital Breast Tomosynthesis and the Challenges of Implementing an Emerging Breast Cancer Screening Technology into Clinical Practice,” J Am Coll Radiol 13, no. 11S (2016): R61-R66.

[128]

Y. Gao, L. Moy, and S. L. Heller, “Digital Breast Tomosynthesis: Update on Technology, Evidence, and Clinical Practice,” Radiographics 41, no. 2 (2021): 321-337.

[129]

N. W. Marshall and H. Bosmans, “Performance Evaluation of Digital Breast Tomosynthesis Systems: Physical Methods and Experimental Data,” Physics in Medicine and Biology 67, no. 22 (2022): 22TR03.

[130]

R. K. Samala, H.-P. Chan, L. Hadjiiski, M. A. Helvie, C. D. Richter, and K. H. Cha, “Breast Cancer Diagnosis in Digital Breast Tomosynthesis: Effects of Training Sample Size on Multi-stage Transfer Learning Using Deep Neural Nets,” Ieee Transactions on Medical Imaging 38, no. 3 (2018): 686-696.

[131]

R. V. Aswiga and A. P. Shanthi, “A Multilevel Transfer Learning Technique and LSTM Framework for Generating Medical Captions for Limited CT and DBT Images,” Journal of Digital Imaging 35, no. 3 (2022): 564-580.

[132]

R. K. Samala, H. P. Chan, L. M. Hadjiiski, M. A. Helvie, C. Richter, and K. Cha, “Evolutionary Pruning of Transfer Learned Deep Convolutional Neural Network for Breast Cancer Diagnosis in Digital Breast Tomosynthesis,” Physics in Medicine and Biology 63, no. 9 (2018): 095005.

[133]

Y.-D. Zhang, S. C. Satapathy, D. S. Guttery, J. M. Górriz, and S.-H. Wang, “Improved Breast Cancer Classification Through Combining Graph Convolutional Network and Convolutional Neural Network,” Inf Process Manage 58, no. 2 (2021): 102439.

[134]

S. D. Pawar, K. K. Sharma, S. G. Sapate, et al., “Multichannel DenseNet Architecture for Classification of Mammographic Breast Density for Breast Cancer Detection,” Front Public Health 10 (2022): 885212.

[135]

Z. Q. Habeeb, B. Vuksanovic, and I. Q. Al-Zaydi, “Breast Cancer Detection Using Image Processing and Machine Learning,” J Image Graph (UK) 11, no. 1 (2023): 1-8.

[136]

X. Chen, Y. Zhang, J. Zhou, et al., “Diagnosis of Architectural Distortion on Digital Breast Tomosynthesis Using Radiomics and Deep Learning,” Frontiers in oncology 12 (2022): 991892.

[137]

D. Esposito, G. Paternò, R. Ricciardi, A. Sarno, P. Russo, and G. Mettivier, “A Pre-processing Tool to Increase Performance of Deep Learning-based CAD in Digital Breast Tomosynthesis,” Health Technology 14, no. 1 (2024): 81-91.

[138]

K. Mendel, H. Li, D. Sheth, and M. Giger, “Transfer Learning from Convolutional Neural Networks for Computer-Aided Diagnosis: A Comparison of Digital Breast Tomosynthesis and Full-Field Digital Mammography,” Academic Radiology 26, no. 6 (2019): 735-743.

[139]

A. A. Mukhlif, B. Al-Khateeb, and M. A. Mohammed, “Incorporating a Novel Dual Transfer Learning Approach for Medical Images,” Sensors (Basel) 23, no. 2 (2023): 570.

[140]

N. A. Harron, S. N. Sulaiman, M. K. Osman, I. S. Isa, N. K. A. Karim, and M. I. F. Maruzuki, “Deep Learning Approach for Blur Detection of Digital Breast Tomosynthesis Images,” Journal of Electrical & Electronic Systems Research 21 (2022): 39-44.

[141]

Z. Cao, L. Duan, G. Yang, T. Yue, and Q. Chen, “An Experimental Study on Breast Lesion Detection and Classification From Ultrasound Images Using Deep Learning Architectures,” BMC Medical Imaging 19, no. 1 (2019): 51.

[142]

M. A. Kassem, K. M. Hosny, R. Damasevicius, and M. M. Eltoukhy, “Machine Learning and Deep Learning Methods for Skin Lesion Classification and Diagnosis: A Systematic Review,” Diagnostics (Basel) 11, no. 8 (2021): 1390.

[143]

H. Jiang, Z. Diao, T. Shi, et al., “A Review of Deep Learning-based Multiple-lesion Recognition From Medical Images: Classification, Detection and Segmentation,” Computers in Biology and Medicine 157 (2023): 106726.

[144]

M. Ahammed, M. Al Mamun, and M. S. Uddin, “A Machine Learning Approach for Skin Disease Detection and Classification Using Image Segmentation,” Healthcare Analytics 2 (2022): 100122.

[145]

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification With Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems. (Curran Associates, Inc, 2012): 1-9.

[146]

S. S. Islam, S. Rahman, M. M. Rahman, E. K. Dey, and M. Shoyaib, “Application of Deep Learning to Computer Vision: A Comprehensive Study,” In: 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV). IEEE; 2016: 592-597.

[147]

X. Chen, C. Lian, H. H. Deng, et al., “Fast and Accurate Craniomaxillofacial Landmark Detection via 3D Faster R-CNN,” IEEE Transactions on Medical Imaging 40, no. 12 (2021): 3867-3878.

[148]

S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection With Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence 39, no. 6 (2017): 1137-1149.

[149]

A. Rasheed, S. H. Shirazi, A. I. Umar, M. Shahzad, W. Yousaf, and Z. Khan, “Cervical Cell's Nucleus Segmentation Through an Improved UNet Architecture,” PLoS ONE 18, no. 10 (2023): e0283568.

[150]

Y. Su, Q. Liu, W. Xie, and P. Hu, “YOLO-LOGO: A Transformer-based YOLO Segmentation Model for Breast Mass Detection and Segmentation in Digital Mammograms,” Computer Methods and Programs in Biomedicine 221 (2022): 106903.

[151]

Y. D. Jeon, M. J. Kang, S. U. Kuh, et al., “Deep Learning Model Based on You Only Look Once Algorithm for Detection and Visualization of Fracture Areas in Three-Dimensional Skeletal Images,” Diagnostics (Basel) 14, no. 1 (2023): 11.

[152]

M. Durve, S. Orsini, A. Tiribocchi, et al., “Benchmarking YOLOv5 and YOLOv7 Models With DeepSORT for Droplet Tracking Applications,” The European Physical Journal. E, Soft Matter 46, no. 5 (2023): 32.

[153]

R. Azad, E. K. Aghdam, A. Rauland, et al., “Medical Image Segmentation Review: The Success of U-Net,” IEEE Transactions on Pattern Analysis and Machine Intelligence 46, no. 12 (2024): 10076-10095.

[154]

J. Cheng, W. Xiong, W. Chen, Y. Gu, and Y. Li, “Pixel-level Crack Detection Using U-Net,” In: TENCON 2018 - 2018 IEEE Region 10 Conference. IEEE; 2018: 0462-0466.

[155]

M. Agarwal, S. K. Gupta, and K. K. Biswas, “Development of a Compressed FCN Architecture for Semantic Segmentation Using Particle Swarm Optimization,” Neural Computing and Applications 35, no. 16 (2023): 11833-11846.

[156]

E. Evain, C. Raynaud, C. Ciofolo-Veit, et al., “Breast Nodule Classification With Two-dimensional Ultrasound Using Mask-RCNN Ensemble Aggregation,” Diagnostic and Interventional Imaging 102, no. 11 (2021): 653-658.

[157]

L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic Image Segmentation With Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 4 (2018): 834-848.

[158]

K. Zhou, W. Li, and D. Zhao, “Deep Learning-based Breast Region Extraction of Mammographic Images Combining Pre-processing Methods and Semantic Segmentation Supported by DeepLab v3,” Technology and Health Care 30, no. S1 (2022): 173-190.

[159]

B. Felfeliyan, A. Hareendranathan, G. Kuntze, J. L. Jaremko, and J. L. Ronsky, “Improved-Mask R-CNN: Towards an Accurate Generic MSK MRI Instance Segmentation Platform (data From the Osteoarthritis Initiative),” Computerized Medical Imaging and Graphics 97 (2022): 102056.

[160]

F. Shamshad, S. Khan, S. W. Zamir, et al., “Transformers in Medical Imaging: A Survey,” Medical Image Analysis 88 (2023): 102802.

[161]

K. He, C. Gan, Z. Li, et al., “Transformers in Medical Image Analysis,” Intelligent Medicine 3, no. 1 (2023): 59-78.

[162]

K. Xia and J. Wang, “Recent Advances of Transformers in Medical Image Analysis: A Comprehensive Review,” MedComm 2, no. 1 (2023): e38.

[163]

K. Han, Y. Wang, H. Chen, et al., “A Survey on Vision Transformer,” IEEE Transactions on Pattern Analysis and Machine Intelligence 45, no. 1 (2023): 87-110.

[164]

S. Yan, C. Wang, W. Chen, and J. Lyu, “Swin Transformer-based GAN for Multi-modal Medical Image Translation,” Frontiers in Oncology 12 (2022): 942511.

[165]

S. V. Fotin, Y. Yin, H. Haldankar, J. W. Hoffmeister, and S. Periaswamy, “Detection of Soft Tissue Densities From Digital Breast Tomosynthesis: Comparison of Conventional and Deep Learning Approaches,” Medical Imaging 2016: Computer-aided Diagnosis. (SPIE, 2016): 228-233.

[166]

N. Konz, M. Buda, H. Gu, et al., “A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis,” JAMA Network Open 6, no. 2 (2023): e230524.

[167]

M. Buda, A. Saha, R. Walsh, et al., “A Data Set and Deep Learning Algorithm for the Detection of Masses and Architectural Distortions in Digital Breast Tomosynthesis Images,” JAMA Network Open 4, no. 8 (2021): e2119100.

[168]

R. Aggarwal, V. Sounderajah, G. Martin, et al., “Diagnostic Accuracy of Deep Learning in Medical Imaging: A Systematic Review and Meta-analysis,” NPJ Digital Medicine 4, no. 1 (2021): 65.

[169]

A. J. Maxwell, M. Michell, Y. Y. Lim, et al., “A Randomised Trial of Screening With Digital Breast Tomosynthesis plus Conventional Digital 2D Mammography versus 2D Mammography Alone in Younger Higher Risk Women,” European Journal of Radiology 94 (2017): 133-139.

[170]

K. Simon, K. Dodelzon, M. Drotman, et al., “Accuracy of Synthetic 2D Mammography Compared with Conventional 2D Digital Mammography Obtained with 3D Tomosynthesis,” AJR American Journal of Roentgenology 212, no. 6 (2019): 1406-1411.

[171]

S. Kulkarni, V. Freitas, and D. Muradali, “Digital Breast Tomosynthesis: Potential Benefits in Routine Clinical Practice,” Canadian Association of Radiologists Journal 73, no. 1 (2022): 107-120.

[172]

D. B. Russakoff, T. Rohlfing, K. Mori, et al., “Fast Generation of Digitally Reconstructed Radiographs Using Attenuation Fields With Application to 2D-3D Image Registration,” IEEE Transactions on Medical Imaging 24, no. 11 (2005): 1441-1454.

[173]

Y. Gao and L. Moy, “Phase-Sensitive Breast Tomosynthesis May Address Shortcomings of Digital Breast Tomosynthesis,” Radiology 306, no. 2 (2022): e222184.

[174]

I. Kassis, D. Lederman, G. Ben-Arie, M. Giladi Rosenthal, I. Shelef, and Y. Zigel, “Detection of Breast Cancer in Digital Breast Tomosynthesis With Vision Transformers,” Scientific Reports 14, no. 1 (2024): 22149.

[175]

S. M. McKinney, M. Sieniek, V. Godbole, et al., “International Evaluation of an AI System for Breast Cancer Screening,” Nature 577, no. 7788 (2020): 89-94.

[176]

X. Lai, W. Yang, and R. Li, “DBT Masses Automatic Segmentation Using U-Net Neural Networks,” Computational and Mathematical Methods in Medicine 2020, no. 1 (2020): 7156165.

[177]

M. Fan, Y. Li, S. Zheng, W. Peng, W. Tang, and L. Li, “Computer-aided Detection of Mass in Digital Breast Tomosynthesis Using a Faster Region-based Convolutional Neural Network,” Methods 166 (2019): 103-111.

[178]

M. Fan, H. Zheng, S. Zheng, et al., “Mass Detection and Segmentation in Digital Breast Tomosynthesis Using 3D-Mask Region-Based Convolutional Neural Network: A Comparative Analysis,” Frontiers in Molecular Biosciences 7 (2020): 599333.

[179]

M. B. Hossain, R. M. Nishikawa, and J. Lee, “Developing Breast Lesion Detection Algorithms for Digital Breast Tomosynthesis: Leveraging False Positive Findings,” Medical Physics 49, no. 12 (2022): 7596-7608.

[180]

J. Sun, X. Wang, N. Xiong, and J. Shao, “Learning Sparse Representation With Variational Auto-encoder for Anomaly Detection,” IEEE Access 6 (2018): 33353-33361.

[181]

R. F. Mansour, J. Escorcia-Gutierrez, M. Gamarra, D. Gupta, O. Castillo, and S. Kumar, “Unsupervised Deep Learning Based Variational Autoencoder Model for COVID-19 Diagnosis and Classification,” Pattern Recognition Letters 151 (2021): 267-274.

[182]

H. Uzunova, S. Schultz, H. Handels, and J. Ehrhardt, “Unsupervised Pathology Detection in Medical Images Using Conditional Variational Autoencoders,” International Journal of Computer Assisted Radiology and Surgery 14, no. 3 (2019): 451-461.

[183]

I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative Adversarial Networks,” Communications of the ACM 63, no. 11 (2020): 139-144.

[184]

N. K. Singh and K. Raza, “Medical Image Generation Using Generative Adversarial Networks: A Review,” in Health Informatics: A Computational Perspective in Healthcare, ed. R. Patgiri, A. Biswas, and P. Roy (Singapore: Springer, 2021): 77-96.

[185]

D. Hu, L. Wang, W. Jiang, S. Zheng, and B. Li, “A Novel Image Steganography Method via Deep Convolutional Generative Adversarial Networks,” IEEE Access 6 (2018): 38303-38314.

[186]

R. Toda, A. Teramoto, M. Kondo, K. Imaizumi, K. Saito, and H. Fujita, “Lung Cancer CT Image Generation From a Free-form Sketch Using Style-based pix2pix for Data Augmentation,” Scientific Reports 12, no. 1 (2022): 12867.

[187]

Y. Zhang, S. Liu, C. Dong, X. Zhang, and Y. Yuan, “Multiple Cycle-in-Cycle Generative Adversarial Networks for Unsupervised Image Super-Resolution,” IEEE Transactions on Image Processing 29 (2019): 1101-1112.

[188]

Q. Yang, P. Yan, Y. Zhang, et al., “Low-Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss,” IEEE Transactions on Medical Imaging 37, no. 6 (2018): 1348-1357.

[189]

H. Ma, D. Liu, and F. Wu, “Rectified Wasserstein Generative Adversarial Networks for Perceptual Image Restoration,” IEEE Transactions on Pattern Analysis and Machine Intelligence 45, no. 3 (2022): 3648-3663.

[190]

Z. Ni, W. Yang, S. Wang, L. Ma, and S. Kwong, “Towards Unsupervised Deep Image Enhancement With Generative Adversarial Network,” IEEE Transactions on Image Processing 29 (2020): 9140-9151.

[191]

V. Bharti, B. Biswas, and K. K. Shukla, “EMOCGAN: A Novel Evolutionary Multiobjective Cyclic Generative Adversarial Network and Its Application to Unpaired Image Translation,” Neural Computing and Applications 34, no. 24 (2022): 21433-21447.

[192]

P. Hambar, Z. Gosher, S. Fengade, J. Jain, R. Nikam, and S. Dange, “Contrastive Learning Approach for Text-to-Image Synthesis,” In: 2023 International Conference on Advanced Computing Technologies and Applications (ICACTA). IEEE; 2023: 1-7.

[193]

C. Zheng, T.-L. Vuong, J. Cai, and D. Phung, “MoVQ: Modulating Quantized Vectors for High-Fidelity Image Generation,” Advances in Neural Information Processing Systems 35 (2022): 23412-23425.

[194]

A. Gallucci, D. Znamenskiy, Y. Long, N. Pezzotti, and M. Petkovic, “Generating High-Resolution 3D Faces and Bodies Using VQ-VAE-2 With PixelSNAIL Networks on 2D Representations,” Sensors (Basel) 23, no. 3 (2023): 1168.

[195]

I. Gligorea, M. Cioca, R. Oancea, A.-T. Gorski, H. Gorski, and P. Tudorache, “Adaptive Learning Using Artificial Intelligence in E-learning: A Literature Review,” Education Sciences 13, no. 12 (2023): 1216.

[196]

W. Lin, Z. Zhao, X. Zhang, et al., “PMC-CLIP: Contrastive Language-Image Pre-training Using Biomedical Documents,” In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2023: 525-536.

[197]

J. Liu, H.-Y. Zhou, C. Li, et al., “MLIP: Medical Language-Image Pre-training With Masked Local Representation Learning,” In: 2024 IEEE International Symposium on Biomedical Imaging (ISBI). IEEE; 2024: 1-5.

[198]

S. W. Zamir, A. Arora, S. Khan, et al., “Learning Enriched Features for Real Image Restoration and Enhancement,” In: Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXV 16. Springer; 2020: 492-511.

[199]

L. Chen, P. Bentley, K. Mori, K. Misawa, M. Fujiwara, and D. Rueckert, “Self-supervised Learning for Medical Image Analysis Using Image Context Restoration,” Medical Image Analysis 58 (2019): 101539.

[200]

X. Liu, L. Song, S. Liu, and Y. Zhang, “A Review of Deep-learning-based Medical Image Segmentation Methods,” Sustainability 13, no. 3 (2021): 1224.

[201]

L. Cai, J. Gao, and D. Zhao, “A Review of the Application of Deep Learning in Medical Image Classification and Segmentation,” Annals of Translational Medicine 8, no. 11 (2020): 713.

[202]

D. Nie, R. Trullo, J. Lian, et al., “Medical Image Synthesis With Deep Convolutional Adversarial Networks,” IEEE Transactions on Biomedical Engineering 65, no. 12 (2018): 2720-2730.

[203]

D. Nie, R. Trullo, J. Lian, et al., “Medical Image Synthesis With Context-aware Generative Adversarial Networks,” In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, September 11-13, 2017, Proceedings, Part III 20. Springer; 2017: 417-425.

[204]

X. Li, L. Yu, H. Chen, C.-W. Fu, L. Xing, and P.-A. Heng, “Transformation-consistent Self-ensembling Model for Semisupervised Medical Image Segmentation,” IEEE Transactions on Neural Networks and Learning Systems 32, no. 2 (2020): 523-534.

[205]

T. Fernando, H. Gammulle, S. Denman, S. Sridharan, and C. Fookes, “Deep Learning for Medical Anomaly Detection-a Survey,” ACM Computing Surveys (CSUR) 54, no. 7 (2021): 1-37.

[206]

E. Shivhare and V. Saxena, “Optimized Generative Adversarial Network Based Breast Cancer Diagnosis With Wavelet and Texture Features,” Multimedia Systems 28, no. 5 (2022): 1639-1655.

[207]

E. Strelcenia and S. Prakoonwit, “Improving Cancer Detection Classification Performance Using GANs in Breast Cancer Data,” IEEE Access 11 (2023): 71594-71615.

[208]

S. Guan and M. Loew, “Breast Cancer Detection Using Synthetic Mammograms From Generative Adversarial Networks in Convolutional Neural Networks,” Journal of Medical Imaging (Bellingham) 6, no. 3 (2019): 031411.

[209]

M. Gao, J. A. Fessler, and H. P. Chan, “Deep Convolutional Neural Network with Adversarial Training for Denoising Digital Breast Tomosynthesis Images,” IEEE Transactions on Medical Imaging 40, no. 7 (2021): 1805-1816.

[210]

A. Swiecicki, N. Konz, M. Buda, and M. A. Mazurowski, “A Generative Adversarial Network-based Abnormality Detection Using Only Normal Images for Model Training With Application to Digital Breast Tomosynthesis,” Scientific Reports 11, no. 1 (2021): 10276.

[211]

D. Shah, M. A. Ullah Khan, and M. Abrar, “Reliable Breast Cancer Diagnosis With Deep Learning: DCGAN-Driven Mammogram Synthesis and Validity Assessment,” Applied Computational Intelligence and Soft Computing 2024, no. 1 (2024): 1122109.

[212]

F. Shahidi, “Breast Cancer Histopathology Image Super-resolution Using Wide-attention Gan With Improved Wasserstein Gradient Penalty and Perceptual Loss,” IEEE Access 9 (2021): 32795-32809.

[213]

J. Lee and R. M. Nishikawa, “Identifying Women with Mammographically-Occult Breast Cancer Leveraging GAN-Simulated Mammograms,” IEEE Transactions on Medical Imaging 41, no. 1 (2022): 225-236.

[214]

M. Staffa, L. D'Errico, R. Ricciardi, et al., “How to Increase and Balance Current DBT Datasets via an Evolutionary GAN: Preliminary Results,” In: 2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid). IEEE; 2022: 913-920.

[215]

J. Son, S. E. Lee, E. K. Kim, and S. Kim, “Prediction of Breast Cancer Molecular Subtypes Using Radiomics Signatures of Synthetic Mammography From Digital Breast Tomosynthesis,” Scientific Reports 10, no. 1 (2020): 21566.

[216]

M. Ma, R. Liu, C. Wen, et al., “Predicting the Molecular Subtype of Breast Cancer and Identifying Interpretable Imaging Features Using Machine Learning Algorithms,” European Radiology 32, no. 3 (2022): 1652-1662.

[217]

J. Gu, T. Tong, C. He, et al., “Deep Learning Radiomics of Ultrasonography Can Predict Response to Neoadjuvant Chemotherapy in Breast Cancer at an Early Stage of Treatment: A Prospective Study,” European Radiology 32, no. 3 (2022): 2099-2109.

[218]

D. Zuo, L. Yang, Y. Jin, H. Qi, Y. Liu, and L. Ren, “Machine Learning-based Models for the Prediction of Breast Cancer Recurrence Risk,” BMC Medical Informatics and Decision Making 23, no. 1 (2023): 276.

[219]

D. Shimokawa, K. Takahashi, K. Oba, et al., “Deep Learning Model for Predicting the Presence of Stromal Invasion of Breast Cancer on Digital Breast Tomosynthesis,” Radiological Physics and Technology 16, no. 3 (2023): 406-413.

[220]

M. M. Schmitgen, I. Niedtfeld, R. Schmitt, et al., “Individualized Treatment Response Prediction of Dialectical Behavior Therapy for Borderline Personality Disorder Using Multimodal Magnetic Resonance Imaging,” Brain and Behavior 9, no. 9 (2019): e01384.

[221]

B. Rigaud, O. O. Weaver, J. B. Dennison, et al., “Deep Learning Models for Automated Assessment of Breast Density Using Multiple Mammographic Image Types,” Cancers (Basel) 14, no. 20 (2022): 5003.

[222]

K. Michielsen, A. Rodriguez-Ruiz, I. Reiser, J. G. Nagy, and I. Sechopoulos, “Iodine Quantification in Limited Angle Tomography,” Medical Physics 47, no. 10 (2020): 4906-4916.

[223]

H. Jang and J. Baek, “Convolutional Neural Network-based Model Observer for Signal Known Statistically Task in Breast Tomosynthesis Images,” Medical Physics 50, no. 10 (2023): 6390-6408.

[224]

M. Gao, J. A. Fessler, and H. P. Chan, “Model-based Deep CNN-regularized Reconstruction for Digital Breast Tomosynthesis With a Task-based CNN Image Assessment Approach,” Physics in Medicine and Biology 68, no. 24 (2023): 245024.

[225]

T. Su, X. Deng, J. Yang, et al., “DIR-DBTnet: Deep Iterative Reconstruction Network for Three-dimensional Digital Breast Tomosynthesis Imaging,” Medical Physics 48, no. 5 (2021): 2289-2300.

[226]

B. Yang, Y. Wu, Z. Zhou, et al., “A Collection Input Based Support Tensor Machine for Lesion Malignancy Classification in Digital Breast Tomosynthesis,” Physics in Medicine and Biology 64, no. 23 (2019): 235007.

[227]

J. Wang, H. Sun, K. Jiang, et al., “CAPNet: Context Attention Pyramid Network for Computer-aided Detection of Microcalcification Clusters in Digital Breast Tomosynthesis,” Computer Methods and Programs in Biomedicine 242 (2023): 107831.

[228]

R. K. Samala, H. P. Chan, L. Hadjiiski, M. A. Helvie, J. Wei, and K. Cha, “Mass Detection in Digital Breast Tomosynthesis: Deep Convolutional Neural Network With Transfer Learning From Mammography,” Medical Physics 43, no. 12 (2016): 6654-6666.

[229]

A. M. Mota, M. J. Clarkson, P. Almeida, and N. Matela, “Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs,” Journal of Imaging 8, no. 9 (2022): 231.

[230]

E. F. Conant, A. Y. Toledano, S. Periaswamy, et al., “Improving Accuracy and Efficiency With Concurrent Use of Artificial Intelligence for Digital Breast Tomosynthesis,” Radiology: Artificial Intelligence 1, no. 4 (2019): e180096.

[231]

B. Xiao, H. Sun, Y. Meng, et al., “Classification of Microcalcification Clusters in Digital Breast Tomosynthesis Using Ensemble Convolutional Neural Network,” BioMedical Engineering OnLine 20, no. 1 (2021): 71.

[232]

H. M. Whitney, H. Li, Y. Ji, P. Liu, and M. L. Giger, “Comparison of Breast MRI Tumor Classification Using Human-Engineered Radiomics, Transfer Learning from Deep Convolutional Neural Networks, and Fusion Methods,” Proceedings of the IEEE 108, no. 1 (2020): 163-177.

[233]

T. P. Matthews, S. Singh, B. Mombourquette, et al., “A Multisite Study of a Breast Density Deep Learning Model for Full-Field Digital Mammography and Synthetic Mammography,” Radiology: Artificial Intelligence 3, no. 1 (2021): e200015.

[234]

D. H. Kim, S. T. Kim, J. M. Chang, and Y. M. Ro, “Latent Feature Representation With Depth Directional Long-term Recurrent Learning for Breast Masses in Digital Breast Tomosynthesis,” Physics in Medicine and Biology 62, no. 3 (2017): 1009-1031.

[235]

M. L. Altoe, A. Marone, H. K. Kim, et al., “Diffuse Optical Tomography of the Breast: A Potential Modifiable Biomarker of Breast Cancer Risk With Neoadjuvant Chemotherapy,” Biomedical Optics Express 10, no. 8 (2019): 4305-4315.

[236]

A. S. Tagliafico, B. Bignotti, F. Rossi, et al., “Breast Cancer Ki-67 Expression Prediction by Digital Breast Tomosynthesis Radiomics Features,” European Radiology Experimental 3, no. 1 (2019): 36.

[237]

M. G. Davey, M. S. Davey, M. R. Boland, E. J. Ryan, A. J. Lowery, and M. J. Kerin, “Radiomic Differentiation of Breast Cancer Molecular Subtypes Using Pre-operative Breast Imaging—A Systematic Review and Meta-analysis,” European Journal of Radiology 144 (2021): 109996.

[238]

E. K. Park, K. S. Lee, B. K. Seo, et al., “Machine Learning Approaches to Radiogenomics of Breast Cancer Using Low-Dose Perfusion Computed Tomography: Predicting Prognostic Biomarkers and Molecular Subtypes,” Scientific Reports 9, no. 1 (2019): 17847.

[239]

I. Nissar, S. Alam, S. Masood, and M. Kashif, “MOB-CBAM: A Dual-channel Attention-based Deep Learning Generalizable Model for Breast Cancer Molecular Subtypes Prediction Using Mammograms,” Computer Methods and Programs in Biomedicine 248 (2024): 108121.

[240]

S. Cai, M. Yao, D. Cai, et al., “Association Between Digital Breast Tomosynthesis and Molecular Subtypes of Breast Cancer,” Oncology Letters 17, no. 3 (2019): 2669-2676.

[241]

S. Huang, J. Yang, S. Fong, and Q. Zhao, “Artificial Intelligence in Cancer Diagnosis and Prognosis: Opportunities and Challenges,” Cancer Letters 471 (2020): 61-71.

[242]

E. F. Conant, S. P. Zuckerman, E. S. McDonald, et al., “Five Consecutive Years of Screening With Digital Breast Tomosynthesis: Outcomes by Screening Year and Round,” Radiology 295, no. 2 (2020): 285-293.

[243]

G. Chugh, S. Kumar, and N. Singh, “Survey on Machine Learning and Deep Learning Applications in Breast Cancer Diagnosis,” Cognitive Computation 13, no. 6 (2021): 1451-1470.

[244]

R. Ha, P. Chang, S. Mutasa, et al., “Convolutional Neural Network Using a Breast MRI Tumor Dataset Can Predict Oncotype Dx Recurrence Score,” Journal of Magnetic Resonance Imaging 49, no. 2 (2019): 518-524.

[245]

M. A. Durand, B. M. Haas, X. Yao, et al., “Early Clinical Experience With Digital Breast Tomosynthesis for Screening Mammography,” Radiology 274, no. 1 (2015): 85-92.

[246]

N. Alsheik, L. Blount, Q. Qiong, et al., “Outcomes by Race in Breast Cancer Screening with Digital Breast Tomosynthesis versus Digital Mammography,” Journal of the American College of Radiology 18, no. 7 (2021): 906-918.

[247]

E. F. Conant, W. E. Barlow, S. D. Herschorn, et al., “Association of Digital Breast Tomosynthesis vs Digital Mammography with Cancer Detection and Recall Rates by Age and Breast Density,” JAMA oncology 5, no. 5 (2019): 635-642.

[248]

M. Eriksson, S. Destounis, K. Czene, et al., “A Risk Model for Digital Breast Tomosynthesis to Predict Breast Cancer and Guide Clinical Care,” Science Translational Medicine 14, no. 644 (2022): eabn3971.

[249]

K. Johnson, K. Lång, D. M. Ikeda, A. Akesson, I. Andersson, and S. Zackrisson, “Interval Breast Cancer Rates and Tumor Characteristics in the Prospective Population-based Malmö Breast Tomosynthesis Screening Trial,” Radiology 299, no. 3 (2021): 559-567.

[250]

S. Niu, T. Yu, Y. Cao, Y. Dong, Y. Luo, and X. Jiang, “Digital Breast Tomosynthesis-based Peritumoral Radiomics Approaches in the Differentiation of Benign and Malignant Breast Lesions,” Diagnostic and Interventional Radiology 28, no. 3 (2022): 217-225.

[251]

D. Wang, M. Liu, Z. Zhuang, et al., “Radiomics Analysis on Digital Breast Tomosynthesis: Preoperative Evaluation of Lymphovascular Invasion Status in Invasive Breast Cancer,” Academic Radiology 29, no. 12 (2022): 1773-1782.

[252]

M. Bahl, S. Mercaldo, P. A. Dang, A. M. McCarthy, K. P. Lowry, and C. D. Lehman, “Breast Cancer Screening With Digital Breast Tomosynthesis: Are Initial Benefits Sustained?,” Radiology 295, no. 3 (2020): 529-539.

[253]

G. Kim, S. Mercaldo, and M. Bahl, “Impact of Digital Breast Tomosynthesis (DBT) on Finding Types Leading to True-positive and False-positive Examinations,” Clinical Imaging 71 (2021): 155-159.

[254]

Y. Peng, S. Wu, G. Yuan, et al., “A Radiomics Method to Classify Microcalcification Clusters in Digital Breast Tomosynthesis,” Medical Physics 47, no. 8 (2020): 3435-3446.

[255]

A. Sakai, Y. Onishi, M. Matsui, et al., “A Method for the Automated Classification of Benign and Malignant Masses on Digital Breast Tomosynthesis Images Using Machine Learning and Radiomic Features,” Radiological Physics and Technology 13, no. 1 (2020): 27-36.

[256]

M. Wels, B. M. Kelm, M. Hammon, A. Jerebko, M. Sühling, and D. Comaniciu, “Data-driven Breast Decompression and Lesion Mapping From Digital Breast Tomosynthesis,” In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2012: 15th International Conference, Nice, France, October 1-5, 2012, Proceedings, Part I 15. Springer; 2012: 438-446.

[257]

M. Chen, S. J. Copley, P. Viola, H. Lu, and E. O. Aboagye, “Radiomics and Artificial Intelligence for Precision Medicine in Lung Cancer Treatment,” Seminars in Cancer Biology. (Elsevier, 2023): 97-113.

[258]

M. Avanzo, L. Wei, J. Stancanello, et al., “Machine and Deep Learning Methods for Radiomics,” Medical Physics 47, no. 5 (2020): e185-e202.

[259]

S. Rizzo, F. Botta, S. Raimondi, et al., “Radiomics: The Facts and the Challenges of Image Analysis,” European Radiology Experimental 2, no. 1 (2018): 36.

[260]

C. Scapicchio, M. Gabelloni, A. Barucci, D. Cioni, L. Saba, and E. Neri, “A Deep Look Into Radiomics,” La Radiologia Medica 126, no. 10 (2021): 1296-1311.

[261]

F. Murtas, V. Landoni, P. Ordonez, et al., “Clinical-radiomic Models Based on Digital Breast Tomosynthesis Images: A Preliminary Investigation of a Predictive Tool for Cancer Diagnosis,” Frontiers in Oncology 13 (2023): 1152158.

[262]

A. S. Tagliafico, F. Valdora, G. Mariscotti, et al., “An Exploratory Radiomics Analysis on Digital Breast Tomosynthesis in Women With Mammographically Negative Dense Breasts,” The Breast 40 (2018): 92-96.

[263]

R. Fusco, P. Vallone, S. Filice, et al., “Radiomic Features Analysis by Digital Breast Tomosynthesis and Contrast-enhanced Dual-energy Mammography to Detect Malignant Breast Lesions,” Biomedical Signal Processing and Control 53 (2019): 101568.

[264]

G. Murtaza, L. Shuib, A. W. Abdul Wahab, et al., “Deep Learning-based Breast Cancer Classification Through Medical Imaging Modalities: State of the Art and Research Challenges,” Artificial Intelligence Review 53 (2020): 1655-1720.

[265]

A. Sarno, G. Mettivier, F. di Franco, et al., “Dataset of Patient-derived Digital Breast Phantoms for in Silico Studies in Breast Computed Tomography, Digital Breast Tomosynthesis, and Digital Mammography,” Medical Physics 48, no. 5 (2021): 2682-2693.

[266]

K. Dembrower, P. Lindholm, and F. Strand, “A Multi-million Mammography Image Dataset and Population-Based Screening Cohort for the Training and Evaluation of Deep Neural Networks-the Cohort of Screen-Aged Women (CSAW),” Journal of Digital Imaging 33, no. 2 (2020): 408-413.

[267]

O. M. Velarde, C. Lin, S. Eskreis-Winkler, and L. C. Parra, “Robustness of Deep Networks for Mammography: Replication across Public Datasets,” Journal of Imaging Informatics in Medicine 37, no. 2 (2024): 536-546.

[268]

M. Muštra and A. Štajduhar, “Segmentation Masks for the Mini-Mammographic Image Analysis Society (mini-MIAS) Database,” IEEE Consumer Electronics Magazine 9, no. 5 (2020): 28-33.

[269]

H. Chougrad, H. Zouaki, and O. Alheyane, “Deep Convolutional Neural Networks for Breast Cancer Screening,” Computer Methods and Programs in Biomedicine 157 (2018): 19-30.

[270]

P. Murty, C. Anuradha, P. A. Naidu, et al., “Integrative Hybrid Deep Learning for Enhanced Breast Cancer Diagnosis: Leveraging the Wisconsin Breast Cancer Database and the CBIS-DDSM Dataset,” Scientific Reports 14, no. 1 (2024): 26287.

[271]

X. Yu, Q. Zhou, S. Wang, and Y. D. Zhang, “A Systematic Survey of Deep Learning in Breast Cancer,” International Journal of Intelligent Systems 37, no. 1 (2022): 152-216.

[272]

P. Oza, U. Oza, R. Oza, et al., “Digital Mammography Dataset for Breast Cancer Diagnosis Research (DMID) With Breast Mass Segmentation Analysis,” Biomedical Engineering Letters 14, no. 2 (2024): 317-330.

[273]

H. T. Nguyen, H. Q. Nguyen, H. H. Pham, et al., “VinDr-Mammo: A Large-scale Benchmark Dataset for Computer-aided Diagnosis in Full-field Digital Mammography,” Scientific Data 10, no. 1 (2023): 277.

[274]

H. Koziolek, S. Grüner, R. Hark, V. Ashiwal, S. Linsbauer, and N. Eskandani, “LLM-based and Retrieval-Augmented Control Code Generation,” In: Proceedings of the 1st International Workshop on Large Language Models for Code. Association for Computing Machinery; 2024: 22-29.

[275]

B. B. Zimmermann, B. Deng, B. Singh, et al., “Multimodal Breast Cancer Imaging Using Coregistered Dynamic Diffuse Optical Tomography and Digital Breast Tomosynthesis,” Journal of Biomedical Optics 22, no. 4 (2017): 46008.

[276]

B. L. Sprague, R. Y. Coley, K. P. Lowry, et al., “Digital Breast Tomosynthesis versus Digital Mammography Screening Performance on Successive Screening Rounds From the Breast Cancer Surveillance Consortium,” Radiology 307, no. 5 (2023): e223142.

[277]

O. Imane, A. Mohamed, R. F. Lazhar, S. Hama, B. Elhadj, and A. Conci, “LAMIS-DMDB: A New Full Field Digital Mammography Database for Breast Cancer AI-CAD Researches,” Biomedical Signal Processing and Control 90 (2024): 105823.

[278]

J. Park, J. Chłędowski, S. Jastrzębski, et al., “An Efficient Deep Neural Network to Classify Large 3D Images With Small Objects,” IEEE Transactions on Medical Imaging 43, no. 1 (2023): 351-365.

[279]

A. C. Pujara, J. Hui, and L. C. Wang, “Architectural Distortion in the Era of Digital Breast Tomosynthesis: Outcomes and Implications for Management,” Clinical Imaging 54 (2019): 133-137.

[280]

O. N. Oyelade, A. E. Ezugwu, M. S. Almutairi, A. K. Saha, L. Abualigah, and H. Chiroma, “A Generative Adversarial Network for Synthetization of Regions of Interest Based on Digital Mammograms,” Scientific Reports 12, no. 1 (2022): 6166.

[281]

J. R. Burt, N. Torosdagli, N. Khosravan, et al., “Deep Learning Beyond Cats and Dogs: Recent Advances in Diagnosing Breast Cancer With Deep Neural Networks,” British Journal of Radiology 91, no. 1089 (2018): 20170545.

[282]

T. Viriyasaranon, J. W. Chun, Y. H. Koh, et al., “Annotation-Efficient Deep Learning Model for Pancreatic Cancer Diagnosis and Classification Using CT Images: A Retrospective Diagnostic Study,” Cancers (Basel) 15, no. 13 (2023): 3392.

[283]

P. Oza, P. Sharma, S. Patel, and P. Kumar, “Computer-aided Breast Cancer Diagnosis: Comparative Analysis of Breast Imaging Modalities and Mammogram Repositories,” Current Medical Imaging Reviews 19, no. 5 (2023): 456-468.

[284]

A. S. Betancourt Tarifa, C. Marrocco, M. Molinara, F. Tortorella, and A. Bria, “Transformer-based Mass Detection in Digital Mammograms,” Journal of Ambient Intelligence and Humanized Computing 14, no. 3 (2023): 2723-2737.

[285]

M. D. Halling-Brown, L. M. Warren, D. Ward, et al., “OPTIMAM Mammography Image Database: A Large-Scale Resource of Mammography Images and Clinical Data,” Radiology: Artificial Intelligence 3, no. 1 (2021): e200103.

[286]

M. M. Najafabadi, F. Villanustre, T. M. Khoshgoftaar, N. Seliya, R. Wald, and E. Muharemagic, “Deep Learning Applications and Challenges in Big Data Analytics,” Journal of Big Data 2 (2015): 1-21.

[287]

W. Lotter, A. R. Diab, B. Haslam, et al., “Robust Breast Cancer Detection in Mammography and Digital Breast Tomosynthesis Using an Annotation-efficient Deep Learning Approach,” Nature Medicine 27, no. 2 (2021): 244-249.

[288]

L. Balkenende, J. Teuwen, and R. M. Mann, “Application of Deep Learning in Breast Cancer Imaging,” Seminars in Nuclear Medicine (Elsevier, 2022): 584-596.

[289]

M. Yousefi, A. Krzyzak, and C. Y. Suen, “Mass Detection in Digital Breast Tomosynthesis Data Using Convolutional Neural Networks and Multiple Instance Learning,” Computers in Biology and Medicine 96 (2018): 283-293.

[290]

D. Petrov, N. Marshall, K. Young, G. Zhang, and H. Bosmans, “Model and Human Observer Reproducibility for Detection of Microcalcification Clusters in Digital Breast Tomosynthesis Images of Three-dimensionally Structured Test Object,” Journal of Medical Imaging (Bellingham) 6, no. 1 (2019): 015503.

[291]

N. Houssami, K. Lång, S. Hofvind, et al., “Effectiveness of Digital Breast Tomosynthesis (3D-mammography) in Population Breast Cancer Screening: A Protocol for a Collaborative Individual Participant Data (IPD) Meta-analysis,” Translational Cancer Research 6, no. 4 (2017): 869-877.

[292]

C. Reis, A. Pascoal, T. Sakellaris, and M. Koutalonis, “Quality Assurance and Quality Control in Mammography: A Review of Available Guidance Worldwide,” Insights Into Imaging 4, no. 5 (2013): 539-553.

[293]

C. Hill and L. Robinson, “Mammography Image Assessment; Validity and Reliability of Current Scheme,” Radiography 21, no. 4 (2015): 304-307.

[294]

E. Goceri, “Medical Image Data Augmentation: Techniques, Comparisons and Interpretations,” Artificial Intelligence Review 56, no. 11 (2023): 1-45.

[295]

Q. Zheng, M. Yang, X. Tian, N. Jiang, and D. Wang, “A Full Stage Data Augmentation Method in Deep Convolutional Neural Network for Natural Image Classification,” Discrete Dynamics in Nature and Society 2020, no. 1 (2020): 4706576.

[296]

L. Taylor and G. Nitschke, “Improving Deep Learning With Generic Data Augmentation,” in 2018 IEEE Symposium Series on Computational Intelligence (SSCI) (IEEE, 2018): 1542-1547.

[297]

A. J. Plompen, O. Cabellos, C. De Saint Jean, et al., “The Joint Evaluated Fission and Fusion Nuclear Data Library, JEFF-3.3,” European Physical Journal A: Hadrons and Nuclei 56 (2020): 1-108.

[298]

L. Garrucho, K. Kushibar, R. Osuala, et al., “High-resolution Synthesis of High-density Breast Mammograms: Application to Improved Fairness in Deep Learning Based Mass Detection,” Frontiers in Oncology 12 (2022): 1044496.

[299]

J. G. Elmore and C. I. Lee, “Data Quality, Data Sharing, and Moving Artificial Intelligence Forward,” JAMA Network Open 4, no. 8 (2021): e2119345.

[300]

G. A. Kaissis, M. R. Makowski, D. Rückert, and R. F. Braren, “Secure, Privacy-preserving and Federated Machine Learning in Medical Imaging,” Nature Machine Intelligence 2, no. 6 (2020): 305-311.

[301]

M. Field, D. I. Thwaites, M. Carolan, et al., “Infrastructure Platform for Privacy-preserving Distributed Machine Learning Development of Computer-assisted Theragnostics in Cancer,” Journal of Biomedical Informatics 134 (2022): 104181.

[302]

F. Cossio, H. Schurz, M. Engstrom, et al., “VAI-B: A Multicenter Platform for the External Validation of Artificial Intelligence Algorithms in Breast Imaging,” Journal of Medical Imaging (Bellingham) 10, no. 6 (2023): 061404.

[303]

X. Liu, L. Xie, Y. Wang, et al., “Privacy and Security Issues in Deep Learning: A Survey,” IEEE Access 9 (2020): 4566-4593.

[304]

Y. Chen, X. Qin, J. Wang, C. Yu, and W. Gao, “FedHealth: A Federated Transfer Learning Framework for Wearable Healthcare,” IEEE Intelligent Systems 35, no. 4 (2020): 83-93.

[305]

G. Dhiman, S. Juneja, H. Mohafez, et al., “Federated Learning Approach to Protect Healthcare Data Over Big Data Scenario,” Sustainability 14, no. 5 (2022): 2500.

[306]

Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated Machine Learning: Concept and Applications,” ACM Transactions on Intelligent Systems and Technology 10, no. 2 (2019): 1-19.

[307]

Y. Chen, J. Li, F. Wang, et al., “DS2PM: A Data-sharing Privacy Protection Model Based on Blockchain and Federated Learning,” IEEE Internet of Things Journal 10, no. 14 (2021): 12112-12125.

[308]

M. J. Sheller, G. A. Reina, B. Edwards, J. Martin, and S. Bakas, “Multi-institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation,” in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Revised Selected Papers, Part I (Springer, 2019): 92-104.

[309]

R. Yan, F. Zhang, X. Rao, et al., “Richer Fusion Network for Breast Cancer Classification Based on Multimodal Data,” BMC Medical Informatics and Decision Making 21, no. Suppl 1 (2021): 134.

[310]

F.-Z. Nakach, A. Idri, and E. Goceri, “A Comprehensive Investigation of Multimodal Deep Learning Fusion Strategies for Breast Cancer Classification,” Artificial Intelligence Review 57, no. 12 (2024): 1-53.

[311]

S. Steyaert, M. Pizurica, D. Nagaraj, et al., “Multimodal Data Fusion for Cancer Biomarker Discovery With Deep Learning,” Nature Machine Intelligence 5, no. 4 (2023): 351-362.

[312]

W. A. Berg, L. Gutierrez, M. S. NessAiver, et al., “Diagnostic Accuracy of Mammography, Clinical Examination, US, and MR Imaging in Preoperative Assessment of Breast Cancer,” Radiology 233, no. 3 (2004): 830-849.

[313]

L. A. Carbonaro, “Clinical Applications for Digital Breast Tomosynthesis,” in A. Tagliafico, N. Houssami, M. Calabrese, eds., Digital Breast Tomosynthesis: A Practical Approach (Springer, 2016): 45-58.

[314]

A. Rodriguez-Ruiz, E. Krupinski, J. J. Mordang, et al., “Detection of Breast Cancer With Mammography: Effect of an Artificial Intelligence Support System,” Radiology 290, no. 2 (2019): 305-314.

[315]

J. Dong, Y. Geng, D. Lu, et al., “Clinical Trials for Artificial Intelligence in Cancer Diagnosis: A Cross-Sectional Study of Registered Trials in ClinicalTrials.Gov,” Frontiers in Oncology 10 (2020): 1629.

[316]

B. Abhisheka, S. K. Biswas, and B. Purkayastha, “A Comprehensive Review on Breast Cancer Detection, Classification and Segmentation Using Deep Learning,” Archives of Computational Methods in Engineering 30, no. 8 (2023): 5023-5052.

[317]

S. Ramesh, S. Sasikala, S. Gomathi, V. Geetha, and V. Anbumani, “Segmentation and Classification of Breast Cancer Using Novel Deep Learning Architecture,” Neural Computing and Applications 34, no. 19 (2022): 16533-16545.

[318]

J. Zhu, J. Geng, W. Shan, et al., “Development and Validation of a Deep Learning Model for Breast Lesion Segmentation and Characterization in Multiparametric MRI,” Frontiers in Oncology 12 (2022): 946580.

[319]

R. Azad, M. Heidari, M. Shariatnia, et al., “TransDeepLab: Convolution-free Transformer-based DeepLab v3+ for Medical Image Segmentation,” International Workshop on PRedictive Intelligence in MEdicine. (Springer, 2022): 91-102.

[320]

H. Hui, X. Zhang, F. Li, X. Mei, and Y. Guo, “A Partitioning-stacking Prediction Fusion Network Based on an Improved Attention U-Net for Stroke Lesion Segmentation,” IEEE Access 8 (2020): 47419-47432.

[321]

W. C. Shia, F. R. Hsu, S. T. Dai, S. L. Guo, and D. R. Chen, “Semantic Segmentation of the Malignant Breast Imaging Reporting and Data System Lexicon on Breast Ultrasound Images by Using DeepLab v3,” Sensors (Basel) 22, no. 14 (2022): 5352.

[322]

T. Alam, W. C. Shia, F. R. Hsu, and T. Hassan, “Improving Breast Cancer Detection and Diagnosis Through Semantic Segmentation Using the Unet3+ Deep Learning Framework,” Biomedicines 11, no. 6 (2023): 1536.

[323]

J. Li, L. Cheng, T. Xia, H. Ni, and J. Li, “Multi-scale Fusion U-net for the Segmentation of Breast Lesions,” IEEE Access 9 (2021): 137125-137139.

[324]

M. Bobowicz, M. Rygusik, J. Buler, et al., “Attention-Based Deep Learning System for Classification of Breast Lesions-Multimodal, Weakly Supervised Approach,” Cancers (Basel) 15, no. 10 (2023): 2704.

[325]

T. Cogan, M. Cogan, and L. Tamil, “RAMS: Remote and Automatic Mammogram Screening,” Computers in Biology and Medicine 107 (2019): 18-29.

[326]

K. Balaji, “Image Augmentation Based on Variational Autoencoder for Breast Tumor Segmentation,” Academic Radiology 30, no. Suppl 2 (2023): S172-S183.

[327]

L. Luo, X. Wang, Y. Lin, et al., “Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions,” IEEE Reviews in Biomedical Engineering 18 (2024): 130-151.

[328]

X. Wang, Z. Li, X. Luo, et al., “Black-box Domain Adaptative Cell Segmentation via Multi-source Distillation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2023): 749-758.

[329]

C. Chen, W. Xie, Y. Wen, Y. Huang, and X. Ding, “Multiple-source Domain Adaptation With Generative Adversarial Nets,” Knowledge-Based Systems 199 (2020): 105962.

[330]

L. Garrucho, K. Kushibar, S. Jouide, O. Diaz, L. Igual, and K. Lekadir, “Domain Generalization in Deep Learning Based Mass Detection in Mammography: A Large-scale Multi-center Study,” Artificial Intelligence in Medicine 132 (2022): 102386.

[331]

G. Kang, L. Jiang, Y. Wei, Y. Yang, and A. Hauptmann, “Contrastive Adaptation Network for Single- and Multi-source Domain Adaptation,” IEEE Transactions on Pattern Analysis and Machine Intelligence 44, no. 4 (2020): 1793-1804.

[332]

K. Li, J. Lu, H. Zuo, and G. Zhang, “Multi-source Contribution Learning for Domain Adaptation,” IEEE Transactions on Neural Networks and Learning Systems 33, no. 10 (2021): 5293-5307.

[333]

Q. Wu, X. Zhou, Y. Yan, H. Wu, and H. Min, “Online Transfer Learning by Leveraging Multiple Source Domains,” Knowledge and Information Systems 52 (2017): 687-707.

[334]

A. S. Morcos, D. G. Barrett, N. C. Rabinowitz, and M. Botvinick, “On the Importance of Single Directions for Generalization,” arXiv preprint arXiv:1803.06959 (2018).

[335]

I. Salehin and D.-K. Kang, “A Review on Dropout Regularization Approaches for Deep Neural Networks Within the Scholarly Domain,” Electronics 12, no. 14 (2023): 3106.

[336]

S. H. Khan, M. Hayat, and F. Porikli, “Regularization of Deep Neural Networks With Spectral Dropout,” Neural Networks 110 (2019): 82-90.

[337]

Y. Ma, Q. Yan, Y. Liu, J. Liu, J. Zhang, and Y. Zhao, “StruNet: Perceptual and Low-rank Regularized Transformer for Medical Image Denoising,” Medical Physics 50, no. 12 (2023): 7654-7669.

[338]

Z. Xiao, Y. Su, Z. Deng, and W. Zhang, “Efficient Combination of CNN and Transformer for Dual-Teacher Uncertainty-guided Semi-supervised Medical Image Segmentation,” Computer Methods and Programs in Biomedicine 226 (2022): 107099.

[339]

S. Aslani, M. Dayan, L. Storelli, et al., “Multi-branch Convolutional Neural Network for Multiple Sclerosis Lesion Segmentation,” Neuroimage 196 (2019): 1-15.

[340]

C. Sendra-Balcells, V. M. Campello, C. Martin-Isla, et al., “Domain Generalization in Deep Learning for Contrast-enhanced Imaging,” Computers in Biology and Medicine 149 (2022): 106052.

[341]

J. Wang, C. Lan, C. Liu, et al., “Generalizing to Unseen Domains: A Survey on Domain Generalization,” IEEE Transactions on Knowledge and Data Engineering 35, no. 8 (2022): 8052-8072.

[342]

A. J. Thirunavukarasu, D. S. J. Ting, K. Elangovan, L. Gutierrez, T. F. Tan, and D. S. W. Ting, “Large Language Models in Medicine,” Nature Medicine 29, no. 8 (2023): 1930-1940.

[343]

E. Kasneci, K. Seßler, S. Küchemann, et al., “ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education,” Learning and Individual Differences 103 (2023): 102274.

[344]

L. Wang, C. Ma, X. Feng, et al., “A Survey on Large Language Model Based Autonomous Agents,” Frontiers of Computer Science 18, no. 6 (2024): 186345.

[345]

Y. Liu, T. Han, S. Ma, et al., “Summary of ChatGPT-Related Research and Perspective Towards the Future of Large Language Models,” Meta-Radiology 1, no. 2 (2023): 100017.

[346]

K. Huang, Y. Qu, H. Cousins, et al., “CRISPR-GPT: An LLM Agent for Automated Design of Gene-editing Experiments,” arXiv preprint arXiv:2404.18021 (2024).

[347]

Q. Jin, Z. Wang, C. S. Floudas, et al., “Matching Patients to Clinical Trials With Large Language Models,” Nature Communications 15, no. 1 (2024): 9074.

[348]

S. Jabbour, D. Fouhey, E. Kazerooni, J. Wiens, and M. W. Sjoding, “Combining Chest X-rays and Electronic Health Record (EHR) Data Using Machine Learning to Diagnose Acute respiratory Failure,” Journal of the American Medical Informatics Association 29, no. 6 (2022): 1060-1068.

[349]

X. Yang, A. Chen, N. PourNejatian, et al., “A Large Language Model for Electronic Health Records,” NPJ Digital Medicine 5, no. 1 (2022): 194.

[350]

M. Wornow, Y. Xu, R. Thapa, et al., “The Shaky Foundations of Large Language Models and Foundation Models for Electronic Health Records,” NPJ Digital Medicine 6, no. 1 (2023): 135.

[351]

M. Guevara, S. Chen, S. Thomas, et al., “Large Language Models to Identify Social Determinants of Health in Electronic Health Records,” NPJ Digital Medicine 7, no. 1 (2024): 6.

[352]

V. Lievin, C. E. Hother, A. G. Motzfeldt, and O. Winther, “Can Large Language Models Reason About Medical Questions?,” Patterns (N Y) 5, no. 3 (2024): 100943.

[353]

H. Huang, O. Zheng, D. Wang, et al., “ChatGPT for Shaping the Future of Dentistry: The Potential of Multi-modal Large Language Model,” International Journal of Oral Science 15, no. 1 (2023): 29.

[354]

Z. Tan, M. Yang, L. Qin, et al., “An Empirical Study and Analysis of Text-to-image Generation Using Large Language Model-powered Textual Representation,” in European Conference on Computer Vision (Springer, 2025): 472-489.

[355]

Y. Guo, W. Qiu, G. Leroy, S. Wang, and T. Cohen, “Retrieval Augmentation of Large Language Models for Lay Language Generation,” Journal of Biomedical Informatics 149 (2024): 104580.

[356]

J. J. Woo, A. J. Yang, R. J. Olsen, et al., “Custom Large Language Models Improve Accuracy: Comparing Retrieval Augmented Generation and Artificial Intelligence Agents to Non-custom Models for Evidence-based Medicine,” Arthroscopy 41, no. 3 (2024): 565-573.e6.

[357]

M. Ryspayeva, M. Molinara, A. Bria, C. Marrocco, and F. Tortorella, “Transfer Learning in Breast Mass Detection on the OMI-DB Dataset: A Preliminary Study,” Pattern Recognition, Computer Vision, and Image Processing ICPR 2022 International Workshops and Challenges. (Springer Nature Switzerland, 2023): 529-538.

[358]

M. Jeong, J. Sohn, M. Sung, and J. Kang, “Improving Medical Reasoning Through Retrieval and Self-reflection With Retrieval-augmented Large Language Models,” Bioinformatics 40, no. Suppl1 (2024): i119-i129.

[359]

A. Cozzi, K. Pinker, A. Hidber, et al., “BI-RADS Category Assignments by GPT-3.5, GPT-4, and Google Bard: A Multilanguage Study,” Radiology 311, no. 1 (2024): e232133.

[360]

V. Sorin, B. S. Glicksberg, Y. Artsi, et al., “Utilizing Large Language Models in Breast Cancer Management: Systematic Review,” Journal of Cancer Research and Clinical Oncology 150, no. 3 (2024): 140.

[361]

A. Rao, J. Kim, M. Kamineni, et al., “Evaluating GPT as an Adjunct for Radiologic Decision Making: GPT-4 versus GPT-3.5 in a Breast Imaging Pilot,” Journal of the American College of Radiology 20, no. 10 (2023): 990-997.

[362]

R. Bhayana, “Chatbots and Large Language Models in Radiology: A Practical Primer for Clinical and Research Applications,” Radiology 310, no. 1 (2024): e232756.

[363]

S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang, and X. Wu, “Unifying Large Language Models and Knowledge Graphs: A Roadmap,” IEEE Transactions on Knowledge and Data Engineering 36, no. 7 (2024): 3580-3599.

[364]

T. Guo, Q. Yang, C. Wang, et al., “KnowledgeNavigator: Leveraging Large Language Models for Enhanced Reasoning Over Knowledge Graph,” Complex & Intelligent Systems 10, no. 5 (2024): 7063-7076.

[365]

Y. Hu, F. Zou, J. Han, X. Sun, and Y. Wang, “LLM-TIKG: Threat Intelligence Knowledge Graph Construction Utilizing Large Language Model,” Computers and Security 145 (2024): 103999.

[366]

A. Fang, P. Lou, J. Hu, et al., “Head and Tail Entity Fusion Model in Medical Knowledge Graph Construction: Case Study for Pituitary Adenoma,” JMIR Medical Informatics 9, no. 7 (2021): e28218.

[367]

Z. Zhang, L. Cao, X. Chen, W. Tang, Z. Xu, and Y. Meng, “Representation Learning of Knowledge Graphs With Entity Attributes,” IEEE Access 8 (2020): 7435-7441.

[368]

X. Huang, J. Tang, Z. Tan, W. Zeng, J. Wang, and X. Zhao, “Knowledge Graph Embedding by Relational and Entity Rotation,” Knowledge-Based Systems 229 (2021): 107310.

[369]

S. M. S. Hasan, D. Rivera, X. C. Wu, E. B. Durbin, J. B. Christian, and G. Tourassi, “Knowledge Graph-Enabled Cancer Data Analytics,” IEEE Journal of Biomedical and Health Informatics 24, no. 7 (2020): 1952-1967.

[370]

X. Cao and Y. Liu, “ReLMKG: Reasoning With Pre-trained Language Models and Knowledge Graphs for Complex Question Answering,” Applied Intelligence 53, no. 10 (2023): 12032-12046.

[371]

X. Li, A. Henriksson, M. Duneld, J. Nouri, and Y. Wu, “Evaluating Embeddings From Pre-trained Language Models and Knowledge Graphs for Educational Content Recommendation,” Future Internet 16, no. 1 (2023): 12.

[372]

X. Li, S. Sun, T. Tang, et al., “Construction of a Knowledge Graph for Breast Cancer Diagnosis Based on Chinese Electronic Medical Records: Development and Usability Study,” BMC Medical Informatics and Decision Making 23, no. 1 (2023): 210.

[373]

C. Zhang and X. Cao, “Biological Gene Extraction Path Based on Knowledge Graph and Natural Language Processing,” Frontiers in Genetics 13 (2022): 1086379.

[374]

C. Wang, Y. Chen, F. Liu, et al., “An Interpretable and Accurate Deep-learning Diagnosis Framework Modelled With Fully and Semi-supervised Reciprocal Learning,” IEEE Transactions on Medical Imaging 43, no. 1 (2023): 392-404.

[375]

S. T. Kim, H. Lee, H. G. Kim, and Y. M. Ro, “ICADx: Interpretable Computer Aided Diagnosis of Breast Masses,” in Medical Imaging 2018: Computer-Aided Diagnosis (SPIE, 2018): 450-459.

[376]

S. T. Kim, J. H. Lee, H. Lee, and Y. M. Ro, “Visually Interpretable Deep Network for Diagnosis of Breast Masses on Mammograms,” Physics in Medicine and Biology 63, no. 23 (2018): 235025.

[377]

D. Castelvecchi, “Can We Open the Black Box of AI?,” Nature 538, no. 7623 (2016): 20-23.

[378]

R. Geirhos, J.-H. Jacobsen, C. Michaelis, et al., “Shortcut Learning in Deep Neural Networks,” Nature Machine Intelligence 2, no. 11 (2020): 665-673.

[379]

K. Freeman, J. Geppert, C. Stinton, et al., “Use of Artificial Intelligence for Image Analysis in Breast Cancer Screening Programmes: Systematic Review of Test Accuracy,” BMJ 374 (2021): n1872.

[380]

T. Ching, D. S. Himmelstein, B. K. Beaulieu-Jones, et al., “Opportunities and Obstacles for Deep Learning in Biology and Medicine,” Journal of the Royal Society, Interface 15, no. 141 (2018): 20170387.

[381]

W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, and K. R. Müller, “Preface,” in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (Springer Nature, 2019): v-vii.

[382]

V. Beaudouin, I. Bloch, D. Bounie, et al., “Flexible and Context-specific AI Explainability: A Multidisciplinary Approach,” arXiv preprint arXiv:2003.07703 (2020).

[383]

S. Mohseni, N. Zarei, and E. D. Ragan, “A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems,” ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (2021): 1-45.

[384]

L. Farah, J. M. Murris, I. Borget, A. Guilloux, N. M. Martelli, and S. I. M. Katsahian, “Assessment of Performance, Interpretability, and Explainability in Artificial Intelligence-Based Health Technologies: What Healthcare Stakeholders Need to Know,” Mayo Clinic Proceedings: Digital Health 1, no. 2 (2023): 120-138.

[385]

S. Chakraborty, R. Tomsett, R. Raghavendra, et al., “Interpretability of Deep Learning Models: A Survey of Results,” in 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (IEEE, 2017): 1-6.

[386]

Y. Lu, T. Chen, N. Hao, C. Van Rechem, J. Chen, and T. Fu, “Uncertainty Quantification and Interpretability for Clinical Trial Approval Prediction,” Health Data Science 4 (2024): 0126.

[387]

H. Panwar, P. K. Gupta, M. K. Siddiqui, R. Morales-Menendez, P. Bhardwaj, and V. Singh, “A Deep Learning and Grad-CAM Based Color Visualization Approach for Fast Detection of COVID-19 Cases Using Chest X-ray and CT-Scan Images,” Chaos, Solitons & Fractals 140 (2020): 110190.

[388]

Y. Zhong, Y. Piao, and G. Zhang, “Multi-view Fusion-based Local-global Dynamic Pyramid Convolutional Cross-Transformer Network for Density Classification in Mammography,” Physics in Medicine and Biology 68, no. 22 (2023): 225012.

[389]

Y. Gal and Z. Ghahramani, “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning,” in International Conference on Machine Learning (PMLR, 2016): 1050-1059.

[390]

S. Walia, K. Kumar, S. Agarwal, and H. Kim, “Using XAI for Deep Learning-Based Image Manipulation Detection With Shapley Additive Explanation,” Symmetry 14, no. 8 (2022): 1611.

[391]

P. Guleria, P. N. Srinivasu, and M. Hassaballah, “Diabetes Prediction Using Shapley Additive Explanations and DSaaS Over Machine Learning Classifiers: A Novel Healthcare Paradigm,” Multimedia Tools and Applications 83, no. 14 (2024): 40677-40712.

[392]

J. Li, Y. Zhang, S. He, and Y. Tang, “Interpretable Mortality Prediction Model for ICU Patients With Pneumonia: Using Shapley Additive Explanation Method,” BMC Pulmonary Medicine 24, no. 1 (2024): 447.

[393]

O. O. Oladimeji, H. Ayaz, I. McLoughlin, and S. Unnikrishnan, “Mutual Information-based Radiomic Feature Selection With SHAP Explainability for Breast Cancer Diagnosis,” Results in Engineering 24 (2024): 103071.

[394]

S. Shen, S. X. Han, D. R. Aberle, A. A. Bui, and W. Hsu, “An Interpretable Deep Hierarchical Semantic Convolutional Neural Network for Lung Nodule Malignancy Classification,” Expert Systems with Applications 128 (2019): 84-95.

[395]

A. A. Verma, J. Murray, R. Greiner, et al., “Implementing Machine Learning in Medicine,” CMAJ 193, no. 34 (2021): E1351-E1357.

[396]

H. Bosmans and N. Marshall, “Radiation Doses and Risks Associated With Mammographic Screening,” Current Radiology Reports 1, no. 1 (2013): 30-38.

RIGHTS & PERMISSIONS

2025 The Author(s). MedComm published by Sichuan International Medical Exchange & Promotion Association (SCIMEA) and John Wiley & Sons Australia, Ltd.
