Survey on deep learning for pulmonary medical imaging

Jiechao Ma, Yang Song, Xi Tian, Yiting Hua, Rongguo Zhang, Jianlin Wu

Front. Med. 2020, 14(4): 450-469. DOI: 10.1007/s11684-019-0726-4
REVIEW


Abstract

As a promising method in artificial intelligence, deep learning has proven successful in domains ranging from acoustics and images to natural language processing. With medical imaging becoming an increasingly important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques in medical image analysis. In these approaches, feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. Deep learning has been widely applied in medical imaging for improved image analysis. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics include classification, detection, and segmentation tasks on pulmonary medical images, together with the relevant datasets and benchmarks. A comprehensive overview of these methods as applied to various lung diseases, including pulmonary nodule diseases, pulmonary embolism, pneumonia, and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical images and an analysis of their future challenges and potential directions are discussed.

Keywords

deep learning / neural networks / pulmonary medical image / survey

Cite this article

Jiechao Ma, Yang Song, Xi Tian, Yiting Hua, Rongguo Zhang, Jianlin Wu. Survey on deep learning for pulmonary medical imaging. Front. Med., 2020, 14(4): 450-469. DOI: 10.1007/s11684-019-0726-4


Introduction

Deep learning covers a set of artificial intelligence methods that use many interconnected units to fulfill complex tasks. Deep learning algorithms can automatically learn representations from large amounts of data rather than relying on a set of pre-programmed instructions [1–3]. Radiology is a natural application field for deep learning because it relies mainly on extracting useful information from images, and research in this field has developed rapidly [4]. With the aggravation of air pollution and the increasing number of smokers, respiratory diseases have become a serious threat to people’s lives and health [5]. However, many of the early clinical manifestations of respiratory diseases are not evident, and some patients do not even feel any discomfort during the early stage. Hence, many patients miss the critical period of early treatment because their clinical symptoms only appear at a later time. Therefore, public awareness of early detection and treatment is necessary for the prevention and treatment of lung diseases. In medical imaging, the accurate diagnosis and evaluation of diseases depend on image acquisition and interpretation. Computer-aided diagnosis (CAD) of medical images has been developed for the early discovery and analysis of patients’ symptoms based on medical images. However, this approach has long been rooted in a limited set of features identified from physicians’ past experience. Moreover, most studies rely on medical image features that must be extracted manually, which requires special clinical experience and a deep understanding of the data [6,7]. With the rapid development of computer vision and medical imaging, computer-aided analysis can assist medical workers in diagnosis, for example, by enhancing diagnostic capability, identifying the necessary treatments, supporting their workflow, and reducing the workload of capturing specific features from medical images.

The deep learning algorithm was first applied to medical image processing in 1993, when a neural network was used for the detection of pulmonary nodules [8]. In 1995, deep learning was applied to breast tissue detection [9]. The region of interest (ROI), containing tumors and normal tissues confirmed via biopsy, was extracted from mammograms and fed into a model comprising one input layer, two hidden layers, and one output layer trained with back propagation. At that time, the typical convolutional neural network (CNN) image-processing architecture had not been widely accepted by scholars because of its high demand for computing power and data and its poor interpretability. Most doctors prefer clear interpretations, which are important in the medical field but cannot be provided by deep learning to support clinical decisions.

Much earlier work focused on manual feature extraction from images, such as features that represent straight edges (e.g., for organ detection) or contour information (e.g., for a circular object such as a colon polyp) [10]. Alternatively, key points (feature points) are identified across different scales, and their orientations are calculated [11,12]. Radiomics [13] is also used to study higher-order features, such as local and global shape and texture.

With the explosive growth of data and the emergence of graphics processing units with ever-increasing computing power, deep neural networks have made considerable progress. In general, medical image processing can be divided into image classification, detection, segmentation, registration, and other related tasks. For image classification, each image must be assigned to specific categories. The algorithm determines the related objects in the image, including the type of disease (e.g., pneumothorax and bullae), and generates a list of possible object categories in descending order of confidence [14–17]. Image object detection is a further step beyond image classification: the algorithm gives the category of the detected object (e.g., a pulmonary nodule appearing in a CT slice) and marks the corresponding boundary of the object on the image [18–20]. Semantic segmentation is a basic task in computer vision in which the visual input is divided into different semantically interpretable categories; for example, all the pixels belonging to a lesion in the image may need to be distinguished [21,22].
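For the detection task described above, the agreement between a predicted boundary and the annotated one is commonly measured by the intersection-over-union (IoU) of their bounding boxes. The following is an illustrative sketch, not tied to any specific method surveyed here:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is then typically counted as correct when its IoU with a ground-truth box exceeds a fixed threshold, such as 0.5.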

Medical image processing mainly aims to detect possible lesions and tumors because of their remarkable effects on follow-up diagnosis and treatment. Although tumor detection and analysis have been widely studied, many obstacles must be overcome before future application. For pulmonary nodules [23,24], challenges arise from within-class variability in lesion appearance. First, the shape, size, and density of the same type of lesion may vary, and the appearance of the lesion might differ (Fig. 1A, solid pulmonary nodules; Fig. 1B, ground glass pulmonary nodules, in green box). Second, many different structures (normal tissues and pulmonary nodules) show the same texture features (Fig. 1C, pleural nodules, in green box, and many blood vessels in the center of the image). Third, the quality of image acquisition, affected by changes of posture, blur, and occlusion, must be considered (Fig. 1D). Fourth, different surrounding environments make different types of lung nodules appear diverse. Lastly, unbalanced data often pose a challenge in designing effective models from limited data.

Several dedicated surveys have addressed the application of deep learning in medical image analysis [11,25]. These surveys include a large number of related works and cover references in almost all fields of medical imaging analysis using deep learning. In the present study, we focus on a comprehensive review of pulmonary medical image analysis. A previous review [26] centered on the detection and classification of pulmonary nodules using CNNs; however, additional attention should be given to segmentation tasks across diverse pulmonary diseases. The present review aims to present the history of deep learning and its recent applications in medical imaging, especially in pulmonary diseases, and to summarize the specific tasks in these applications.

This paper is organized as follows. In section “Overview of deep learning,” we present a brief introduction to deep learning techniques and the origins of deep learning in neuroscience. The specific application areas of deep learning are presented in section “Deep learning in pulmonary medical image.” Detailed reviews of the datasets and performance evaluation are presented in section “Datasets and performance.” Finally, our discussion is provided in sections “Challenges and future trends” and “Conclusions.”

Overview of deep learning

In this section, we introduce the origin of modern deep learning algorithms and several deep learning network structures commonly used in medical image processing. We then briefly introduce some concepts used in these networks (Table 1).

Historical perspective on networks

The history of neural networks can be traced back to the 1940s, when the field was called cybernetics [27–29]. Simple bionic models such as the perceptron, a single trainable unit, emerged with the development of biomedical theory [30]. In 1962, Hubel and Wiesel proposed the concept of the receptive field by studying cat visual cortical cells and found that the animal visual nervous system recognizes objects in layers [21,32]. A simple structure of visual processing was then established: when an object passes through a visual area, the brain constructs complex visual information from its edges. Although this model helps in understanding the functions of the brain, it was not designed to be an actual computational model. In 1971, based on receptive field studies, Blakemore [33] proposed that the receptive field of visual cells is acquired rather than innate.

In 1980, the Japanese scholar Fukushima proposed the neocognitron based on the concept of the receptive field. It can be regarded as the first implemented CNN and the first application of the concept of the receptive field in the field of artificial neural networks [34]. The model was inspired by the animal visual system, namely, the hierarchical structure of vision proposed by Hubel. In 1984, Fukushima proposed an improved neocognitron with double C-layers [35]. By the 1980s, the connectionist approach had been introduced [36]. Rumelhart proposed the famous back-propagation (BP) algorithm, which can train a neural network with two hidden layers by transmitting errors in reverse to update the weights of the network [37]. Some ideas of this algorithm have been applied in the structure of deep networks. The emergence of the BP algorithm made multi-layer perceptrons trainable and solved the XOR problem [38], which cannot be solved by a single perceptron [39,40]. In 1989, Robert Hecht-Nielsen reported that any continuous function on a closed interval can be approximated by a three-layer network with one hidden layer, indicating that the multilayer perceptron has the so-called “universal approximation” capability [41].
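The ability of back-propagation to solve the XOR problem can be demonstrated with a minimal sketch: a small multilayer perceptron trained with plain BP on the four XOR cases. The layer sizes, learning rate, and iteration count below are illustrative choices, not taken from the cited works:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(epochs=20000, lr=0.5, seed=0):
    """Train a 2-8-1 multilayer perceptron on XOR with plain back-propagation."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        # Forward pass through hidden and output layers
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        losses.append(float(np.mean((out - y) ** 2)))
        # Backward pass: propagate the error from the output to the hidden layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)
    return losses, out
```

After training, the mean squared error has decreased from its initial value, which a single perceptron cannot achieve on XOR because the classes are not linearly separable.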

Modern architectures

The simple learning algorithms mentioned in the previous section greatly accelerated the development of neural networks. In 1989, the emergence of CNNs drew attention to the generalization error of networks [43,44,49]. In 1990, a convolutional network proposed to identify handwritten digits marked the great success of CNNs in the field of computer vision and image processing [44,50]. LeNet [44] was proposed by LeCun in 1998 to solve the problem of handwritten digit recognition. Subsequently, the basic architecture of the CNN was defined; it comprises a convolution layer, a pooling layer, and a fully connected layer.
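The three building blocks that define this basic CNN architecture can be illustrated with a minimal NumPy forward pass. This is a didactic sketch of the operations themselves, not an implementation of LeNet:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as used in CNN layers)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

def dense(x, weights, bias):
    """Fully connected layer applied to the flattened feature map."""
    return x.ravel() @ weights + bias
```

Stacking these operations (with nonlinearities between them) yields the convolution-pooling-classification pattern that LeNet established.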

With the rapid development of computer technology, the computational bottlenecks of neural networks have been continuously overcome, promoting their rapid development over the last two decades. Neural networks continued to achieve impressive performance on certain tasks, but these network frameworks remained difficult to train [51,52]. It was not until 2006, when the deep belief network was proposed, that the difficulty of training deep neural networks was gradually overcome [45]. In this network, Hinton used a greedy algorithm for layer-by-layer training, which effectively solved the above problems. Afterward, the same strategy was used for training many other types of deep networks [53].

Since 1980, the recognition and prediction capabilities of deep neural networks have improved, and their application fields have become more extensive. With the increasing amount of data, the scale of neural network models has also expanded tremendously. Among these models, AlexNet can be considered the landmark architecture [46]; this model was proposed by Krizhevsky for the ImageNet competition [54] in 2012. Prior to AlexNet, only a single object in a very small image could be identified owing to computational limitations. After 2012, the size of the image that a neural network can process has gradually increased, and an image of nearly any size may be used as input. In the 2012 ImageNet image classification competition, AlexNet reduced the top-5 error rate from 26.1% to only 15.3%, roughly 10 percentage points lower than the previous year’s champion and far ahead of the second-placed team. Subsequently, many scholars re-examined deep neural networks, and deep convolutional networks have repeatedly won this competition. VGG-Net [47] and GoogLeNet [16] were proposed in 2014, and the 2015 champion ResNet [17] used far more layers than AlexNet; as networks deepen, their representational power generally improves. Densely connected networks [48] have also achieved good results in other fields.

Deep learning in medical image analysis

The aforementioned networks are mainly used for common image classification tasks. An overview of network structures used for classification, detection, and segmentation tasks from the perspective of medical image analysis is described below.

Classification

The classification task determines which class an image belongs to. Depending on the task, the classes can be binary (e.g., normal and abnormal) or multiple (e.g., nodule classification in chest CT). Many medical studies use CNNs to analyze medical images and stage disease severity in different organs [55]. Some authors investigated the combination of local information and global contextual information and designed architectures for image analysis at different scales [56]. Other work focused on the use of 3D CNNs for enhanced classification performance [57].

Detection

Object detection in medical image analysis refers to the localization of various ROIs, such as lung nodules, and is often an important pre-processing step for segmentation tasks. The detection task in most medical image analysis requires the processing of 3D images. To utilize deep learning algorithms for 3D data, Yang et al. processed 3D MRI images as 2D slice sequences with a typical CNN [58]. de Vos et al. localized the 3D bounding boxes of organs by analyzing 2D slices from 3D CT volumes [59]. To reduce the complexity of 3D images, Zheng et al. decomposed a 3D convolution into three 1D convolutions for the detection of carotid artery bifurcation [60].

Segmentation

Segmentation of meaningful parts, such as organs, substructures, and lesions, provides a reliable basis for subsequent medical image analysis (Fig. 2). U-net [61], published by Ronneberger et al., has become the most well-known CNN architecture for medical image segmentation from very few images. U-net is based on fully convolutional networks (FCNs) [62]. An FCN performs classification at the pixel level: it uses deconvolution layers to upsample the feature map of the last convolution layer, restoring it to the size of the input image so that a prediction can be generated for each pixel while preserving the spatial information of the original input. Pixel-by-pixel classification is then performed on the upsampled feature map to complete the final image segmentation. In U-net, skip connections link the downsampling layers to the upsampling layers, so that features extracted during downsampling are passed directly to the upsampling path. This design allows U-net to use the full context of the image and produce a segmentation map in an end-to-end way. After U-net was proposed, many researchers adopted and improved the U-net structure for medical image segmentation. Cicek et al. designed 3D U-net [63] for 3D image sequences, and Milletari et al. proposed a 3D variant of the U-net architecture, namely V-net [64], which uses the Dice coefficient as its loss function.
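The Dice coefficient that V-net adopts for its loss can be sketched as follows for binary masks. Real implementations typically operate on soft probability maps and minimize 1 - Dice; this version is for illustration only:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between a binary prediction mask and a ground-truth mask.

    Dice = 2|P ∩ T| / (|P| + |T|); a small eps keeps the ratio defined
    when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Unlike per-pixel cross-entropy, this overlap measure is insensitive to the large number of background voxels, which is why it suits the heavily imbalanced masks typical of lesion segmentation.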

Deep learning in pulmonary medical image

In the analysis of thoracic pulmonary nodules, the automatic extraction of nodule texture has always been a key issue for traditional algorithms. In the past decades, manually designing descriptors of the texture and morphology of pulmonary nodules has been the conventional way of building such algorithms.

This section presents an overview of the contribution of deep learning to various application areas in pulmonary medical imaging.

Pulmonary nodule

Lung cancer is one of the most severe cancer types [65]. This disease can often be treated successfully if pulmonary nodules are detected early and diagnosed correctly. With the help of modern CAD systems, radiologists can detect more pulmonary nodules in much less time [66–69]. The detection, segmentation, and classification of pulmonary nodules are the main functions of modern CAD systems and belong to computer vision, which has advanced greatly with CNNs (Table 2).

Pulmonary nodule classification

For the classification of pulmonary nodules, most studies focus on how a computer-aided detection system can provide radiologists with image manifestations, such as the type of the nodule (benign or malignant), for the early diagnosis of lung cancer and provide advice for the diagnosis. Fig. 3 provides an illustration of the commonly used classification networks.

Owing to the self-learning and generalization ability of deep CNNs, they have been applied to classifying the type of pulmonary nodules. A specific nodule image network structure has been proposed to solve three types of nodule recognition problems, namely, solid, semi-solid, and ground glass opacity (GGO) [70]. Netto et al. [71] studied the separation of pulmonary nodule-like structures from other structures, such as blood vessels and bronchi; the structures are then divided into nodules and non-nodules based on shape and texture measurements with a support vector machine. Pei et al. [72] used a 2D multi-scale filter and a geometrically constrained region growing method to divide candidates into nodules and non-nodules.

For benign and malignant classification, Suzuki et al. [73,74] developed methods to differentiate benign and malignant nodules in low-dose CT scans. Causey et al. [75] proposed a method called NoduleX for the prediction of lung nodule malignancy from CT scans. Zhao et al. [76] proposed an agile CNN model to overcome the challenges of small-scale medical datasets and small nodules. Considering the limited chest CT data, Xie et al. [77] used a transfer learning algorithm to separate benign and malignant pulmonary nodules. Shen et al. [78] presented a multi-crop CNN (MC-CNN) that automatically extracts salient nodule information for the investigation of lung nodule malignancy suspiciousness. Liu et al. [79,80] proposed a multi-task model to explore the relatedness between lung nodule classification and attribute scores. Many researchers have used 3D CNNs to predict the malignancy of pulmonary nodules and achieved high AUC scores [81,82]. Some researchers attempted to make the predictions interpretable by using multitask joint learning [83,84].
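The AUC scores reported in these malignancy-prediction studies equal the probability that a randomly chosen malignant nodule is scored above a randomly chosen benign one. A minimal sketch of this rank-based computation (illustrative; production code would use an optimized library routine):

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney) formulation.

    `labels` holds 1 for positives (e.g., malignant) and 0 for negatives;
    ties between a positive and a negative score count as half a win.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every malignant nodule outranks every benign one, while 0.5 corresponds to random scoring.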

Pulmonary nodule detection

The diagnosis of pulmonary nodules is a special detection task. Considering that one pulmonary nodule can span multiple CT slices, most existing pulmonary nodule detection methods are based on 3D or 2.5D CNNs. The general detection process, including the training and testing phases, is illustrated in Fig. 4. A high-performance pulmonary nodule detection system must have high sensitivity and precision. Hence, many researchers have focused on two-stage networks, which use one network for nodule candidate detection and another for false positive reduction (Fig. 5). Ding et al. [85] proposed a deconvolutional structure for the faster region-based CNN (faster R-CNN) for candidate detection, with a 3D DCNN for false positive reduction. 3D roto-translation group convolutions (G-Convs) were introduced in the false positive reduction network for improved efficiency and performance [86]. A 3D faster R-CNN with a U-net-like encoder-decoder structure for candidate detection and a gradient boosting machine with a 3D dual path network (DPN) for false positive reduction have also been designed [87]. Tang et al. [88] used online hard negative mining in the first stage and combined the predictions of both stages via consensus. Tang et al. [89] then proposed an end-to-end method for training the candidate detection and false-positive reduction networks together, resulting in improved performance. In pulmonary nodule detection, sample imbalance is a severe problem. Two-stage networks use the first stage for choosing positive and hard negative samples, thus providing the second stage with a balanced ratio between positive and negative samples. A single-stage model combining ResNet [17] and the feature pyramid network has also been proposed [90]; this model mitigates sample imbalance via a patch-based sampling strategy. Another one-stage network based on SSD has been introduced [91]; it uses group convolution and an attention network to extract abstract features and balances the samples with hard negative sample mining. Liu et al. [94] evaluated the influence of radiation dose, patient age, and CT manufacturer on the performance of deep learning applied to nodule detection.
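The hard-negative-mining idea used by these detectors to address sample imbalance can be sketched as follows. The function name and candidate format are illustrative, not taken from the cited systems:

```python
def mine_hard_negatives(candidates, neg_pos_ratio=3):
    """Select all positives plus the highest-scoring false positives.

    `candidates` is a list of (score, is_positive) pairs. Negatives with
    high detection scores are the 'hard' mistakes that the false-positive
    reduction stage benefits most from seeing during training.
    """
    positives = [c for c in candidates if c[1]]
    negatives = sorted((c for c in candidates if not c[1]),
                       key=lambda c: c[0], reverse=True)
    n_keep = max(1, neg_pos_ratio * len(positives))
    return positives + negatives[:n_keep]
```

Keeping negatives at a fixed ratio to positives (3:1 here, an illustrative choice) prevents the abundant easy background samples from dominating the loss.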

Pulmonary nodule segmentation

After pulmonary nodules are detected, their segmentation is also important for measuring nodule size, with malignancy prediction as the final target. The U-net architecture and unsupervised learning are widely adopted in segmentation tasks. Considering that segmentation labels are difficult to obtain, a weakly supervised method that generates accurate voxel-level nodule segmentation has been proposed [102]; this method only needs image-level classification labels. Messay et al. [103] trained a nodule segmentation model by using weakly labeled data without dense voxel-level annotations.

Pulmonary embolism (PE)

PE is a highly lethal condition that occurs when an artery in the lung becomes partially or completely blocked. It arises when a thrombus, usually formed in the legs or sometimes other parts of the body, travels to the lungs and obstructs the central, lobar, segmental, or sub-segmental pulmonary arteries, depending on the size of the embolus. The mortality rate, however, can be decreased to 2%–11% if measures are taken in a timely and correct manner. Although PE is not always fatal, it is the third most threatening disease, with at least 650 000 cases occurring annually [104].

CT pulmonary angiography (CTPA) is the primary means of PE diagnosis, wherein a radiologist carefully traces each branch of the pulmonary artery for any suspected PEs. However, a CTPA study generally consists of hundreds of images, each representing one slice of the lung, and differentiating PEs with high clinical accuracy is time-consuming and difficult. The diagnosis of PE is a complicated task because many factors may result in wrong diagnoses, such as high false-positive rates. For instance, respiratory motion, flow-related, streak, partial volume, and stair-step artifacts, lymph nodes, and vascular bifurcations can all affect the diagnosis. Thus, computer-aided detection (CAD) is an important tool for helping radiologists detect and diagnose PE accurately and for decreasing the reading time of CTPA (Table 3).

Rucco et al. introduced an integrative approach based on Q-analysis with machine learning [95]. The new approach, called the neural hypernetwork, was applied in a case study of PE diagnosis involving 28 diagnostic features of 1427 people considered to be at risk of PE and obtained a satisfying recognition rate of 94%. This study involves a structure-based analysis algorithm. For CTPA image classification, the CAD of PE typically consists of the following four stages: (1) extraction of a volume of interest (VOI) from the original dataset via lung [105–107] or vessel segmentation [108,109]; (2) generation of a set of PE candidates within the VOI using algorithms such as tobogganing [110]; (3) extraction of hand-crafted features from each PE candidate [111,112]; and (4) computation of a confidence score for each candidate by using a rule-based classifier, a neural network, a nearest-neighbor classifier [106,108,113], or a multi-instance classifier [110]. Bi et al. [96] proposed a new classification method for the automatic detection of PE. Unlike almost all existing methods, which require vascular segmentation to restrict the PE search space, this method is based on a toboggan candidate generator, which can quickly and effectively retrieve suspicious areas of the whole lung, and provides an effective solution to the learning problem posed by multiple positive examples per embolus. The detection sensitivity on 177 clinical cases was 81%.
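The staged structure of such CAD systems (VOI extraction, candidate generation, and candidate scoring) can be sketched as a composition of functions. The stub logic below is purely illustrative and merely stands in for the segmentation, tobogganing, and classification algorithms cited above:

```python
def extract_voi(ct_volume):
    """Stage 1 (stub): restrict the search to a volume of interest,
    standing in for a lung or vessel segmentation step."""
    return [v for v in ct_volume if v["in_lung"]]

def generate_candidates(voi, intensity_threshold=100):
    """Stage 2 (stub): flag suspicious low-attenuation voxels, standing
    in for a toboggan-style candidate generator."""
    return [v for v in voi if v["intensity"] < intensity_threshold]

def score_candidates(candidates):
    """Stage 3 (stub): assign each candidate a confidence score, standing
    in for a rule-based, nearest-neighbor, or neural-network classifier."""
    return sorted(({"loc": c["loc"], "score": 1.0 - c["intensity"] / 100.0}
                   for c in candidates),
                  key=lambda c: c["score"], reverse=True)

def pe_cad_pipeline(ct_volume):
    """Compose the stages into a ranked list of suspected PE findings."""
    return score_candidates(generate_candidates(extract_voi(ct_volume)))
```

The pipeline pattern matters more than the stubs: each stage narrows the search space so that the expensive classifier only examines a small candidate set.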

Nowadays, neural network methods have attracted much attention in PE recognition [114–116]. Scott et al. proved that radiologists can improve their interpretations in PE diagnosis by incorporating computer output in formulating diagnostic predictions [117]. Agharezaei et al. used an artificial neural network (ANN) for the prediction of the risk level of PE [97]. Serpen et al. confirmed that knowledge-based hybrid learning algorithms can be configured to provide better performance than purely empirical machine learning algorithms in automatic classification tasks associated with medical diagnoses such as PE; considerable expertise in the PE domain is incorporated, and the hybrid classifier readily utilizes knowledge from both illustration and experience learning [98]. Tsai et al. proposed multiple active contour models, combined with a tree hierarchy, to obtain the regional lung and vascular distribution; in the last step of their system, a Gabor neural network (GNN) was used to determine the location of the thrombus. This novel method used the GNN for recognizing PE, but the accuracy and precision of the results were not good [99]. Tajbakhsh et al. investigated a unique PE representation, coupled with CNNs, to increase the accuracy of PE CAD systems for PE CT classification [100]. To eliminate false-positive detections in PE recognition, the possibility of implementing a neural network as an effective tool for validating CTPA datasets has been investigated [118]; it improved the accuracy of PE recognition to 83%. The vessel-aligned multi-planar image representation used there has three advantages that can improve PE accuracy: first, it is efficient, because it briefly summarizes the 3D context information near the blockage in two image channels; second, it is consistent, which facilitates training the CNN; and third, it is expandable, because it naturally supports data augmentation for training the CNN. In addition, Chen et al. evaluated the performance of a deep learning CNN model against a traditional natural language processing (NLP) model in extracting PE information from thoracic CT reports from two institutions and showed that the CNN model can classify radiology free-text reports with an accuracy equivalent to or exceeding that of an existing traditional NLP model [101].

Pneumonia

Pneumonia is one of the main causes of death among children. Unfortunately, in rural areas of developing countries, the infrastructure and medical expertise needed for its timely diagnosis are lacking, and early diagnosis is essential for treatment. Chest X-ray examination is therefore one of the most commonly used radiological examinations for the screening and diagnosis of many lung diseases. However, the diagnosis of pneumonia in children by using X-rays is a very difficult task, because discriminating among pneumonia image types currently relies mainly on the experience of doctors, and specialized departments and personnel from hospitals are required for making judgments. This set-up is laborious, and considering that the images of some pneumonia types are very similar, doctors can easily make mistakes, causing misdiagnoses (Table 4).

Pneumonia usually manifests as one or more opaque areas on the chest radiograph (CXR) [126]. However, the diagnosis of pneumonia on CXR is complicated by many other conditions in the lungs, such as fluid overload (pulmonary edema), bleeding, volume loss (atelectasis or collapse), lung cancer, and post-radiation or surgical changes. Generally, medical images are viewed, and a rough estimation of the observed tissue is made to distinguish whether the tissue is normal. In recent decades, the identification of pneumonia has developed rapidly through computer-assisted technology. Recent techniques emphasize deep learning, but many methods based on traditional image pattern recognition are also available, such as template matching and learning methods based on statistical models. Siemens used a template matching algorithm [127] to identify the type of pneumonia; in this work, images were converted from the spatial domain to the frequency domain via the Fourier transform, and the target features in the frequency domain were used to classify the pneumonia type. However, the algorithm is computationally intensive, and its accuracy is low. A conventional image processing method based on a statistical model can also be used; it generally extracts features manually and then uses a classifier for identification. In general, the modeling is based on the color, texture, shape, and spatial relationships of the image. The algorithms commonly used for texture features are the local binary pattern (LBP) and the histogram of oriented gradients (HOG) [128,129].
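The LBP descriptor mentioned above encodes each pixel by thresholding its eight neighbors against the center pixel. A minimal sketch for interior pixels (the basic single-radius variant; library implementations also offer rotation-invariant and multi-radius versions):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor local binary pattern for the interior pixels.

    Each neighbor that is >= the center pixel contributes one bit; the
    resulting 8-bit codes describe local texture, and their histogram
    is commonly used as a feature vector for classification.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    # Offsets of the 8 neighbors, enumerated clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes
```

Because the codes depend only on the ordering of intensities, LBP features are robust to the monotonic gray-level shifts common across radiographic acquisitions.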

However, features extracted by hand cannot accurately distinguish the types of pneumonia. As a recently developed method for automatic feature extraction, deep learning has been applied to medical image analysis. The network avoids complex pre-processing of the images and can take images directly as input to obtain recognition results. CNNs can autonomously learn the essential characteristics of different pneumonia types through their convolution kernels. Abdullah et al. [120] proposed a detection method for pneumonia symptoms by using CNNs based on differences in gray-scale color and the segmentation of normal and suspicious lung regions. Correa et al. [121,130] introduced a method for the automatic diagnosis of pneumonia from pulmonary ultrasound imaging. Different from Refs. [121] and [130], Cisnerosvelarde et al. [122] applied pneumonia detection to ultrasound videos rather than ultrasound images. To develop an automated diagnostic method for medical images, 40 simulated chest CXRs of normal and pneumonia patients were studied [123]; for the detection of pneumonia clouds, the healthy part of the lungs was isolated from the area of pneumonia infection. An algorithm for clipping and extracting lung regions from the images was also developed and compiled with CUDA [131–133] for improved computational performance [124].

The scarcity of data and the dependence on labeled data in applying deep learning to medical imaging have been analyzed. Wang et al. [125] aimed to build a large-scale, high-accuracy CAD system amid increasing academic interest in large-scale medical image databases. The authors extracted report contents and tags from the hospital's picture archiving and communication system (PACS) with natural language processing (NLP) and constructed a hospital-scale chest X-ray database.
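
The report-mining idea can be sketched with a toy keyword matcher with naive negation handling. The finding list and 40-character negation window below are hypothetical simplifications; the actual pipeline in [125] used far more sophisticated NLP:

```python
import re

# Hypothetical subset of thoracic findings; illustrative only.
FINDINGS = ["pneumonia", "effusion", "nodule", "atelectasis"]
NEGATIONS = ["no ", "without ", "free of ", "negative for "]

def extract_labels(report):
    """Mine positive finding labels from a free-text radiology report.

    A finding is labeled positive if it is mentioned and the 40 characters of
    text before the mention contain no negation cue. Real systems handle
    sentence boundaries, uncertainty, and synonyms far more carefully.
    """
    text = report.lower()
    labels = set()
    for finding in FINDINGS:
        for m in re.finditer(finding, text):
            window = text[max(0, m.start() - 40):m.start()]
            if not any(neg in window for neg in NEGATIONS):
                labels.add(finding)
    return sorted(labels)
```

For instance, "Right lower lobe pneumonia. No pleural effusion." yields the single positive label "pneumonia", because the effusion mention is preceded by a negation cue.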

Tuberculosis

Pulmonary tuberculosis is a chronic infectious disease mainly transmitted through the respiratory tract [35]. Its risk is influenced by individual factors such as age and genetics, personal behaviors such as smoking, and air pollution. Its pathogen is Mycobacterium tuberculosis, which can invade the body and cause hematogenous dissemination. At present, the diagnosis of tuberculosis mainly depends on historical records, symptoms and signs, imaging, and sputum examination for Mycobacterium tuberculosis. Chest X-ray examination is an important method for diagnosing tuberculosis: it can detect early mild tuberculosis lesions and help judge the nature of the lesions.

The success of CXR-based screening depends on the reading radiologist; a CAD system can relieve this bottleneck and accelerate active case detection (Table 5). In recent years, great progress has been made in deep learning, which enables the classification of heterogeneous images [134,135]. CNNs are popular for their ability to learn intermediate- and high-level image representations, and various CNN models have been used to classify CXRs for tuberculosis [136]. Lakhani and Sundaram [137] used deep learning with CNNs and accurately classified tuberculosis from CXRs with an area under the curve of 0.99. Melendez et al. [138] evaluated a deep learning framework on a database containing 392 patient records with suspected TB. Melendez et al. [139–141] proposed a weakly labeled approach for TB detection, studying an alternative pattern classification method, multi-instance learning, which does not require detailed annotations to train a CAD system; they applied it to a CAD system designed to detect textural lesions associated with tuberculosis. Then, to avoid having to use additional clinical information when screening for tuberculosis, a combination framework based on machine learning was proposed [141,145,146]. Zheng et al. [142] studied the performance of known deep convolutional network (DCN) structures under different abnormal conditions; compared with deep features, the shallow features from early layers consistently provided higher detection accuracy. These techniques have been applied to tuberculosis detection on different datasets and achieved high accuracy. For classifying abnormalities in CXRs, a cascade of a CNN and a recurrent neural network (RNN) has been employed on the Indiana chest X-ray dataset [140], although the accuracy was not compared with previous results. A binary classifier scheme of normal versus abnormal has also been attempted [143].
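
The core idea of the multi-instance learning approach mentioned above can be sketched in a few lines: a whole image is treated as a "bag" of patch "instances", and the bag is abnormal if any instance is abnormal, so only image-level labels are needed. The max-pooling rule below illustrates the concept and is not Melendez's exact formulation:

```python
import numpy as np

def bag_probability(instance_probs):
    """Multi-instance pooling: the bag (whole CXR) score is the maximum of
    its instance (patch) scores, since one abnormal patch suffices to make
    the image abnormal."""
    return float(np.max(instance_probs))

def classify_bags(bags, threshold=0.5):
    """Classify a list of bags, each given as an array of patch-level
    abnormality probabilities."""
    return [bag_probability(b) >= threshold for b in bags]
```

During training, the gradient from the image-level label flows only through the highest-scoring patch, which is how such systems localize lesions without patch-level annotations.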

Interstitial lung disease (ILD)

ILD is a group of heterogeneous non-neoplastic and non-infectious lung diseases with alveolar inflammation and interstitial fibrosis as the basic pathological changes. The disease is also called diffuse parenchymal lung disease (DPLD). ILD involves several abnormal imaging patterns observed in CT images, and the accurate classification of these patterns plays an important role in the clinical judgment of the extent and nature of the disease (Table 6). Therefore, the development of an automatic computer-aided detection system for the lung is important.

Anthimopoulos et al. [55] proposed and evaluated a CNN for the classification of ILD patterns. The method used an ROI texture classification scheme to generate an ILD quantization map of the whole lung by sliding a fixed-scale classifier over the pre-segmented lung field; the quantified results then fed the final diagnosis of the CAD system. Simonyan and Zisserman [147] developed a CNN framework to classify lung tissue patterns into classes such as normal, reticulation, GGO, and honeycombing. Li et al. [148] used an unsupervised algorithm with feature extractors of different sizes to capture image features at different scales and achieved a good classification accuracy of 84%. Then, Li et al. [149] designed a customized CNN with a shallow convolution layer to classify ILD images. Gao et al. [150] proposed two variations of multi-label deep CNNs to recognize the potential co-occurrence of multiple ILD patterns on an input lung CT slice. Christodoulidis et al. [151] applied knowledge-transfer algorithms to ILD classification, demonstrating the potential of transfer learning in medical image analysis and exploiting the structural nature of the problem; the training method of a network proved as important as the design of its architecture. By rescaling the original Hounsfield-unit CT images to three different ranges (one focusing on low attenuation, one on high attenuation, and one on the normal range), three 2D images were produced as network input; Gao et al. [152] found that the three attenuation ranges provided better visibility, or visual separation, across all six ILD disease categories.
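
The three-window rescaling step can be sketched as follows. The specific Hounsfield-unit bounds below are illustrative assumptions, not the exact ranges used in [152]:

```python
import numpy as np

# Illustrative attenuation windows in Hounsfield units (HU); the exact
# ranges used in the cited work may differ.
WINDOWS = {
    "low":    (-1400, -950),   # low-attenuation patterns, e.g., emphysema-like
    "normal": (-1400, 200),    # full lung window
    "high":   (-160, 240),     # high-attenuation patterns, e.g., consolidation
}

def hu_to_three_channel(ct_slice):
    """Rescale one CT slice (in HU) into three attenuation windows and stack
    them as a 3-channel image, matching the RGB input of a standard CNN."""
    channels = []
    for lo, hi in WINDOWS.values():
        ch = np.clip(ct_slice, lo, hi)
        channels.append((ch - lo) / (hi - lo))  # scale each window to [0, 1]
    return np.stack(channels, axis=-1)
```

Besides improving visual separation between disease categories, this mapping conveniently produces a 3-channel input, so networks pretrained on natural RGB images can be reused without architectural changes.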

Others

For other pulmonary diseases, including common conditions such as pneumothorax, bullae, and emphysema, deep learning models have many applications that greatly improve etiological diagnosis. Cengil et al. [153] used deep learning for the classification of cancer types. A semi-supervised deep learning algorithm was proposed to automatically classify patients' lung sounds [154,155] (for the two most common adventitious sounds, wheezes and crackles), making progress in automatic lung sound recognition and classification. Aykanat et al. [156] proposed and implemented a U-net convolutional network for biomedical image segmentation, mainly to separate lung regions from other tissues in CT images. To facilitate the detection and classification of lung nodules, Tan et al. [157] used a CAD system based on transfer learning (TL) and improved the accuracy of lung disease diagnosis in bronchoscopy. For chronic obstructive pulmonary disease (COPD) [158], long short-term memory (LSTM) units were used to represent disease progression, and a specially configured RNN captured irregular time lapses, improving both the interpretability of the model and the accuracy of estimating COPD progression. Campo et al. [159] used X-rays instead of CT scans to quantify emphysema.
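
The recurrent unit underlying such progression models is the LSTM cell. A single step of a standard LSTM can be written in NumPy as below; the gate layout and weight shapes are the textbook formulation, not the specially configured variant of [158]:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM cell step.

    x: input vector (X,); h, c: previous hidden and cell states (H,).
    W: (4H, X), U: (4H, H), b: (4H,) hold the stacked input, forget, output,
    and candidate-gate parameters.
    """
    z = W @ x + U @ h + b                 # stacked pre-activations, length 4H
    H = h.shape[0]
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:4 * H])           # candidate cell update
    c_new = f * c + i * g                 # forget old memory, write new
    h_new = o * np.tanh(c_new)            # gated output becomes hidden state
    return h_new, c_new
```

The persistent cell state `c` is what lets the model carry information across visits of varying spacing, which is why LSTMs suit irregular longitudinal clinical data.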

Datasets and performance

Pulmonary nodule datasets

LIDC-IDRI

The Lung Image Database Consortium image collection (LIDC-IDRI) [160] consists of chest medical image files (such as CT and X-ray) with the corresponding pathological markers of the diagnostic results (Table 7). The data were collected by the National Cancer Institute to study early cancer detection in high-risk populations. The dataset contains 1018 research cases, and the nodule diameters range from 3 mm to 30 mm. For each scan, four experienced thoracic radiologists carried out two-stage diagnostic labeling. In the first stage, each radiologist independently examined each CT scan and marked lesions as one of three types ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the second stage, each radiologist independently reviewed his or her own marks together with the anonymized marks of the three other radiologists to provide final annotations. This procedure aims to identify all pulmonary nodules in each CT scan as completely as possible without forcing consensus. A brief comparison of methods evaluated on LIDC-IDRI follows. Armato et al. [161] believed that better results can be obtained by combining geometric texture with the histogram of oriented gradients, using reduced HOG-PCA features to create a hybrid feature vector for each candidate nodule. Huidrom et al. [162] used a nonlinear algorithm to classify 3D nodule candidate boxes; the algorithm combines a genetic algorithm (GA) with particle swarm optimization (PSO) to exploit the learning ability of the multi-layer perceptron, and it was compared with existing linear discriminant analysis (LDA) and convolutional neural network methods. Shaukat et al. [163] presented a marker-controlled watershed technique that used intensity, shape, and texture features for the detection of lung nodules. Zhang et al. [164] used 3D skeletonization features based on prior anatomical knowledge to determine lung nodules. Naqi et al. [165] constructed hybrid feature vectors from traditional hand-crafted HOG features and CNN features to find candidate nodules. Refs. [166–168] reported algorithms that achieved better results that year, whereas the deep learning methods of [71,169–172] for lung nodule detection did not show promising results.
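
A hybrid feature vector of the kind used above is simply the concatenation of a hand-crafted descriptor with a CNN embedding; normalizing each part first keeps one descriptor from dominating the other. This sketch is a generic illustration, not the exact construction of [161] or [165]:

```python
import numpy as np

def hybrid_feature(hog_vec, cnn_vec):
    """Concatenate an L2-normalized hand-crafted descriptor (e.g., HOG) with
    an L2-normalized CNN feature vector into one hybrid candidate feature."""
    def l2(v):
        v = np.asarray(v, dtype=float)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(hog_vec), l2(cnn_vec)])
```

The resulting fixed-length vector can then be fed to any conventional classifier (SVM, random forest, MLP) for candidate-nodule screening.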

LUNA16

The LUNA16 dataset [160,176] is a subset of LIDC-IDRI, which includes 1018 low-dose lung CT images; LUNA16 excludes CT images with slices thicker than 3 mm and pulmonary nodules smaller than 3 mm. The database is very heterogeneous: it was collected clinically from seven different academic institutions, covers standard-dose and low-dose CT scans, and spans a wide range of scanner models and acquisition parameters. The final list contains 888 scans. Dou et al. [175] employed 3D CNNs for false positive reduction in automated pulmonary nodule detection from volumetric CT scans. Setio et al. [57] used multi-view convolutional networks (ConvNets) to extract features and then combined them with a dedicated fusion method to obtain the final classification. Other teams [159,161,177] also achieved relatively good results.
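
A typical preprocessing step for such 3D false-positive-reduction stages is cropping a cubic patch around each candidate coordinate. The patch size and zero-padding policy below are illustrative choices, not those of any specific cited system:

```python
import numpy as np

def extract_patch(volume, center, size=32):
    """Crop a cubic patch around a candidate center (z, y, x) from a CT
    volume, zero-padding at the borders, as input to a 3D CNN classifier."""
    half = size // 2
    # Pad the whole volume so border candidates still yield full-size patches.
    padded = np.pad(volume, half, mode="constant")
    z, y, x = (c + half for c in center)  # shift indices into padded volume
    return padded[z - half:z - half + size,
                  y - half:y - half + size,
                  x - half:x - half + size]
```

Each candidate produced by the detection stage is cropped this way and scored by the 3D CNN, and low-scoring patches are discarded as false positives.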

Pneumonia datasets

Chest X-ray images

The dataset released by the National Institutes of Health includes 112 120 frontal-view X-ray images of 30 805 unique patients [178]. Fourteen chest pathology labels were mined from the associated radiology reports using NLP. For the pneumonia detection task, images labeled with pneumonia were treated as positive examples, and all other images served as negatives. The database contains more than 100 000 frontal X-ray views (about 42 GB) covering 14 lung diseases (atelectasis, consolidation, infiltration, pneumothorax, edema, emphysema, fibrosis, effusion, pneumonia, pleural thickening, cardiomegaly, nodule, mass, and hernia). Labels 1–14 correspond to the 14 diseases, and label 15 denotes the absence of all 14. The accuracy of the tags in this database exceeds 90%. Wang et al. [125] proposed a weakly supervised multi-label image classification and disease localization framework and achieved an F1 score of 0.633. Yao et al. [179] used LSTMs to leverage interdependencies among target labels in predicting the 14 pathologic patterns and obtained an F1 score of 0.713. Rajpurkar et al. [178] improved the result to 0.768 with a 121-layer densely connected convolutional network (DenseNet).
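
The F1 scores compared above are the harmonic mean of precision and recall on binary predictions, computed per label in the multi-label setting. A minimal implementation:

```python
def f1_score(y_true, y_pred):
    """Binary F1 = 2 * precision * recall / (precision + recall), the metric
    used to compare the chest X-ray classifiers above. Inputs are parallel
    sequences of 0/1 labels and 0/1 predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Because F1 ignores true negatives, it is better suited than raw accuracy to datasets like this one, where most images are negative for any given pathology.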

Tuberculosis datasets

Shenzhen Hospital X-ray

Shenzhen Hospital X-ray [180] is a dataset collected at Shenzhen Third People's Hospital, Guangdong Medical University, Shenzhen, China. Chest X-rays were captured as part of the daily hospital routine over one month, mostly in September 2012. The dataset contains 626 frontal chest X-rays, of which 326 are normal and 336 show symptoms of TB. All data are provided in PNG format; image sizes vary but are approximately 3000 × 3000 pixels.

Montgomery County X-ray

The Montgomery County X-ray dataset [180] consists of 138 frontal chest X-rays from the TB screening program of the Department of Health and Human Services, Montgomery County, Maryland, USA. Of these, 80 patients were normal and 58 had imaging symptoms of tuberculosis. All images were captured with a conventional computed radiography (CR) machine and stored as 12-bit gray-level images in portable network graphics (PNG) format; DICOM versions are available on request. The X-ray sizes are 4020 × 4892 or 4892 × 4020 pixels. Deep learning methods were tested for tuberculosis detection on this dataset and the Shenzhen dataset [136], achieving accuracies above 80%, a performance comparable to radiologists.

Interstitial lung disease datasets

Geneva database

The Geneva database was collected at the University Hospitals of Geneva, Geneva, Switzerland. It consists of chest CT scans of 1266 patients acquired between 2003 and 2008. Based on the EHR information, only cases with HRCT (without contrast agent, 1 mm slice thickness) were included. To date, more than 700 cases have been reviewed, and 128 affected by one of 13 histological ILD diagnoses have been stored in the database. The database is available for research on request after signature of a license agreement. Anthimopoulos et al. [173] and Gangeh et al. [174] improved the quantitative measurement of ILD based on the Geneva database.

Challenges and future trends

Despite the successes of deep learning technology, many limitations and challenges remain from the medical and clinical perspectives. Deep learning generally requires a large amount of annotated data, which is a major challenge for medical images: labeling them requires expert knowledge, such as the domain knowledge of radiologists, so annotating sufficient medical images is labor- and time-consuming. Although annotation is difficult, the amount of unlabeled medical images is vast, because they have long been stored in PACS. If unlabeled images could be exploited by deep learning techniques, considerable annotation time and effort would be saved.

Another challenge is the interpretability of deep learning [181]. Deep learning methods are often treated as black boxes whose performance, and failures, are hard to interpret. Demand for investigating these techniques is increasing to pave the way for the clinical application of deep learning in medical image analysis. From a legal perspective, the widespread application of deep learning in medicine will also require transparency and interpretability.

Our future work will further analyze the problem of image semantic segmentation based on deep networks and summarize and address the shortcomings in current research. Against the background of deep learning research in medical imaging, this paper puts forward several potential or under-studied future directions. (1) Neural networks classify well on independent and identically distributed test sets, but adversarially perturbed examples that are barely distinguishable to the eye can drastically change a network's output. Adversarial approaches [147] have therefore been proposed to improve the robustness and perceptual quality of models on medical images. (2) Common machine learning methods include supervised and unsupervised learning. Current research is mostly based on supervised algorithms, but supervised learning requires humans to label the data used for network training, which heavily consumes medical experts' time; senior experts rarely have time to label training data at the required order of magnitude. Unsupervised learning may thus be a promising research direction for medical image processing.
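
The adversarial-example phenomenon in point (1) can be demonstrated on even the simplest model. The fast-gradient-sign step below, applied here to a hypothetical logistic classifier with made-up weights, nudges the input in the direction that most increases the loss; it illustrates the general mechanism, not any cited medical-imaging attack:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.01):
    """Fast-gradient-sign perturbation of an input to a logistic classifier:
    take a small step along the sign of the cross-entropy gradient w.r.t. x,
    which pushes the prediction away from the true label y."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # current predicted prob
    grad_x = (p - y) * w                            # d(loss)/dx for logistic
    return x + eps * np.sign(grad_x)
```

Even a tiny `eps`, visually negligible per pixel, moves the score of a correctly classified input toward the decision boundary, which is why robustness to such perturbations matters before clinical deployment.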

Conclusions

Medical image processing based on deep learning is a hot and challenging subject at the intersection of medicine and computer science. This paper summarizes research along the following lines. First, the recently popular DNN frameworks were introduced, and the origins of neural networks were traced and discussed in detail. In addition, for the current deep network frameworks, the classical models widely applied to medical images were introduced.

In the third part of this paper, the application of neural networks to various lung diseases was introduced. For each disease task, this paper describes the current research status of deep neural networks in medical imaging, analyzes and summarizes the development of the frameworks, and examines in detail the models that have achieved good results in these fields, laying a research foundation for later researchers.

In the fourth part, various algorithmic models evaluated on datasets such as LIDC-IDRI and LUNA16 were introduced in detail. In addition, commonly used datasets for other diseases were briefly introduced so that others can carry out relevant experiments.

References

[1]

LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521(7553): 436–444

[2]

Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw 2015; 61: 85–117

[3]

Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18(8): 500–510

[4]

Camarlinghi N. Automatic detection of lung nodules in computed tomography images: training and validation of algorithms using public research databases. Eur Phys J Plus 2013; 128(9): 110

[5]

Siegel RL, Miller KD, Jemal A. Cancer Statistics, 2017. CA Cancer J Clin 2017; 67(1): 7–30

[6]

AbuBaker AA, Qahwaji RS, Aqel MJ, Saleh M. Average row thresholding method for mammogram segmentation. Conf Proc IEEE Eng Med Biol Soc 2005; 3: 3288–3291

[7]

Haider W, Sharif M, Raza M. Achieving accuracy in early stage tumor identification systems based on image segmentation and 3D structure analysis. Comput Eng Intell Syst 2011; 2(6): 96–102

[8]

Lo SCB, Lin JS, Freedman MT, et al. Computer-assisted diagnosis of lung nodule detection using artificial convolution neural network[C]//Medical Imaging 1993: Image Processing. International Society for Optics and Photonics. 1993. 1898: 859–869

[9]

Sahiner B, Chan HP, Petrick N, Wei D, Helvie MA, Adler DD, Goodsitt MM. Classification of mass and normal breast tissue: a convolution neural network classifier with spatial domain and texture images. IEEE Trans Med Imaging 1996; 15(5): 598–610

[10]

Wang X, Han T X, Yan S. An HOG-LBP human detector with partial occlusion handling[C]//2009 IEEE 12th international conference on computer vision. IEEE. 2009. 32–39

[11]

Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42: 60–88

[12]

Zhou H, Yuan Y, Shi C. Object tracking using sift features and mean shift. Comput Vis Image Underst 2009; 113(3): 345–352

[13]

Mori K, Hahn HK. Computer-aided diagnosis[C]//Proc SPIE 2019. 10950: 1095001

[14]

Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L. ImageNet large scale visual recognition challenge. Int J Comput Vis 2015; 115(3): 211–252 (IJCV)

[15]

Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks[C]//Advances in neural information processing systems. 2012. 1097–1105

[16]

Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. 1–9

[17]

He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. 770–778

[18]

Ren S, He K, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[C]//Advances in neural information processing systems. 2015. 91–99

[19]

Girshick R. Fast r-cnn[C]//Proceedings of the IEEE international conference on computer vision. 2015. 1440–1448

[20]

Liu W, Anguelov D, Erhan D, et al. SSD: single shot multibox detector[C]//European conference on computer vision. Springer, Cham. 2016. 21–37

[21]

Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. 3431–3440

[22]

Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation[C]//International Conference on Medical image computing and computer-assisted intervention. Springer, Cham. 2015. 234–241

[23]

Lederlin M, Revel MP, Khalil A, Ferretti G, Milleron B, Laurent F. Management strategy of pulmonary nodule in 2013. Diagn Interv Imaging 2013; 94(11): 1081–1094

[24]

Ozekes S, Osman O, Ucan ON. Nodule detection in a lung region that’s segmented with using genetic cellular neural networks and 3D template matching with fuzzy rule based thresholding. Korean J Radiol 2008; 9(1): 1–9

[25]

Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng 2017; 19(1): 221–248

[26]

Monkam P, Qi S, Ma H, Gao W, Yao Y, Qian W. Detection and classification of pulmonary nodules using convolutional neural networks: a survey. IEEE Access 2019; 7: 78075–78091

[27]

Wall B, Hart D. Revised radiation doses for typical X-ray examinations. Report on a recent review of doses to patients from medical X-ray examinations in the UK by NRPB. National Radiological Protection Board. Br J Radiol 1997; 70(833): 437–439

[28]

Ashby WR. An introduction to cybernetics. Chapman & Hall Ltd, 1961

[29]

Wiener N. Cybernetics. Bull Am Acad Arts Sci 1950; 3(7): 2–4

[30]

Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev 1958; 65(6): 386–408

[31]

Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol 1962; 160(1): 106–154

[32]

Rodieck RW, Stone J. Analysis of receptive fields of cat retinal ganglion cells. J Neurophysiol 1965; 28(5): 833–849

[33]

Blakemore C. The working brain. Nature 1972; 239(5373): 473

[34]

Fukushima K. Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern 1980; 36(4): 193–202

[35]

Fukushima K, Hirota M, Terasaki PI, Wakisaka A, Togashi H, Chia D, Suyama N, Fukushi Y, Nudelman E, Hakomori S. Characterization of sialosylated Lewisx as a new tumor-associated antigen. Cancer Res 1984; 44(11): 5279–5285

[36]

Fukushima K, Miyake S, Ito T. Neocognitron: a neural network model for a mechanism of visual pattern recognition. IEEE Trans Syst Man Cybern 1983 (5): 826–834

[37]

Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature 1988; 323(6088): 696–699

[38]

Nitta T. Solving the XOR problem and the detection of symmetry using a single complex-valued neuron. Neural Netw 2003; 16(8): 1101–1105

[39]

Pineda FJ. Generalization of back-propagation to recurrent neural networks. Phys Rev Lett 1987; 59(19): 2229–2232

[40]

Wigner EP. The problem of measurement. Am J Phys 1963; 31(1): 6–15

[41]

Hecht-Nielsen R. Theory of the backpropagation neural network. Neural Netw 1988; 1: 445–448

[42]

Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature 1988; 323(6088): 696–699

[43]

LeCun Y. Generalization and network design strategies. Connectionism in perspective. Amsterdam: Elsevier, 1989. Vol. 19

[44]

LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998; 86(11): 2278–2324

[45]

Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput 2006; 18(7): 1527–1554

[46]

Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. 2012. 1097–1105

[47]

Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint. 2014. arXiv: 1409.1556

[48]

Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. 4700–4708

[49]

Krogh A, Hertz JA. Dynamics of generalization in linear perceptrons. In: Advances in Neural Information Processing Systems. 1991. 897–903

[50]

LeCun Y, Boser BE, Denker JS, Henderson D, Howard RE, Hubbard WE, Jackel LD. Handwritten digit recognition with a back-propagation network. In: Advances in neural information processing systems. 1990. 396–404

[51]

Haskell BG, Howard PG, LeCun YA, Puri A, Ostermann J, Civanlar MR, Rabiner L, Bottou L, Haffner P. Image and video coding-emerging standards and beyond. IEEE Trans Circ Syst Video Tech 1998; 8(7): 814–837

[52]

Hochreiter S, Bengio Y, Frasconi P, Schmidhuber J. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. 2001

[53]

Pedrazzi M, Patrone M, Passalacqua M, Ranzato E, Colamassaro D, Sparatore B, Pontremoli S, Melloni E. Selective proinflammatory activation of astrocytes by high-mobility group box 1 protein signaling. J Immunol 2007; 179(12): 8525–8532

[54]

Deng J, Dong W, Socher R, Li LJ, Li K, Li FF. Imagenet: a large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. 2009. 248–255

[55]

Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imaging 2016; 35(5): 1207–1216

[56]

Kawahara J, BenTaieb A, Hamarneh G. Deep features to classify skin lesions. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). 2016. 1397–1400

[57]

Setio AAA, Ciompi F, Litjens G, Gerke P, Jacobs C, van Riel SJ, Wille MMW, Naqibullah M, Sanchez CI, van Ginneken B. Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Trans Med Imaging 2016; 35(5): 1160–1169

[58]

Yang D, Zhang S, Yan Z, Tan C, Li K, Metaxas D. Automated anatomical landmark detection ondistal femur surface using convolutional neural network. In: Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on. 2015. 17–21

[59]

de Vos BD, Wolterink JM, de Jong PA, Viergever MA, Išgum I. 2D image classification for 3D anatomy localization: employing deep convolutional neural networks. In: Medical Imaging 2016: Image Processing. 2016. vol. 9784, p. 97841Y

[60]

Zheng Y, Liu D, Georgescu B, Nguyen H, Comaniciu D. 3D deep learning for efficient and robust landmark detection in volumetric data. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2015. 565–572

[61]

Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. Springer. 2015. 234–241

[62]

Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. 3431–3440

[63]

Cicek O, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2016. 424–432

[64]

Milletari F, Navab N, Ahmadi SA. V-net: fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV). 565–571

[65]

Siegel RL, Miller KD, Jemal A. Cancer statistics, 2018. CA Cancer J Clin 2018; 68(1): 7–30

[66]

Awai K, Murao K, Ozawa A, Komi M, Hayakawa H, Hori S, Nishimura Y. Pulmonary nodules at chest CT: effect of computer-aided diagnosis on radiologists’ detection performance. Radiology 2004; 230(2): 347–352

[67]

Ciompi F, Chung K, van Riel SJ, Setio AAA, Gerke PK, Jacobs C, Scholten ET, Schaefer-Prokop C, Wille MMW, Marchianò A, Pastorino U, Prokop M, van Ginneken B. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci Rep 2017; 7(1): 46479

[68]

Liu S, Xie Y, Jirapatnakul A, Reeves AP. Pulmonary nodule classification in lung cancer screening with three-dimensional convolutional neural networks. J Med Imaging (Bellingham) 2017; 4(4): 041308

[69]

Hua KL, Hsu CH, Hidayati SC, Cheng WH, Chen YJ. Computer-aided classification of lung nodules on computed tomography images via deep learning technique. Onco Targets Ther 2015; 8: 2015–2022

[70]

Li W, Cao P, Zhao D, Wang J. Pulmonary nodule classification with deep convolutional neural networks on computed tomography images. Comput Math Methods Med 2016; 2016: 6215085

[71]

Magalhães Barros Netto S, Corrêa Silva A, Acatauassú Nunes R, Gattass M. Automatic segmentation of lung nodules with growing neural gas and support vector machine. Comput Biol Med 2012; 42(11): 1110–1121

[72]

Pei X, Guo H, Dai J. Computerized detection of lung nodules in CT images by use of multiscale filters and geometrical constraint region growing[C]//2010 4th International Conference on Bioinformatics and Biomedical Engineering. IEEE. 2010: 1–4

[73]

Suzuki K, Li F, Sone S, Doi K. Computer-aided diagnostic scheme for distinction between benign and malignant nodules in thoracic low-dose CT by use of massive training artificial neural network. IEEE Trans Med Imaging 2005; 24(9): 1138–1150

[74]

Suzuki K, Doi K. Computerized scheme for distinction between benign and malignant nodules in thoracic low-dose CT: U.S. Patent Application 11/181,884[P]. 2006-1-26

[75]

Causey JL, Zhang J, Ma S, Jiang B, Qualls JA, Politte DG, Prior F, Zhang S, Huang X. Highly accurate model for prediction of lung nodule malignancy with CT scans. Sci Rep 2018; 8(1): 9286

[76]

Zhao X, Liu L, Qi S, Teng Y, Li J, Qian W. Agile convolutional neural network for pulmonary nodule classification using CT images. Int J CARS 2018; 13(4): 585–595

[77]

Xie Y, Xia Y, Zhang J, et al. Transferable multi-model ensemble for benign-malignant lung nodule classification on chest CT[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2017. 656–664

[78]

Shen W, Zhou M, Yang F, Yu D, Dong D, Yang C, Zang Y, Tian J. Multi-crop convolutional neural networks for lung nodule malignancy suspiciousness classification. Pattern Recognit 2017; 61: 663–673

[79]

Liu L, Dou Q, Chen H, Olatunji IE, Qin J, Heng PA. Mtmr-net: Multi-task deep learning with margin ranking loss for lung nodule analysis. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer. 2018. 74–82

[80]

Heng PA. Mtmr-net: Multi-task deep learning with margin ranking loss for lung nodule analysis. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings. 2018. vol. 11045, p. 74

[81]

Liao F, Liang M, Li Z, Hu X, Song S. Evaluate the malignancy of pulmonary nodules using the 3D deep leaky noisy-or network. IEEE Trans Neural Netw Learn Syst 2019; 1–12

[82]

Ardila D, Kiraly AP, Bharadwaj S, Choi B, Reicher JJ, Peng L, Tse D, Etemadi M, Ye W, Corrado G, Naidich DP, Shetty S. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 2019; 25(6): 954–961

[83]

Wu B, Zhou Z, Wang J, Wang Y. Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). 2018. 1109–1113

[84]

Shen S, Han SX, Aberle DR, Bui AAT, Hsu W. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification. Expert Syst Appl 2019; 128: 84–95

[85]

Ding J, Li A, Hu Z, Wang L. Accurate pulmonary nodule detection in computed tomography images using deep convolutional neural networks. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017. 2017. Springer. 559–567

[86]

Winkels M, Cohen TS. 3D G-CNNs for pulmonary nodule detection. arXiv preprint. 2018. arXiv:1804.04656

[87]

Zhu W, Liu C, Fan W, Xie X. DeepLung: 3D deep convolutional nets for automated pulmonary nodule detection and classification. arXiv preprint. 2017. arXiv:1709.05538

[88]

Tang H, Kim DR, Xie X. Automated pulmonary nodule detection using 3D deep convolutional neural networks. International Symposium on Biomedical Imaging. 2018. 523–526

[89]

Tang H, Liu XW, Xie XH. An end-to-end framework for integrated pulmonary nodule detection and false positive reduction. arXiv preprint. 2019. arXiv:1903.09880

[90]

Xie Z. Towards single-phase single-stage detection of pulmonary nodules in chest CT imaging. arXiv preprint. 2018. arXiv:1807.05972

[91]

Ma JC, et al. Group-Attention Single-Shot Detector (GA-SSD): finding pulmonary nodules in large-scale CT images. arXiv preprint. 2018. arXiv:1812.07166

[92]

Feng X, Yang J, Laine AF, Angelini ED. Discriminative localization in CNNs for weakly-supervised segmentation of pulmonary nodules. Medical Image Computing and Computer Assisted Intervention. 2017. 568–576

[93]

Messay T, Hardie RC, Tuinstra TR. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset. Med Image Anal 2015; 22(1): 48–62

[94]

Liu K, Li Q, Ma J, Zhou Z, Sun M, Deng Y, Xiao Y. Evaluating a fully automated pulmonary nodule detection approach and its impact on radiologist performance. Radiol Artif Intell 2019; 1(3): e180084

[95]

Rucco M, Sousa-Rodrigues D, Merelli E, Johnson JH, Falsetti L, Nitti C, Salvi A. Neural hypernetwork approach for pulmonary embolism diagnosis. BMC Res Notes 2015; 8(1): 617

[96]

Bi J, Liang J. Multiple instance learning of pulmonary embolism detection with geodesic distance along vascular structure. 2007 IEEE Conference on Computer Vision and Pattern Recognition. 2007. 1–8

[97]

Agharezaei L, Agharezaei Z, Nemati A, Bahaadinbeigy K, Keynia F, Baneshi MR, Iranpour A, Agharezaei M. The prediction of the risk level of pulmonary embolism and deep vein thrombosis through artificial neural network. Acta Inform Med 2016; 24(5): 354–359

[98]

Serpen G, Tekkedil DK, Orra M. A knowledge-based artificial neural network classifier for pulmonary embolism diagnosis. Comput Biol Med 2008; 38(2): 204–220

[99]

Tsai H, Chin C, Cheng Y. Intelligent pulmonary embolism detection system. Biomed Eng (Singapore) 2012; 24(6): 471–483

[100]

Tajbakhsh N, Gotway MB, Liang J. Computer-aided pulmonary embolism detection using a novel vessel-aligned multi-planar image representation and convolutional neural networks. MICCAI 2015: Medical Image Computing and Computer-Assisted Intervention. 2015. 62–69

[101]

Chen MC, Ball RL, Yang L, Moradzadeh N, Chapman BE, Larson DB, Langlotz CP, Amrhein TJ, Lungren MP. Deep learning to classify radiology free-text reports. Radiology 2017; 286(3): 845–852

[102]

Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature 1986; 323(6088): 533–536

[103]

Messay T, Hardie RC, Tuinstra TR. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset. Med Image Anal 2015; 22(1): 48–62

[104]

Breiman L. Bagging predictors. Mach Learn 1996; 24(2): 123–140

[105]

Blackmon KN, Florin C, Bogoni L, McCain JW, Koonce JD, Lee H, Bastarrika G, Thilo C, Costello P, Salganicoff M, Joseph Schoepf U. Computer-aided detection of pulmonary embolism at CT pulmonary angiography: can it improve performance of inexperienced readers? Eur Radiol 2011; 21(6): 1214–1223

[106]

Wang X, Song XF, Chapman BE, et al. Improving performance of computer-aided detection of pulmonary embolisms by incorporating a new pulmonary vascular-tree segmentation algorithm. In: Medical Imaging 2012: Computer-Aided Diagnosis. International Society for Optics and Photonics. 2012. 8315: 83152U

[107]

Loud PA, Katz DS, Bruce DA, Klippenstein DL, Grossman ZD. Deep venous thrombosis with suspected pulmonary embolism: detection with combined CT venography and pulmonary angiography. Radiology 2001; 219(2): 498–502

[108]

Özkan H, Osman O, Şahin S, Boz AF. A novel method for pulmonary embolism detection in CTA images. Comput Methods Programs Biomed 2014; 113(3): 757–766

[109]

Schoepf UJ, Costello P. CT angiography for diagnosis of pulmonary embolism: state of the art. Radiology 2004; 230(2): 329–337

[110]

Liang J, Bi J. Computer aided detection of pulmonary embolism with tobogganing and mutiple instance classification in CT pulmonary angiography. In: Biennial International Conference on Information Processing in Medical Imaging. Springer. 2007. 630–641

[111]

Engelke C, Schmidt S, Bakai A, Auer F, Marten K. Computer-assisted detection of pulmonary embolism: performance evaluation in consensus with experienced and inexperienced chest radiologists. Eur Radiol 2008; 18(2): 298–307

[112]

Liang J, Bi J. Local characteristic features for computer-aided detection of pulmonary embolism in CT angiography. In: Proceedings of the First MICCAI Workshop on Pulmonary Image Analysis. 2008. 263–272

[113]

Park SC, Chapman BE, Zheng B. A multistage approach to improve performance of computer-aided detection of pulmonary embolisms depicted on CT images: preliminary investigation. IEEE Trans Biomed Eng 2011; 58(6): 1519–1527

[114]

Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, Liang JM. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Trans Med Imaging 2016; 35(5): 1299–1312

[115]

Tang L, Wang L, Pan S, Su Y, Chen Y. A neural network to pulmonary embolism aided diagnosis with a feature selection approach. 2010 3rd International Conference on Biomedical Engineering and Informatics. IEEE. 2010. 2255–2260

[116]

Ebrahimdoost Y, Dehmeshki J, Ellis TS, Firoozbakht M, Youannic A, Qanadli SD. Medical image segmentation using active contours and a level set model: application to pulmonary embolism (PE) segmentation. 2010 Fourth International Conference on Digital Society. IEEE. 2010. 269–273

[117]

Scott JA, Palmer EL, Fischman AJ. How well can radiologists using neural network software diagnose pulmonary embolism? AJR Am J Roentgenol 2000; 175(2): 399–405

[118]

Tajbakhsh N, Gotway MB, Liang J. Computer-aided pulmonary embolism detection using a novel vesselaligned multi-planar image representation and convolutional neural networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2015. 62–69

[119]

Lee Y, Hara T, Fujita H, Itoh S, Ishigaki T. Automated detection of pulmonary nodules in helical CT images based on an improved template-matching technique. IEEE Trans Med Imaging 2001; 20(7): 595–604

[120]

Abdullah AA, Posdzi NM, Nishio Y. Preliminary study of pneumonia symptoms detection method using cellular neural network. In: International Conference on Electrical, Control and Computer Engineering 2011 (InECCE). 2011. 497–500

[121]

Correa M, Zimic M, Barrientos F, Barrientos R, Román-Gonzalez A, Pajuelo MJ, Anticona C, Mayta H, Alva A, Solis-Vasquez L, Figueroa DA, Chavez MA, Lavarello R, Castañeda B, Paz-Soldán VA, Checkley W, Gilman RH, Oberhelman R. Automatic classification of pediatric pneumonia based on lung ultrasound pattern recognition. PLoS One 2018; 13(12): e0206410

[122]

Cisneros-Velarde P, Correa M, Mayta H, Anticona C, Pajuelo M, Oberhelman RA, Checkley W, Gilman RH, Figueroa D, Zimic M, et al. Automatic pneumonia detection based on ultrasound video analysis. 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2016. 4117–4120

[123]

Sharma A, Raju D, Ranjan S. Detection of pneumonia clouds in chest X-ray using image processing approach. In: 2017 Nirma University International Conference on Engineering (NUiCONE). IEEE. 2017. 1–4

[124]

de Melo G, Macedo SO, Vieira SL, et al. Classification of images and enhancement of performance using parallel algorithm to detection of pneumonia. 2018 IEEE International Conference on Automation/XXIII Congress of the Chilean Association of Automatic Control (ICA-ACCA). IEEE. 2018. 1–5

[125]

Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017. 3462–3471

[126]

Franquet T. Imaging of community-acquired pneumonia. J Thorac Imaging 2018; 33(5): 282–294

[127]

Lee Y, Hara T, Fujita H, Itoh S, Ishigaki T. Automated detection of pulmonary nodules in helical CT images based on an improved template-matching technique. IEEE Trans Med Imaging 2001; 20(7): 595–604

[128]

Nanni L, Lumini A, Brahnam S. Local binary patterns variants as texture descriptors for medical image analysis. Artif Intell Med 2010; 49(2): 117–125

[129]

Dalal N, Triggs B. Histograms of oriented gradients for human detection. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05). 2005. 886–893

[130]

Barrientos R, Roman-Gonzalez A, Barrientos F, et al. Automatic detection of pneumonia analyzing ultrasound digital images. 2016 IEEE 36th Central American and Panama Convention. 2016. 1–4

[131]

Nvidia. Nvidia CUDA C programming guide. Nvidia Corporation. 2011

[132]

Dye C. Global epidemiology of tuberculosis. Lancet 2006; 367(9514): 938–940

[133]

Sudre P, ten Dam G, Kochi A. Tuberculosis: a global overview of the situation today. Bull World Health Organ 1992; 70(2): 149–159

[134]

Ponnudurai N, Denkinger CM, Van Gemert W, et al. New TB tools need to be affordable in the private sector: the case study of Xpert MTB/RIF. J Epidemiol Glob Health 2018; 8(3–4): 103–105

[135]

Pande T, Cohen C, Pai M, Ahmad Khan F. Computer aided diagnosis of tuberculosis using digital chest radiographs: a systematic review. Chest 2015; 148(4 Suppl): 135A

[136]

Rohilla A, Hooda R, Mittal A. TB detection in chest radiograph using deep learning architecture. ICETETSM-17. 2017. 136–147

[137]

Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 2017; 284(2): 574–582

[138]

Melendez J, Sánchez CI, Philipsen RHHM, Maduskar P, Dawson R, Theron G, Dheda K, van Ginneken B. An automated tuberculosis screening strategy combining X-ray-based computer-aided detection and clinical information. Sci Rep 2016; 6(1): 25265

[139]

Melendez J, Sánchez CI, Philipsen RHHM, et al. Multiple-instance learning for computer-aided detection of tuberculosis. Computer-Aided Diagnosis. International Society for Optics and Photonics. 2014. 9035: 90351J

[140]

Shin HC, Roberts K, Lu L, Demner-Fushman D, Yao J, Summers RM. Learning to read chest X-rays: recurrent neural cascade model for automated image annotation. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. 2497–2506

[141]

Murphy K, Habib SS, Zaidi SMA, et al. Computer aided detection of tuberculosis on chest radiographs: an evaluation of the CAD4TB v6 system. arXiv preprint. 2019. arXiv:1903.03349

[142]

Zheng Y, Liu D, Georgescu B, Nguyen H, Comaniciu D. 3D deep learning for efficient and robust landmark detection in volumetric data. International Conference on Medical Image Computing and Computer-Assisted Intervention. 2015. 565–572

[143]

Bar Y, Diamant I, Wolf L, Lieberman S, Konen E, Greenspan H. Chest pathology detection using deep learning with non-medical training. 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). 2015. 294–297

[144]

Feng X, Yang J, Laine AF, Angelini ED. Discriminative Localization in CNNs for Weakly-Supervised Segmentation of Pulmonary Nodules. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017. Springer. 2017. 568–576

[145]

Melendez J, van Ginneken B, Maduskar P, Philipsen RHHM, Reither K, Breuninger M, Adetifa IMO, Maane R, Ayles H, Sánchez CI. A novel multiple-instance learning-based approach to computer-aided detection of tuberculosis on chest X-rays. IEEE Trans Med Imaging 2015; 34(1): 179–192

[146]

Melendez J, Sánchez CI, Philipsen RH, Maduskar P, Dawson R, Theron G, Dheda K, van Ginneken B. An automated tuberculosis screening strategy combining X-ray-based computer-aided detection and clinical information. Sci Rep 2016; 6(1): 25265

[147]

Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint. 2014. arXiv:1409.1556

[148]

Li Q, Cai W, Feng DD. Lung image patch classification with automatic feature learning. 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2013. 6079–6082

[149]

Li Q, Cai W, Wang X, Zhou Y, Feng DD, Chen M. Medical image classification with convolutional neural network. 13th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE. 2014. 844–848

[150]

Gao M, Xu Z, Lu L, et al. Multi-label deep regression and unordered pooling for holistic interstitial lung disease pattern detection. In: International Workshop on Machine Learning in Medical Imaging. Springer, Cham. 2016. 147–155

[151]

Christodoulidis S, Anthimopoulos M, Ebner L, Christe A, Mougiakakou S. Multisource transfer learning with convolutional neural networks for lung pattern analysis. IEEE J Biomed Health Inform 2017; 21(1): 76–84

[152]

Gao M, Bagci U, Lu L, Wu A, Buty M, Shin HC, Roth H, Papadakis GZ, Depeursinge A, Summers RM, Xu Z, Mollura DJ. Holistic classification of CT attenuation patterns for interstitial lung diseases via deep convolutional neural networks. Comput Methods Biomech Biomed Eng Imaging Vis 2018; 6(1): 1–6

[153]

Cengil E, Çinar A. A deep learning based approach to lung cancer identification. In: 2018 International Conference on Artificial Intelligence and Data Processing (IDAP). IEEE. 2018. 1–5

[154]

Chamberlain D, Kodgule R, Ganelin D, Miglani V, Fletcher R. Application of semi-supervised deep learning to lung sound analysis. 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2016. 804–807

[155]

Hashemi A, Arabalibiek H, Agin K. Classification of wheeze sounds using wavelets and neural networks. In: 2011 International Conference on Biomedical Engineering and Technology. Singapore: IACSIT Press. 2011. vol. 11. 127–131

[156]

Aykanat M, Kılıç Ö, Kurt B, Saryal S. Classification of lung sounds using convolutional neural networks. EURASIP J Image Video Processing 2017; 2017: 65

[157]

Tan T, Li Z, Liu H, Zanjani FG, Ouyang Q, Tang Y, Hu Z, Li Q. Optimize transfer learning for lung diseases in bronchoscopy using a new concept: sequential fine-tuning. IEEE J Transl Eng Health Med 2018; 6: 1800808

[158]

Tang C, Plasek JM, Zhang H, Xiong Y, Bates DW, Zhou L. A deep learning approach to handling temporal variation in chronic obstructive pulmonary disease progression. 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE. 2018. 502–509

[159]

Campo MI, Pascau J, Estepar RSJ. Emphysema quantification on simulated X-rays through deep learning techniques. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE. 2018. 273–276

[160]

Armato SG 3rd, McLennan G, Bidaut L, et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans. Med Phys 2011; 38(2): 915–931

[161]

Armato SG 3rd, Giger ML, Moran CJ, Blackburn JT, Doi K, MacMahon H. Computerized detection of pulmonary nodules on CT scans. Radiographics 1999; 19(5): 1303–1311

[162]

Huidrom R, Chanu YJ, Singh KM. Pulmonary nodule detection on computed tomography using neuroevolutionary scheme. Signal Image Video Process 2019; 13(1): 53–60

[163]

Shaukat F, Raja G, Ashraf R, Khalid S, Ahmad M, Ali A. Artificial neural network based classification of lung nodules in CT images using intensity, shape and texture features. J Ambient Intell Humaniz Comput 2019; 10(10): 4135–4149

[164]

Zhang W, Wang X, Li X, Chen J. 3D skeletonization feature based computer-aided detection system for pulmonary nodules in CT datasets. Comput Biol Med 2018; 92: 64–72

[165]

Naqi S, Sharif M, Yasmin M, Fernandes SL. Lung nodule detection using polygon approximation and hybrid features from CT images. Curr Med Imaging Rev 2018; 14(1): 108–117

[166]

Liu JK, Jiang HY, Gao MD, He CG, Wang Y, Wang P, Ma H, Li Y. An assisted diagnosis system for detection of early pulmonary nodule in computed tomography images. J Med Syst 2017; 41(2): 30

[167]

Javaid M, Javid M, Rehman MZU, Shah SIA. A novel approach to CAD system for the detection of lung nodules in CT images. Comput Methods Programs Biomed 2016; 135: 125–139

[168]

Akram S, Javed MY, Akram MU, Qamar U, Hassan A. Pulmonary nodules detection and classification using hybrid features from computerized tomographic images. J Med Imaging Health Inform 2016; 6(1): 252–259

[169]

Özkan H, Osman O, Şahin S, Boz AF. A novel method for pulmonary embolism detection in CTA images. Comput Methods Programs Biomed 2014; 113(3): 757–766

[170]

Mehre SA, Mukhopadhyay S, Dutta A, et al. An automated lung nodule detection system for CT images using synthetic minority oversampling. In: Medical Imaging 2016: Computer-Aided Diagnosis. International Society for Optics and Photonics. 2016. 9785: 97850H

[171]

Naqi SM, Sharif M, Lali IU. A 3D nodule candidate detection method supported by hybrid features to reduce false positives in lung nodule detection. Multimedia Tools Appl 2019; 78(18): 26287–26311

[172]

Huidrom R, Chanu YJ, Singh KM. Pulmonary nodule detection on computed tomography using neuroevolutionary scheme. Signal Image Video Process 2019; 13(1): 53–60

[173]

Anthimopoulos M, Christodoulidis S, Christe A, Mougiakakou S. Classification of interstitial lung disease patterns using local DCT features and random forest. 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2014. 6040–6043

[174]

Gangeh MJ, Sorensen L, Shaker SB, Kamel MS, de Bruijne M, Loog M. A texton-based approach for the classification of lung parenchyma in CT images. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2010. Springer. 2010. 595–602

[175]

Dou Q, Chen H, Yu L, Qin J, Heng PA. Multilevel contextual 3-D CNNs for false positive reduction in pulmonary nodule detection. IEEE Trans Biomed Eng 2017; 64(7): 1558–1567

[176]

Torres EL, Fiorina E, Pennazio F, Peroni C, Saletta M, Camarlinghi N, Fantacci ME, Cerello P. Large scale validation of the M5L lung CAD on heterogeneous CT datasets. Med Phys 2015; 42(4): 1477–1489

[177]

van Ginneken B, Armato SG 3rd, de Hoop B, van Amelsvoort-van de Vorst S, Duindam T, Niemeijer M, Murphy K, Schilham A, Retico A, Fantacci ME, Camarlinghi N, Bagagli F, Gori I, Hara T, Fujita H, Gargano G, Bellotti R, Tangaro S, Bolaños L, de Carlo F, Cerello P, Cristian Cheran S, Lopez Torres E, Prokop M. Comparing and combining algorithms for computer-aided detection of pulmonary nodules in computed tomography scans: The ANODE09 study. Med Image Anal 2010; 14(6): 707–722

[178]

Rajpurkar P, Irvin J, Zhu K, et al. CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint. 2017. arXiv:1711.05225

[179]

Yao L, Poblenz E, Dagunts D, et al. Learning to diagnose from scratch by exploiting dependencies among labels. arXiv preprint. 2018. arXiv:1710.10501

[180]

Jaeger S, Candemir S, Antani S, Wáng YXJ, Lu PX, Thoma G. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quant Imaging Med Surg 2014; 4(6): 475–477

[181]

Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, Mak RH, Tamimi RM, Tempany CM, Swanton C, Hoffmann U, Schwartz LH, Gillies RJ, Huang RY, Aerts HJWL. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin 2019; 69(2): 127–157

RIGHTS & PERMISSIONS

The Author(s) 2019. This article is published with open access at link.springer.com and journal.hep.com.cn
