Artificial intelligence in radiotherapy: a technological review

Ke Sheng

Front. Med. 2020, Vol. 14, Issue 4: 431–449.

DOI: 10.1007/s11684-020-0761-1
REVIEW



Abstract

Radiation therapy (RT) is widely used to treat cancer. Technological advances in RT have occurred in the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will possibly play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those with increased complexity because of technological advances. The improvement in efficiency and consistency is important for managing the increasing cancer patient burden on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT, including superior images for real-time intervention and adaptive and personalized RT. AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies the areas, including imaging, treatment planning, quality assurance, and outcome prediction, that can benefit from AI. It primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.

Keywords

artificial intelligence / radiation therapy / medical imaging / treatment planning / quality assurance / outcome prediction

Cite this article

Ke Sheng. Artificial intelligence in radiotherapy: a technological review. Front. Med., 2020, 14(4): 431–449. DOI: 10.1007/s11684-020-0761-1


Introduction

Radiation therapy (RT) is used to treat over 60% of cancer patients in the US and 30% of cancer patients in China. The oncological use of radiation to treat malignancies started immediately after the discovery of radioactive isotopes. In the first half-century of RT technological development, most research and development effort was allocated to making high-energy, highly penetrating radiation sources available for the treatment of deep tumors. In the past 40 years, engineering has played an important role in RT technologies. The change has been largely driven by the availability of three-dimensional (3D) images for treatment planning. 3D images from modalities such as computed tomography (CT) or magnetic resonance (MR) provide quantitative delineation of the tumor target and organs at risk (OARs). Such images also support accurate 3D dose calculation, which, in combination with the organ delineation, provides a wealth of statistical information to correlate with tumor control probability and normal organ toxicity. This knowledge transformed modern RT into a quantitative science. With decades of technological evolution, the modern RT workflow can be simplified into a flowchart (Fig. 1). 3D images are first acquired for an RT patient. The gross tumor volume (GTV) and OARs are then delineated on the 3D images. The GTV describes the visible tumor based on medical images and is subsequently expanded into the clinical and planning target volumes to account for microscopic tumor infiltration and geometrical uncertainties. A prescription reflecting the oncologist's intent and prevailing dose constraints guides treatment planning, which can be forward or inverse; in either case, a combination of beams and a dose modulation device is used to approach the prescription. A typical dose modulation device is a multileaf collimator (MLC) that defines both the aperture shape and the relative fluences within the beam aperture.
Once the treatment plan is created, approved, and validated, the patient is scheduled for treatment. During treatment in modern RT settings, additional images are obtained to localize the target for registration and patient positioning. The treatment is then delivered. A number of key technological advances in the process need a brief definition to facilitate the main topic of this review article, which is artificial intelligence (AI) in RT.

Image-guided RT

Although 3D imaging has long been used to define the target and OARs, without precise patient positioning that follows the treatment plan, the treatment cannot be accurately delivered to achieve the intended effects. 3D imaging was not incorporated into RT treatment until less than 20 years ago, when a flat-panel imager and a kV X-ray source were installed on the clinical linac gantry for volumetric image acquisition [1]. The result is 3D cone-beam CT (CBCT) images that provide considerably superior internal anatomy localization compared with the 2D radiographs that predated CBCT. MR-guided RT (MRgRT) [2] has overcome the deficiencies of CBCT-guided RT. Compared with CBCT, MR has superior image contrast for soft tissues, is free of ionizing radiation, and is flexible in the imaging plane orientation. CBCT-guided RT and MRgRT have the potential not only to assist patient positioning but also to support the adaptation of a treatment plan in the case of non-rigid anatomical and physiological changes.

Intensity-modulated RT (IMRT)

IMRT is another important technical breakthrough in RT. This technique was originally invented to compensate for the dose drop-off toward the edges of the target when uniform conformal beams are used [3–5]. A mathematical tool known as inverse optimization, in combination with the abovementioned MLC, can be applied to RT planning to create complex or concave dose distributions that match the tumor shape and avoid the surrounding critical organs [6]. This breakthrough enabled dose carving for concave and complex geometries that are unattainable with 3D conformal RT.
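The inverse optimization idea can be illustrated with a toy sketch: given a dose-deposition matrix D mapping beamlet weights to voxel doses, planning amounts to finding nonnegative weights w such that Dw approaches the prescription p. The matrix, dimensions, and prescription below are made up purely for illustration; clinical systems solve far larger, constraint-rich versions of this problem.

```python
import numpy as np

# Toy inverse planning: find nonnegative beamlet weights w so that the
# delivered dose D @ w approaches the prescription p.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(50, 10))       # hypothetical dose-deposition matrix
p = D @ rng.uniform(0.5, 1.5, size=10)         # a prescription known to be feasible

def inverse_plan(D, p, steps=2000, lr=None):
    """Projected gradient descent on ||D w - p||^2 subject to w >= 0."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(D, 2) ** 2   # step size from the spectral norm
    w = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ w - p)               # gradient of the quadratic objective
        w = np.maximum(w - lr * grad, 0.0)     # project onto the nonnegative orthant
    return w

w = inverse_plan(D, p)
residual = np.linalg.norm(D @ w - p) / np.linalg.norm(p)
```

The nonnegativity projection reflects the physical fact that beamlet fluences cannot be negative, which is what makes the problem an optimization rather than a simple linear solve.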

Robotic RT platform

On the hardware end, the architecture of RT machines has remained largely unchanged since the linear accelerator became a clinical utility, despite the expanding imaging and targeting functionalities. The X-ray source revolves around a fixed axis to provide coplanar RT. The degrees of freedom of the patient couch and a robotic gantry have been used to accommodate the need for delivering non-coplanar beams [7–15]. Mathematical algorithms were developed to effectively utilize the enormous delivery space for superior dosimetry [12,14–16].

Regardless of the specific delivery platform, modern RT follows a specific workflow (Fig. 1). With the increased complexity of treatment and the goal of achieving more effective RT, machine learning and AI have played increasingly important roles in this process. This review provides an overview of the opportunities for AI in each step of RT (Table 1).

AI for image acquisition

Low-dose CT acquisition

The signal-to-noise ratio (SNR) is proportional to the CT imaging dose, which should be kept as low as reasonably achievable. Reducing the imaging dose is important for repeated daily image guidance and screening [17]. Conventional filtered back projection is susceptible to the lack of sufficient photon counts and results in severe artifacts and noise from low-dose CT projections. An alternative method using iterative reconstruction with a fidelity term was developed to minimize the difference between the actual projection data and the forward projection of the current CT estimate. A regularization term is added to the optimization problem to mitigate the noise and artifacts from the ill-conditioned low-dose CT problem and exploit known anatomical characteristics. A typical term is total variation, which exploits the piecewise smoothness of anatomical structures. Iterative CT reconstruction with regularization terms has achieved remarkable success in various applications [18–29]. However, the regularization term not only introduces a statistical bias but also compromises the CT resolution in exchange for noise and artifact suppression [30]. Recently, deep-learning methods have been used to reconstruct low-dose CT images with remarkable success [18,20,21,31–37]. Deep learning was developed from artificial neural networks (ANNs), which mimic the information transfer and processing of biological systems. However, deep-learning neural networks differ from ANNs in the depths of their network layers and the capability to learn high-level features on their own. A representative deep-learning neural network is the convolutional neural network (CNN), which has input and output layers with hidden layers in between. Each hidden layer is connected with its adjacent layers by convolutional operations for hierarchical feature extraction. Deep learning has unprecedented versatility to learn high-level features in complex systems and perform tasks, including classification and prediction of new cases.
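The fidelity-plus-regularization formulation can be sketched in miniature. The toy below denoises a 1D piecewise-constant "anatomy" by minimizing a least-squares fidelity term plus a smoothed total-variation penalty with gradient descent; the system matrix is taken as the identity (pure denoising), so this is a surrogate for, not an implementation of, full iterative CT reconstruction, and all signals and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(40), np.ones(40), 0.5 * np.ones(40)])  # piecewise anatomy
noisy = truth + 0.15 * rng.standard_normal(truth.size)                  # low-dose-like noise

def tv_denoise(y, lam=0.2, eps=1e-2, steps=4000, lr=0.1):
    """Minimize 0.5||x - y||^2 + lam * TV(x), with TV smoothed by eps
    so plain gradient descent applies."""
    x = y.copy()
    for _ in range(steps):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)        # derivative of the smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g                   # each difference d_i pulls on x_i ...
        tv_grad[1:] += g                    # ... and on x_{i+1} with opposite sign
        x -= lr * ((x - y) + lam * tv_grad) # fidelity gradient + regularizer gradient
    return x

denoised = tv_denoise(noisy)
err_noisy = np.linalg.norm(noisy - truth)
err_denoised = np.linalg.norm(denoised - truth)
```

The total-variation term suppresses noise within the flat regions while largely preserving the two sharp boundaries, which is exactly the piecewise-smoothness prior the text describes; the residual bias at the edges illustrates the resolution trade-off noted in [30].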
Chen et al. demonstrated a combined autoencoder and CNN approach for low-dose CT reconstruction [36]. In this method, patches with a fixed size are extracted from paired low- and normal-dose CT images. The patches are transferred to the feature space in fully connected convolutional layers with the rectified linear unit activation function. In this process, image noise is suppressed. In the decoder step, deconvolutional layers are used to recover image details from the extracted features. Residual compensation is used to enhance the details. Consequently, effective noise suppression, structure preservation, and lesion detection are reported using the deep-learning method. A challenge of the deep-learning method is that sufficient training data may not be readily available, and training on a data set with substantially different imaging characteristics can result in undesired distortion in the reconstructed images. A solution that combines the strengths of traditional iterative methods and the deep-learning method is the plug-and-play alternating direction method of multipliers (ADMM), where the regularization term is replaced with an off-the-shelf denoiser, such as block-matching and 3D filtering [38,39] or a pretrained deep-learning neural network [20].
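A minimal sketch of the plug-and-play ADMM idea follows. A simple moving-average filter stands in for the off-the-shelf denoiser (BM3D or a pretrained network would take its place in practice), and the data term is plain denoising (the forward operator is the identity), assumptions made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 3 * np.pi, 120))
noisy = truth + 0.3 * rng.standard_normal(truth.size)

def box_denoiser(v, k=7):
    """Stand-in 'off-the-shelf' denoiser: a simple moving average."""
    kernel = np.ones(k) / k
    return np.convolve(np.pad(v, k // 2, mode="edge"), kernel, mode="valid")

def pnp_admm(y, denoiser, rho=1.0, iters=30):
    """Plug-and-play ADMM: the proximal step of the regularizer is replaced
    by an arbitrary denoiser. The data term here is 0.5||x - y||^2 (A = I)."""
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)  # proximal step of the fidelity term
        z = denoiser(x + u)                    # plugged-in denoiser replaces the prox
        u = u + x - z                          # dual (multiplier) update
    return z

restored = pnp_admm(noisy, box_denoiser)
err_noisy = np.linalg.norm(noisy - truth)
err_restored = np.linalg.norm(restored - truth)
```

The appeal of the scheme is that the denoiser is swappable: upgrading from the moving average to a learned network changes one line while the ADMM scaffolding, and any physics in the fidelity term, stays fixed.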

CBCT artifact correction

Compared with the fan-beam CT used for diagnostic and simulation image acquisition, the CBCT used for image-guided RT creates additional image quality challenges. Given the severe X-ray scatter, photon starvation artifacts, and motion artifacts from slow acquisition, CBCT image quality is substantially inferior to that of fan-beam CT, showing poorer contrast and inaccurate electron density for dose calculation. The distortion of the CT number is referred to as the shading artifact. In addition to the anti-scatter grid for reducing scattered X-rays [40–47], hardware blockers and computational methods have been utilized to estimate the scatter photon component with varying levels of success [42–46,48–84]. Blocker methods require modification of existing CBCT systems and may be impractical. In the computational approach, the scatter photons at the detector are estimated using Monte Carlo forward projection [52]. Alternatively, the scatter component can be estimated using analytical methods [85,86]. Despite software and hardware acceleration, the amount of computation needed to estimate the scatter component is prohibitive for online CBCT reconstruction. The emergence of deep-learning neural networks provides a potential solution to this problem and is effective in improving CBCT quality [87–91]. Fig. 2 shows a deep residual convolutional neural network (ResNet) that learns the shading compensation map for scatter correction. ResNet, with residual blocks that skip layers, effectively mitigates the vanishing gradient problem commonly observed in training a CNN with many hidden layers. Additional shortcut connections between the input and output layers facilitate the backpropagation of the gradient. Paired CT images with and without shading artifacts are used to train the network, which then corrects images reconstructed without scatter correction.
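The benefit of the residual shortcut can be seen even in a forward pass without any training. The toy below stacks 20 "layers" with deliberately small weights: a plain conv-plus-ReLU stack attenuates the signal layer by layer (the forward analogue of the vanishing gradient), whereas the residual version, out = ReLU(x + F(x)), keeps the signal alive through the identity path. The 1D convolutions and weights are invented for illustration, not taken from any cited network.

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1d(x, w):
    """'Same'-size 1D convolution standing in for a conv layer."""
    return np.convolve(np.pad(x, len(w) // 2, mode="edge"), w, mode="valid")

def plain_layer(x, w):
    return np.maximum(conv1d(x, w), 0.0)       # conv + ReLU

def residual_block(x, w):
    return np.maximum(x + conv1d(x, w), 0.0)   # identity shortcut + conv, then ReLU

x = np.abs(rng.standard_normal(64))            # nonnegative input signal
w = 0.05 * rng.standard_normal(3)              # deliberately small weights

plain, resid = x.copy(), x.copy()
for _ in range(20):                            # a 20-layer stack of each kind
    plain = plain_layer(plain, w)
    resid = residual_block(resid, w)
```

After 20 layers the plain stack has all but vanished while the residual stack retains a signal comparable to the input; during training, the same identity path carries gradients backward, which is the mitigation the text describes.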
Compared with the iterative reconstruction method, trained deep-learning neural networks are more efficient to use. The efficiency gains are essential for online image reconstruction and interventional applications such as adaptive RT.

Rapid MR acquisition

A primary motivation to overcome the substantial technical challenges in combining MR with a linac for MRgRT is to provide dynamic images for tumor tracking. A combination of fast MR sequences and parallel imaging techniques [93] has been developed to provide 2D dynamic images with sub-second temporal resolution, which is acceptable for motions as rapid as respiration. For instance, the steady-state free precession sequence [94] is suited for fast dynamic imaging because of its high SNR and robust performance in low-field MR imaging (MRI) systems. However, 2D dynamic MRI is insufficient to resolve complex anatomies, such as the pancreas, whose convoluted structures require 3D images for adequate description. 4DMRI has been developed to retrospectively sort or prospectively gate the 2D image or k-space data into an assembled 4D data set [95–102]. Nonetheless, 4DMRI in its current form only reflects a sparsely sampled average of the moving anatomy. Real-time intervention decisions, such as gating or motion-tracking RT, cannot be made on the basis of 4DMRI alone. Existing MR techniques cannot achieve sufficiently high image quality, spatial and temporal resolution, and reconstruction speed for 3D real-time anatomical imaging. Current research in this area is focused on compressed sensing, where the k-space sampling is markedly decreased to reduce the signal acquisition time [103]. In the reconstruction step, similar to iterative CT reconstruction, a regularized optimization problem in the space and time domains is solved [104–107]. A major limitation of iterative reconstruction is the long computational time that prevents it from being a real-time technique, regardless of the potentially achievable k-space downsampling ratio. By contrast, deep-learning-based MR reconstruction of undersampled k-space data is suited for online RT applications [108–112].
In these studies, similar to solving the CBCT reconstruction problem, a deep-learning neural network is trained on the basis of paired fully sampled and undersampled MR images. The trained network then uses the undersampled k-space data as input to predict fully sampled images. In addition to producing reconstructions superior to those of iterative methods, deep learning moves most of the computational burden, including the training of the neural networks, offline. The online reconstruction can then be performed in near real time.
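What such a network sees as input can be sketched with a zero-filled reconstruction. The toy below Fourier-transforms a synthetic image, keeps only a variable-density subset of phase-encode lines (mimicking accelerated acquisition), and inverse-transforms the result; the aliased zero-filled image is the kind of degraded input a trained network would map back to the fully sampled image. The phantom, mask density, and matrix size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
image = np.zeros((64, 64))
image[20:44, 24:40] = 1.0                        # toy anatomy (a bright rectangle)

k_full = np.fft.fftshift(np.fft.fft2(image))     # fully sampled k-space

# Variable-density Cartesian mask: always keep the low-frequency center,
# randomly keep ~25% of the remaining phase-encode lines.
mask = np.zeros(64, dtype=bool)
mask[28:36] = True
mask |= rng.random(64) < 0.25
k_under = k_full * mask[:, None]                 # drop the unsampled lines

zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))
speedup = 64 / mask.sum()                        # nominal acquisition speedup
err = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
```

The retained center preserves coarse contrast while the missing lines produce the streak-like aliasing that compressed-sensing regularizers, or a trained network, must remove; the `speedup` value is the nominal acquisition-time saving that motivates the whole exercise.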

AI for image synthesis, registration, and segmentation

Image synthesis

The available combination of images should optimally match the needs of a specific RT task. However, ideal images may not always be available. For example, an MR-only RT planning workflow is created to utilize the superior MR soft-tissue contrast, eliminate unnecessary imaging dose to the patient, and avoid the image registration problem [113]. Without perfectly matching CT images, one challenge of this workflow is the failure of MR to provide the electron density needed for radiation dose calculation. An intuitive method to solve the problem is to segment the MR images into tissue subtypes and then assign known CT densities to these tissues. The accuracy of this bulk density assignment method [114] depends on the segmentation accuracy, the complexity of the anatomy, and the homogeneity of the CT density within one tissue type. The need for manual segmentation inevitably increases the processing time of a patient plan. Synthetic CT (CTsynth) images based on multiparametric MR have been studied to improve the tissue mapping accuracy [115]. Compared with MR from a single sequence, multiparametric MRI better quantifies different types of tissues and structures. In particular, dark cortical bones can be differentiated from air cavities in ultra-short echo time MR images. Despite their potential for accurate CT density assignment, methods based on multiparametric MR suffer from inconsistency between images acquired at different times because of unavoidable anatomical motion and remain cumbersome in practice.

Recently, CTsynth images generated using deep learning have gained wide popularity [116–123]. Compared with conventional CTsynth methods, the deep-learning method can be fully automated and is more efficient and robust. Its versatility is further improved by the development of the cycle-consistent generative adversarial network (GAN), which allows training of the network on unpaired MR–CT images [124]. The GAN is a deep-learning architecture that uses two networks: the generator network attempts to generate realistic images, whereas the discriminator network attempts to distinguish between real images and those created by the generator. When training succeeds, the generator can create an image that cannot be differentiated from the training set. A cycle-consistent GAN (CycleGAN) for image synthesis utilizes two GANs: one attempts to generate a realistic CTsynth slice given a real MR slice, and the other attempts to generate a realistic synthetic MR slice given a real CT slice. The generators are then switched and applied to the synthetic outputs to translate the synthetic MR back into a CT slice, and vice versa. The original CT or MR slice should be recovered; hence, this network architecture should show cycle consistency. The loss function for CycleGAN has an adversarial loss term for generating realistic CT images, an adversarial loss term for generating realistic MR images, and a cycle-consistency loss term that prevents the generators from producing arbitrary realistic-looking images unrelated to the input. In published reports, deep-learning-generated CTsynth images are adequately accurate for dose calculation [125–127]. The same method can be extended to other types of image synthesis. For example, virtual 4DMR images are synthesized from 4DCT for good visualization of the liver tumor in image-guided RT [128].
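The cycle-consistency term is simple enough to compute by hand. In the toy below, the two "generators" are affine maps rather than networks, and the MR-to-CT intensity relation (ct = 2·mr + 100) is entirely hypothetical; the point is only to show how the L1 cycle loss rewards generator pairs that invert each other and penalizes pairs that do not.

```python
import numpy as np

# Toy "generators": affine maps standing in for the MR->CT and CT->MR networks.
def g_mr_to_ct(mr, a, b):
    return a * mr + b

def f_ct_to_mr(ct, c, d):
    return c * ct + d

def cycle_loss(mr_batch, ct_batch, a, b, c, d):
    """L1 cycle-consistency: F(G(mr)) should recover mr, G(F(ct)) should recover ct."""
    mr_cycle = f_ct_to_mr(g_mr_to_ct(mr_batch, a, b), c, d)
    ct_cycle = g_mr_to_ct(f_ct_to_mr(ct_batch, c, d), a, b)
    return np.mean(np.abs(mr_cycle - mr_batch)) + np.mean(np.abs(ct_cycle - ct_batch))

rng = np.random.default_rng(5)
mr_batch = rng.uniform(0, 1, 100)
ct_batch = 2 * rng.uniform(0, 1, 100) + 100

consistent = cycle_loss(mr_batch, ct_batch, a=2, b=100, c=0.5, d=-50)    # exact inverses
inconsistent = cycle_loss(mr_batch, ct_batch, a=2, b=100, c=0.5, d=0)    # F is not G^-1
```

With exact inverse generators the loss is zero; breaking only the offset of the reverse map drives the loss up sharply, which is the mechanism that stops the generators from mapping inputs to arbitrary realistic-looking outputs.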

Image registration

In RT, images from different patients, times, or modalities often need to be registered to synthesize their corresponding information in a common coordinate system. For example, the CT acquired at the time of positron emission tomography (PET) needs to be registered with the planning CT to overlay the PET information onto the planning CT for target delineation. Multimodal image registration is often needed to allow organs or tissues with better conspicuity in one image modality, e.g., MR, to help in the delineation of the target and OARs in CT planning. Corresponding CT images also need to be registered to correctly accumulate the radiation dose delivered at different times. Different from rigid phantoms, voxel-level deformation occurs between the image pairs to be registered, creating a deformable image registration (DIR) problem, whose solution depends on the establishment of voxel correspondence between the image pairs. Conventionally, image-based and biomechanical methods have been developed to tackle the image registration problem. In the image-based method, a deformation engine is used to iteratively morph the original image toward a desirable match with the target image. Common deformation engines include "demons" [129], freeform [130], and B-spline [131]. Image-based DIR has achieved remarkable success in selected applications where ample landmarks and good image contrast are available. However, DIR is less reliable in low-contrast regions and for multimodal registration. Moreover, the registration results can be sensitive to parameter tuning, making the process subjective and tedious.

In the biomechanical method, images are first segmented into organs and assigned known elasticity coefficients. Tissue deformation is then driven by boundary conditions derived from the origin and target images [132]. In theory, the biomechanical method takes advantage of intrinsic tissue mechanical properties and is thus resilient to the lack of image contrast and the variation in image characteristics in the multimodal registration problem. However, in practice, accurate tissue mechanical properties and boundary conditions are difficult, if not impossible, to obtain for individual patients. The actual mechanical problems are highly nonlinear, making an accurate solution unattainable in most cases. Thus, biomechanical DIR is rarely used in RT.

Recent efforts using deep learning have focused on improving the quality and efficiency of DIR. Notably, VoxelMorph, proposed by Balakrishnan et al. [133], uses U-net to perform unsupervised and supervised learning for brain image registration. The method alleviates the need for manual parameter tuning and is versatile enough to incorporate additional manual labeling for improved registration accuracy. Although the training of a registration model can be time consuming, registration of new image pairs is faster with VoxelMorph than with conventional registration methods. In another unsupervised DIR study [134], a deep convolutional inverse graphics network was used to perform DIR between CT and CBCT, with results superior to those of conventional registration methods, regardless of the intrinsic CT number inaccuracy in the CBCT images. For the MR–CT DIR problem, synthetic bridge images are created using the aforementioned CycleGAN to ease the challenge of matching images with markedly different characteristics [135]. Remarkably improved DIR accuracy in comparison with the direct registration method is observed when using this method to register head and neck (H&N) MR and CT images (Fig. 3).
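At the core of learning-based DIR such as VoxelMorph is a differentiable warping step: the network predicts a dense displacement vector field, and a spatial transformer resamples the moving image through it. The 2D bilinear warp below is a simplified stand-in for that resampling layer; the image and displacement fields are synthetic.

```python
import numpy as np

def warp_bilinear(img, dvf):
    """Warp a 2D image with a dense displacement vector field (dvf),
    sampling img at (i + dvf_y, j + dvf_x) with bilinear interpolation."""
    h, w = img.shape
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(gy + dvf[0], 0, h - 1)            # sample coordinates, clipped
    sx = np.clip(gx + dvf[1], 0, w - 1)
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = sy - y0; wx = sx - x0                     # fractional interpolation weights
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0                              # a square "organ"

identity = warp_bilinear(img, np.zeros((2, 32, 32)))   # no deformation
shift = np.zeros((2, 32, 32)); shift[0] += 4.0         # sample 4 rows further down
shifted = warp_bilinear(img, shift)
```

Because the interpolation is piecewise linear in the displacement, gradients of an image-similarity loss can flow back through this operation into the network that predicted the field, which is what makes the registration trainable end to end.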

Image segmentation

For RT, IMRT has replaced 3D conformal and 2D planning because of its superior dose conformity and OAR sparing [136]. Inverse optimization in IMRT requires the delineation of OARs. Conventionally, this delineation has been performed manually by oncologists and dosimetrists. Manual segmentation is one of the most time-consuming processes in RT. Furthermore, given the non-uniform training and time available for planning, manual segmentation has been shown to possess substantial intra- and inter-observer variabilities [137]. The lengthy process required for segmentation is incompatible with adaptive RT, where a new IMRT plan needs to be rapidly created [138,139] on the basis of newly acquired CBCT or better-quality fan-beam simulation CTs. The advent of MRgRT has provided superior soft-tissue contrast and motivated frequent online adaptive RT [140], where automated segmentation is important for efficient treatment planning.

A common strategy used by commercial platforms, including MRgRT systems, is the registration of the manually labeled planning image to the online images. Given the unavoidable non-rigid anatomical motion of the patient between image acquisitions, DIR is needed to establish a voxel-to-voxel correspondence between two medical images reflecting two different anatomical instances. The resulting deformable vector field then propagates the contours to the online images. Alternatively, without the initial individual segmentation on the planning images, an atlas is generated on the basis of an average patient [141,142]. In practice, these methods highly depend on the accuracy of the deformable registration, which can be erroneous when the deformation is large or the image contrast is low. Shape or appearance models [143] have been used to regularize the surface formation to achieve anatomical plausibility [144], thus preventing large contouring errors. Nevertheless, atlas methods have not found wide adoption in RT practice because of their lack of robustness and slow performance.

Deep-learning neural networks have shown potential for medical image segmentation, target detection, registration, and other tasks [145–152]. For RT, CNNs have been used to segment H&N CT images [153]. The resultant contours are then refined using the Markov random field algorithm. To eliminate the post-processing step, Tong et al. developed a novel automated H&N OAR segmentation method that combines a fully convolutional residual network (FC-ResNet) with a shape-constrained (SC) model. The SC network is trained to capture the 3D OAR shape features that are used to constrain the FC-ResNet. Tong et al. showed that segmentation performance superior to that of state-of-the-art methods could be achieved with the dual neural nets [154]. A challenge with training the segmentation network is that the amount of curated data with manual labeling is typically small, resulting in an overfitting problem and deteriorated independent validation results. By adding a GAN, the robustness of segmentation is improved for small training data sets [155]. Fig. 4 and Table 2 show the H&N segmentation results. The segmentation artifacts, including organ islands and incorrect boundaries, are remarkably reduced with the inclusion of SC and GAN. Using manual segmentation as the ground truth, the volume and surface agreements are remarkably improved compared with conventional auto-segmentation methods and vanilla deep-learning neural network methods.
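Volume agreement between an auto-segmented contour and the manual ground truth is conventionally quantified with the Dice similarity coefficient, which can be computed in a few lines; the two toy masks below (a square organ and a one-voxel-shifted version) are invented for illustration.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = a.astype(bool); b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((20, 20), dtype=bool);   auto[5:15, 5:15] = True    # auto-segmented organ
manual = np.zeros((20, 20), dtype=bool); manual[6:16, 5:15] = True  # manual ground truth

dsc = dice(auto, manual)   # one-row shift of a 10x10 organ gives 0.9
```

Dice summarizes volumetric overlap only; surface-distance metrics are reported alongside it because a mask can score well on Dice while still having locally erroneous boundaries.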

AI for treatment planning

Dose prediction

RT treatment planning is a labor-intensive procedure. Despite the consistent planning goals, the planning results differ because of inter-patient variability in the anatomy. The results cannot be predicted at the beginning of the planning process. In the planning process, the dosimetrist tunes a large number of optimization parameters without knowing the endpoint. Inconsistent and suboptimal plan dosimetry is common among different institutions and individual planners [159–161]. Knowledge-based planning (KBP) and automated planning techniques have been developed to address these challenges [162–164].

KBP is motivated by the observation that the achievable patient dose is highly correlated with the anatomy. For instance, the closer a critical organ is to the tumor, the higher the dose to this organ. To learn the correlation between patient anatomies and planning dose, Wu et al. [165] introduced the concept of the overlap volume histogram and established its relationship with the dose–volume histogram (DVH). Zhu et al. [166] and later Yuan et al. [160] used machine-learning methods, such as support vector regression, to predict the dose. Principal component analysis (PCA) is performed on the spatial and volumetric input features to identify the most important anatomical features and avoid overfitting with limited training samples. PCA uses matrix operations to reduce a number of potentially correlated variables to a small number of uncorrelated ones. The accuracies of various dose prediction methods were previously compared [167]. In addition to these direct regressional learning methods, ANNs have been used to predict dose distributions [162], showing similar performance for simple cases, including brain and prostate cancers. However, the prediction performance deteriorates for large regions of interest and complex cases. As in segmentation tasks, the 3D dose cloud is correlated with the underlying anatomy and should deform with the anatomy of a new patient. In an atlas-based dose prediction study [163], the CT features in the training set were mapped to the testing set. Multiple atlases exist in the training set, which results in probabilistic dose estimates, where the most likely voxel dose is determined using a conditional random field, a method for predicting voxel association based on contextual information.
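The PCA step described above can be sketched directly with an SVD. The feature matrix below is synthetic: 40 "patients" with 6 correlated anatomical features generated from 2 latent degrees of freedom, standing in for quantities such as OAR-to-target distances and volumes.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical anatomical features: 40 patients x 6 correlated features
# driven by only 2 underlying degrees of freedom plus a little noise.
latent = rng.standard_normal((40, 2))
mixing = rng.standard_normal((2, 6))
features = latent @ mixing + 0.01 * rng.standard_normal((40, 6))

def pca_reduce(X, n_components):
    """PCA via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)          # variance ratio per component
    return Xc @ Vt[:n_components].T, explained

scores, explained = pca_reduce(features, 2)        # 6 features -> 2 uncorrelated scores
```

Because the six features are driven by two latent factors, the first two components absorb essentially all the variance, which is exactly the dimensionality reduction that lets a dose-prediction regressor train on few samples without overfitting.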

The common drawbacks of these conventional methods include sensitivity to parameter tuning, low accuracy in complex cases, increasing training data requirements as more features are included, and slow performance. This unique RT problem is an ideal inquiry for deep learning, which learns implicit anatomical, imaging, and dosimetric features with a relatively straightforward training process. Deep learning for the dose prediction of various clinical cases, including H&N and prostate cancer, and different treatment modalities, including IMRT, volumetric modulated arc therapy, and helical TomoTherapy, has been reported [116,168–173]. Fig. 5 shows the predicted dose for an H&N cancer patient using various neural nets [116]. The dose predicted using a hyperdense U-net showed the highest similarity to the actual dose. U-net is a type of CNN originally developed to solve the image segmentation problem. In addition to the contracting layers in a CNN, the pooling operation is replaced by upsampling operators in the expansive path back to the original image resolution. The symmetric contracting and expansive paths form a U shape, hence the name U-net. Paired CT and planning dose images are used to train the network, which is then applied to predict the dose for a new CT.

Treatment planning can be guided by the predicted dose semi- or fully automatically. DVH constraint points can be extracted from the predicted dose and used in commercial planning systems [174–184], or the 3D voxel doses can be used to drive the optimization [185,186]. Particularly for the robotic platform, Landers et al. showed that dose prediction can be accurate because of isotropic distribution [187], and the combined beam orientation and fluence map optimization can be performed fully automatically [186].
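Extracting DVH constraint points from a predicted dose is mechanical once the voxel doses are available. The sketch below computes a cumulative DVH and the standard D-at-volume and volume-at-dose points (e.g., D95, V20) from a synthetic organ dose array; the uniform dose distribution is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
organ_dose = rng.uniform(0, 60, size=5000)   # hypothetical predicted voxel doses (Gy)

def cumulative_dvh(dose, bins=None):
    """Cumulative DVH: fraction of organ volume receiving >= each dose level."""
    if bins is None:
        bins = np.linspace(0, dose.max(), 100)
    volume_fraction = np.array([(dose >= d).mean() for d in bins])
    return bins, volume_fraction

def dose_at_volume(dose, volume_pct):
    """D_v: minimum dose received by the hottest v% of the volume (e.g., D95)."""
    return np.percentile(dose, 100 - volume_pct)

def volume_at_dose(dose, level):
    """V_d: percent of volume receiving at least `level` Gy (e.g., V20)."""
    return 100.0 * (dose >= level).mean()

bins, vf = cumulative_dvh(organ_dose)
d95 = dose_at_volume(organ_dose, 95)   # dose covering 95% of the volume
v20 = volume_at_dose(organ_dose, 20)   # percent of volume receiving >= 20 Gy
```

Feeding a handful of such points into a commercial optimizer as objectives is the semi-automated route described above; driving the optimizer with the full 3D voxel dose is the fully automated alternative.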

AI for patient workflow management and QA

Quantitative accuracy is important to the reproducibility and quality of RT. QA is a broad topic involving geometrical accuracy, machine output, energy, process consistency, and end-to-end treatment plan validation. Although the physical measurement of the machine output, energy, and profile will always be needed, the manual consistency check is a tedious, labor-intensive, and non-bulletproof process. RT planning and delivery involve several parameters, such as the prescription, tumor location, plan monitor units, beam arrangement, and dose modifiers. A single error in the process can result in devastating consequences for the patient. Therefore, automated QA of the process is desired. A straightforward approach to this problem is to design a checklist including all relevant parameters and compare the created treatment plan with the expected values [188]. For the checklist to be effective, the input needs to be structured in the electronic medical record system, which is not always the case. Often, pertinent diagnosis and prescription information is embedded in clinical notes written in a natural (human) language, which is unstructured and can be difficult to search for specific treatment information. A natural language processing tool [189] converts the unstructured natural language into a structured, machine-readable form, from which key information can be readily extracted for quality checks. The checklist method may be effective in simple cases where all variables can be enumerated. However, for complex cases, such as IMRT, the possible variables exceed the checklist capacity. In such cases, for patient-specific IMRT QA, the complex plan is delivered to a phantom, and the measured dose is compared with the expected dose. Predicting and understanding the QA results are difficult tasks to which deep-learning methods have recently been applied [190–192].
In these studies, electronic portal images (EPIs) of the dose delivered with a given MLC configuration, with or without introduced errors, were used to train a vanilla CNN, which was then used to classify an unseen EPI. Deep learning was shown to predict the QA passing rate for a patient, and CNNs were used to classify the presence or absence of introduced RT treatment delivery errors from patient-specific gamma images. Fig. 6 shows an example of using the deep-learning method for error classification; this method offers superior discriminability compared with the handcrafted approach [191].
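The gamma passing rate that these networks predict is itself a well-defined computation. The 1D sketch below implements a global gamma analysis (3%/3 mm criteria) on a synthetic beam profile; clinical tools operate on 2D or 3D dose grids with interpolation, but the combined dose-difference/distance-to-agreement metric is the same.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Global 1D gamma analysis: for each reference point, search the measured
    profile for the minimum combined dose-difference / distance-to-agreement
    metric; a point passes when its gamma is <= 1."""
    norm = dd * ref.max()                      # global dose-difference criterion
    pos = np.arange(ref.size) * spacing_mm
    gammas = np.empty(ref.size)
    for i, (r, p) in enumerate(zip(ref, pos)):
        dose_term = (meas - r) / norm
        dist_term = (pos - p) / dta_mm
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return 100.0 * np.mean(gammas <= 1.0)

x = np.arange(100) * 1.0
ref = np.exp(-((x - 50) ** 2) / 200)           # reference beam profile
perfect = gamma_pass_rate(ref, ref.copy())     # identical delivery
shifted = gamma_pass_rate(ref, np.roll(ref, 8))  # an 8 mm positional error
```

An identical delivery passes everywhere; the 8 mm shift fails in the high-gradient regions while the flat tails still pass, illustrating why a single pass-rate number can hide clinically distinct error types, and why classifying the error from the full gamma image is of interest.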

At the time of plan delivery, online images are first obtained for registration and patient positioning. Rigid registration is straightforward and remains applicable for well-immobilized patients and anatomies showing minimal motion relative to the bony anatomy. In complicated cases where non-rigid motion is substantial, deformable registration and adaptive planning are needed; both can benefit from the AI techniques described previously. For anatomies showing substantial intrafractional motion, gated RT is performed, in which the treatment beam is turned on only when the tumor is within a predefined window, minimizing the treated volume without compromising tumor coverage. Evidently, online images showing the tumor location are desirable for such purposes. AI also facilitates the acquisition and reconstruction of high-quality tumor-tracking images or high-dimensional images, as described in Section “AI for image acquisition.” In addition to the benefits in image quality, pre-trained AI models can be applied in near real time, making them particularly well suited for online image guidance procedures.
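The gating decision itself is simple; the hard part is obtaining a reliable real-time tumor position, which is where AI-assisted imaging helps. A toy sketch with hypothetical positions in millimeters:

```python
def beam_on(tumor_pos, window_center, window_half_width):
    """Gated-delivery decision: the beam is enabled only while the tracked
    tumor position lies inside the predefined gating window (all in mm)."""
    return abs(tumor_pos - window_center) <= window_half_width

# A hypothetical breathing trace; treatment is gated near end-exhale (0 mm).
trace = [0.0, 2.0, 6.0, 10.0, 6.0, 2.0, 0.0]
gate = [beam_on(p, window_center=0.0, window_half_width=3.0) for p in trace]
print(gate)  # the beam is on only near end-exhale
duty_cycle = sum(gate) / len(gate)  # fraction of the cycle with the beam on
```

A narrower window reduces the treated volume but also the duty cycle, lengthening delivery, which is the trade-off gated RT must balance.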

AI for RT outcome prediction

Although statistical prognoses exist at the patient-cohort level, predicting the outcome of an individual RT patient is important. The prediction can be used to personalize the treatment to optimize local tumor control or minimize normal tissue toxicity [193,194]. The prediction can be made using imaging features [195], genomic features [196], or a combination of different types of features [197]. Classical machine-learning methods, including the least absolute shrinkage and selection operator (LASSO) and the support vector machine (SVM), have been used to associate imaging features with outcomes [191,198–211]. LASSO is a regression tool that “shrinks” coefficient estimates toward zero. Through sparsity regularization, LASSO encourages models with few parameters, which are more robust than models using many parameters. SVM is another machine-learning method for regression and classification analyses; it attempts to find the hyperplane in the high-dimensional data space that maximally separates data belonging to distinct clusters. Although these studies showed the potential of outcome prediction and in certain cases identified tumor subvolumes that could benefit from a selective radiation boost, conventional machine-learning methods rely on handcrafted features that are not robust across patient cohorts, thus limiting their generalizability. Deep learning is well suited to establish the correlation between images and outcomes with improved accuracy [169,212–216]. Using a deep-learning architecture, the authors stratified stage III lung cancer patients into high- and low-risk groups based on their pretreatment and follow-up images [217]. The network used a base ResNet CNN pre-trained on natural images. Patient CT images from individual time points were fed into separate CNNs, whose outputs were then fed into a recurrent neural network (RNN). Prediction power increased as additional follow-up images were incorporated.
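The sparsity described above comes from a soft-thresholding operator. A plain-Python coordinate-descent sketch on toy data (all values hypothetical) shows an uninformative feature being driven exactly to zero:

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator: shrink toward zero, and return exactly
    zero when |rho| <= lam (the source of LASSO's sparsity)."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, iters=50):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent.

    X is a list of feature rows and y the targets; written in plain Python
    for clarity, not efficiency.
    """
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the residual excluding feature j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z
    return w

# Toy data: the outcome depends only on the first feature; the second is noise.
X = [[1.0, 0.1], [2.0, -0.1], [3.0, 0.05], [4.0, -0.05]]
y = [1.0, 2.0, 3.0, 4.0]
w = lasso_coordinate_descent(X, y, lam=0.5)
print(w)  # the second coefficient is exactly zero
```

Library implementations such as scikit-learn's `Lasso` use the same coordinate-descent idea with additional numerical refinements.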

In addition to predicting tumor control, deep learning has been used to predict toxicity. Zhen et al. [122] used a pre-trained CNN to predict rectal toxicity from RT. Given that the radiation dose to the rectal wall is more important than the volumetric dose, rectum surface meshing and deformable registration were used to map the 3D planning dose to an unfolded 2D rectal wall. The model was trained and tested on 42 cervical cancer patients and achieved areas under the curve (AUC) of 0.7 and 0.89 in 10-fold and leave-one-out cross-validation, respectively. Ibragimov et al. [120] used the CT images, planning dose, and clinical information of 125 liver stereotactic body RT patients to train a CNN for toxicity prediction. Compared with conventional machine-learning methods, the CNN achieved higher AUC performance. Deep-learning methods can also make false predictions. Notably, in many published studies, a relatively small patient cohort and intra-dataset cross-validation artificially boost the reported performance of machine-learning methods. Rigorous tests on independent data sets, and preferably on prospectively accrued patients, will be essential to demonstrate the value of AI in outcome prediction.
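The AUC figures quoted above can be computed directly from predicted risk scores via the rank-sum formulation; a minimal sketch with hypothetical scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability that a randomly chosen positive case receives a higher
    predicted risk score than a randomly chosen negative case (ties = 1/2).
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for four patients (label 1 = toxicity observed).
print(auc([0, 0, 1, 1], [0.10, 0.40, 0.35, 0.80]))  # 0.75
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect discrimination, which is why the 0.7 and 0.89 values above indicate moderate to strong predictive performance.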

Discussion

This technical review has focused heavily on deep learning, which is far from the entire extent of AI, because the recent rapid development of deep learning has substantially accelerated AI research and its application in RT. Numerous publications appear on a daily basis, and applications are further fueled by the open-source culture of the AI community.

The many roles of AI in RT surveyed in this review can be broadly classified into three categories. The first category includes mature tasks that have long been performed with traditional methods, including segmentation, treatment planning, and QA. Here, AI replaces tedious, repetitive, and error-prone manual tasks with automation; it alleviates the burden on human operators in well-resourced clinics and enables advanced treatments in resource-limited ones. AI adoption in this category is rapid, and various products, including AI-assisted segmentation and treatment planning, are already in the pipeline for clinical release. The second category improves existing functions, such as image reconstruction for low-dose CBCT, fast MRI, and image synthesis. Different from the first category, AI here augments the results of traditional algorithms and tools; non-machine-learning methods for image reconstruction and synthesis existed before the recent wave of AI but were limited in one or more respects. The third category provides functions that are minimally available in the clinic, such as radiomics for individualized outcome prediction. Currently, outcome prediction is largely performed at the population level based on patient clinical information and genetic data. AI prediction using individual patient imaging biomarkers is a new and exciting opportunity to address these unmet needs. However, clinical adoption of AI for the new functions can be slow for the following reasons.

A major criticism of AI, particularly deep learning, is that deep neural networks are opaque: gaining insight into the features that contribute to the results is difficult. For tasks in the first category with clear, verifiable endpoints, this criticism may not be a major issue; for instance, segmentation results can be intuitively examined and validated. Understanding the networks is more important in the second and third categories, where validation is less intuitive or not readily available, particularly for prospective patient cohorts. In image reconstruction for a new patient, deep-learning reconstruction algorithms could introduce image distortions and artifacts that are indistinguishable from actual anatomical features. In outcome prediction, understanding the features that contribute to the outcome is important for adapting treatments accordingly. Interpretable networks [218] and the biological basis of imaging features [219] need to be further researched for this purpose. Another major bottleneck for AI applications in RT is the data available for training and testing. Unlike tasks such as segmentation, image synthesis, or dose prediction, for which a robust network can be trained on fewer than 100 patients, easily obtainable in most RT clinics, outcome prediction requires data from a large number of patients, which are difficult to obtain. In the AI research community, the time and effort needed to procure data often notably exceed those needed for model building and training. Without patient data of high quality and quantity, and given the opacity of deep neural networks, the robustness and generalizability of many outcome-prediction models cannot be further tested and improved. The best way to overcome this challenge is to contribute to public databases, where the data burden is shared by a community and the impact of the data is multiplied.
The best examples of public data sharing are The Cancer Genome Atlas and The Cancer Imaging Archive [220–222], which make omics and medical imaging data available for researchers to test, develop, and compare their hypotheses and methods. These well-curated data have resulted in thousands of publications, many of whose authors used the data directly in their respective studies.

This review focused on machine-learning algorithms. Notably, AI cannot be narrowly viewed as machine-learning algorithms alone. For instance, automated treatment delivery cannot be achieved with algorithms by themselves: medical robotic hardware, beyond what was mentioned in the introduction, needs to be developed for automated patient set-up and treatment delivery, and hardware development can be slow. Another important omission from this technological review is informatics and machine-learning research based on cell biology, such as radiosensitivity related to gene expression, which can be appreciated in the review by Pavlopoulou et al. [223].

Apart from AI, fundamental physics, biology, and computational approaches will always be critical. AI will not replace research on the biological effects of heavy ions and ultra-high-dose-rate radiation sources or deterministic computation such as dose calculation. The efficiency gained from AI will free time and effort for investment in basic research while maintaining consistent patient care. This is a less evident but equally important benefit of AI in RT.

Conclusions

This article reviews pertinent machine-learning and AI research and their clinical applications in RT. The review follows the typical workflow of RT, including image acquisition and processing, target and OAR delineation, plan creation and delivery, and RT outcome prediction. In addition to clinical applications, representative AI methods are introduced. The article can serve as an introduction for readers who are interested in learning more about modern RT and AI research.

References

[1]

Jaffray DA, Siewerdsen JH, Wong JW, Martinez AA. Flat-panel cone-beam computed tomography for image-guided radiation therapy. Int J Radiat Oncol Biol Phys 2002; 53(5): 1337–1349

[2]

Mutic S, Dempsey JF. The ViewRay system: magnetic resonance-guided and controlled radiotherapy. Semin Radiat Oncol 2014; 24(3): 196–199

[3]

Brahme A. Current algorithms for computed electron beam dose planning. Radiother Oncol 1985; 3(4): 347–362

[4]

Brahme A, Andreo P. Dosimetry and quality specification of high energy photon beams. Acta Radiol Oncol 1986; 25(3): 213–223

[5]

Brahme A, Roos JE, Lax I. Solution of an integral equation encountered in rotation therapy. Phys Med Biol 1982; 27(10): 1221–1229

[6]

Brahme A, Agren AK. Optimal dose distribution for eradication of heterogeneous tumours. Acta Oncol 1987; 26(5): 377–385

[7]

Woods K, Lee P, Kaprealian T, Yang I, Sheng K. Cochlea-sparing acoustic neuroma treatment with 4π radiation therapy. Adv Radiat Oncol 2018; 3(2): 100–107

[8]

Yu VY, Landers A, Woods K, Nguyen D, Cao M, Du D, Chin RK, Sheng K, Kaprealian TB. A prospective 4π radiation therapy clinical study in recurrent high-grade glioma patients. Int J Radiat Oncol Biol Phys 2018; 101(1): 144–151

[9]

Murzin VL, Woods K, Moiseenko V, Karunamuni R, Tringale KR, Seibert TM, Connor MJ, Simpson DR, Sheng K, Hattangadi-Gluth JA. 4π plan optimization for cortical-sparing brain radiotherapy. Radiother Oncol 2018; 127(1): 128–135

[10]

Tran A, Woods K, Nguyen D, Yu VY, Niu T, Cao M, Lee P, Sheng K. Predicting liver SBRT eligibility and plan quality for VMAT and 4π plans. Radiat Oncol 2017; 12(1): 70

[11]

Woods K, Nguyen D, Tran A, Yu VY, Cao M, Niu T, Lee P, Sheng K. Viability of non-coplanar VMAT for liver SBRT as compared to coplanar VMAT and beam orientation optimized 4π IMRT. Adv Radiat Oncol 2016; 1(1): 67–75

[12]

Rwigema JC, Nguyen D, Heron DE, Chen AM, Lee P, Wang PC, Vargo JA, Low DA, Huq MS, Tenn S, Steinberg ML, Kupelian P, Sheng K. 4π noncoplanar stereotactic body radiation therapy for head-and-neck cancer: potential to improve tumor control and late toxicity. Int J Radiat Oncol Biol Phys 2015; 91(2): 401–409

[13]

Nguyen D, Rwigema JC, Yu VY, Kaprealian T, Kupelian P, Selch M, Lee P, Low DA, Sheng K. Feasibility of extreme dose escalation for glioblastoma multiforme using 4π radiotherapy. Radiat Oncol 2014; 9(1): 239

[14]

Dong P, Lee P, Ruan D, Long T, Romeijn E, Low DA, Kupelian P, Abraham J, Yang Y, Sheng K. 4π noncoplanar stereotactic body radiation therapy for centrally located or larger lung tumors. Int J Radiat Oncol Biol Phys 2013; 86(3): 407–413

[15]

Dong P, Lee P, Ruan D, Long T, Romeijn E, Yang Y, Low D, Kupelian P, Sheng K. 4π non-coplanar liver SBRT: a novel delivery technique. Int J Radiat Oncol Biol Phys 2013; 85(5): 1360–1366

[16]

O’Connor D, Yu V, Nguyen D, Ruan D, Sheng K. Fraction-variant beam orientation optimization for non-coplanar IMRT. Phys Med Biol 2018; 63(4): 045015

[17]

Keyrilainen J, Fernandez M, Karjalainen-Lindsberg ML, Virkkunen P, Leidenius M, von Smitten K, Sipila P, Fiedler S, Suhonen H, Suortti P, Bravin A. Toward high-contrast breast CT at low radiation dose. Radiology 2008; 249(1): 321–327

[18]

Chen H, Zhang Y, Chen Y, Zhang J, Zhang W, Sun H, Lv Y, Liao P, Zhou J, Wang G. LEARN: learned experts’ assessment-based reconstruction network for sparse-data CT. IEEE Trans Med Imaging 2018; 37(6): 1333–1347

[19]

Ha S, Mueller K. A look-up table-based ray integration framework for 2-D/3-D forward and back projection in X-ray CT. IEEE Trans Med Imaging 2018; 37(2): 361–371

[20]

He J, Yang Y, Wang Y, Zeng D, Bian Z, Zhang H, Sun J, Xu Z, Ma J. Optimizing a parameterized plug-and-play ADMM for iterative low-dose CT reconstruction. IEEE Trans Med Imaging 2019; 38(2): 371–382

[21]

Kang E, Chang W, Yoo J, Ye JC. Deep convolutional framelet denosing for low-dose CT via wavelet residual network. IEEE Trans Med Imaging 2018; 37(6): 1358–1369

[22]

Li S, Zeng D, Peng J, Bian Z, Zhang H, Xie Q, Wang Y, Liao Y, Zhang S, Huang J, Meng D, Xu Z, Ma J. An efficient iterative cerebral perfusion CT reconstruction via low-rank tensor decomposition with spatial-temporal total variation regularization. IEEE Trans Med Imaging 2019; 38(2): 360–370

[23]

Mechlem K, Ehn S, Sellerer T, Braig E, Munzel D, Pfeiffer F, Noel PB. Joint statistical iterative material image reconstruction for spectral computed tomography using a semi-empirical forward model. IEEE Trans Med Imaging 2018; 37(1): 68–80

[24]

Cai A, Li L, Zheng Z, Wang L, Yan B. Block-matching sparsity regularization-based image reconstruction for low-dose computed tomography. Med Phys 2018; 45(6): 2439–2452

[25]

Kang E, Koo HJ, Yang DH, Seo JB, Ye JC. Cycle-consistent adversarial denoising network for multiphase coronary CT angiography. Med Phys 2019; 46(2): 550–562

[26]

van Nierop BJ, Prince JF, van Rooij R, van den Bosch M, Lam M, de Jong H. Accuracy of SPECT/CT-based lung dose calculation for Holmium-166 hepatic radioembolization before OSEM convergence. Med Phys 2018; 45(8): 3871–3879

[27]

Gu C, Zeng D, Lin J, Li S, He J, Zhang H, Bian Z, Niu S, Zhang Z, Huang J, Chen B, Zhao D, Chen W, Ma J. Promote quantitative ischemia imaging via myocardial perfusion CT iterative reconstruction with tensor total generalized variation regularization. Phys Med Biol 2018; 63(12): 125009

[28]

Holbrook M, Clark DP, Badea CT. Low-dose 4D cardiac imaging in small animals using dual source micro-CT. Phys Med Biol 2018; 63(2): 025009

[29]

Yu W, Wang C, Nie X, Zeng D. Sparsity-induced dynamic guided filtering approach for sparse-view data toward low-dose X-ray computed tomography. Phys Med Biol 2018; 63(23): 235016

[30]

Bian J, Yang K, Boone JM, Han X, Sidky EY, Pan X. Investigation of iterative image reconstruction in low-dose breast CT. Phys Med Biol 2014; 59(11): 2659–2685

[31]

Shan H, Zhang Y, Yang Q, Kruger U, Kalra MK, Sun L, Cong W, Wang G. 3-D convolutional encoder-decoder network for low-dose CT via transfer learning from a 2-D trained network. IEEE Trans Med Imaging 2018; 37(6): 1522–1534

[32]

Yang Q, Yan P, Zhang Y, Yu H, Shi Y, Mou X, Kalra MK, Zhang Y, Sun L, Wang G. Low-dose CT image denoising using a generative adversarial network with wasserstein distance and perceptual loss. IEEE Trans Med Imaging 2018; 37(6): 1348–1357

[33]

Yi X, Babyn P. Sharpness-aware low-dose CT denoising using conditional generative adversarial network. J Digit Imaging 2018; 31(5): 655–669

[34]

You C, Yang Q, Shan H, Gjesteby L, Li G, Ju S, Zhang Z, Zhao Z, Zhang Y, Cong W, Wang G. Structurally-sensitive multi-scale deep neural network for low-dose CT denoising. IEEE Access 2018; 6: 41839–41855

[35]

Kang E, Min J, Ye JC. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med Phys 2017; 44(10): e360–e375

[36]

Chen H, Zhang Y, Kalra MK, Lin F, Chen Y, Liao P, Zhou J, Wang G. Low-dose CT with a residual encoder-decoder convolutional neural network. IEEE Trans Med Imaging 2017; 36(12): 2524–2535

[37]

Chen H, Zhang Y, Zhang W, Liao P, Li K, Zhou J, Wang G. Low-dose CT via convolutional neural network. Biomed Opt Express 2017; 8(2): 679–694

[38]

Lyu Q, Yang C, Gao H, Xue Y, O’Connor D, Niu T, Sheng K. Technical Note: Iterative megavoltage CT (MVCT) reconstruction using block-matching 3D-transform (BM3D) regularization. Med Phys 2018; 45(6): 2603–2610

[39]

Lyu Q, Ruan D, Hoffman J, Neph R, McNitt-Gray M, Sheng K. Iterative reconstruction for low dose CT using Plug-and-Play alternating direction method of multipliers (ADMM) framework. SPIE Medical Imaging: Image Processing 2019; 2019: 10949

[40]

Stankovic U, Ploeger LS, van Herk M, Sonke JJ. Optimal combination of anti-scatter grids and software correction for CBCT imaging. Med Phys 2017; 44(9): 4437–4451

[41]

Xu J, Sisniega A, Zbijewski W, Dang H, Stayman JW, Wang X, Foos DH, Aygun N, Koliatsos VE, Siewerdsen JH. Modeling and design of a cone-beam CT head scanner using task-based imaging performance optimization. Phys Med Biol 2016; 61(8): 3180–3207

[42]

Zhang H, Kong F, Ren L, Jin JY. An inter-projection interpolation (IPI) approach with geometric model restriction to reduce image dose in cone beam CT (CBCT). In: Zhang YJ, Tavares JMRS. Computational Modeling of Objects Presented in Images. Fundamentals, Methods, and Applications. CompIMAGE 2014. Lecture Notes in Computer Science, vol 8641. Springer, Cham. 2014: 12–23

[43]

Stankovic U, van Herk M, Ploeger LS, Sonke JJ. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid. Med Phys 2014; 41(6): 061910

[44]

Sisniega A, Zbijewski W, Badal A, Kyprianou IS, Stayman JW, Vaquero JJ, Siewerdsen JH. Monte Carlo study of the effects of system geometry and antiscatter grids on cone-beam CT scatter distributions. Med Phys 2013; 40(5): 051915

[45]

Ren L, Yin FF, Chetty IJ, Jaffray DA, Jin JY. Feasibility study of a synchronized-moving-grid (SMOG) system to improve image quality in cone-beam computed tomography (CBCT). Med Phys 2012; 39(8): 5099–5110

[46]

Jin JY, Ren L, Liu Q, Kim J, Wen N, Guan H, Movsas B, Chetty IJ. Combining scatter reduction and correction to improve image quality in cone-beam computed tomography (CBCT). Med Phys 2010; 37(11): 5634–5644

[47]

Sun M, Star-Lack JM. Improved scatter correction using adaptive scatter kernel superposition. Phys Med Biol 2010; 55(22): 6695–6720

[48]

Lu Y, Peng B, Lau BA, Hu YH, Scaduto DA, Zhao W, Gindi G. A scatter correction method for contrast-enhanced dual-energy digital breast tomosynthesis. Phys Med Biol 2015; 60(16): 6323–6354

[49]

Dang H, Stayman JW, Sisniega A, Xu J, Zbijewski W, Wang X, Foos DH, Aygun N, Koliatsos VE, Siewerdsen JH. Statistical reconstruction for cone-beam CT with a post-artifact-correction noise model: application to high-quality head imaging. Phys Med Biol 2015; 60(16): 6153–6175

[50]

Watson PG, Mainegra-Hing E, Tomic N, Seuntjens J. Implementation of an efficient Monte Carlo calculation for CBCT scatter correction: phantom study. J Appl Clin Med Phys 2015; 16(4): 216–227

[51]

Kim C, Park M, Sung Y, Lee J, Choi J, Cho S. Data consistency-driven scatter kernel optimization for X-ray cone-beam CT. Phys Med Biol 2015; 60(15): 5971–5994

[52]

Xu Y, Bai T, Yan H, Ouyang L, Pompos A, Wang J, Zhou L, Jiang SB, Jia X. A practical cone-beam CT scatter correction method with optimized Monte Carlo simulations for image-guided radiation therapy. Phys Med Biol 2015; 60(9): 3567–3587

[53]

Sisniega A, Zbijewski W, Xu J, Dang H, Stayman JW, Yorkston J, Aygun N, Koliatsos V, Siewerdsen JH. High-fidelity artifact correction for cone-beam CT imaging of the brain. Phys Med Biol 2015; 60(4): 1415–1439

[54]

Ritschl L, Fahrig R, Knaup M, Maier J, Kachelriess M. Robust primary modulation-based scatter estimation for cone-beam CT. Med Phys 2015; 42(1): 469–478

[55]

Bootsma GJ, Verhaegen F, Jaffray DA. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting. Med Phys 2015; 42(1): 54–68

[56]

Zbijewski W, Sisniega A, Stayman JW, Muhit A, Thawait G, Packard N, Senn R, Yang D, Yorkston J, Carrino JA, Siewerdsen JH. High-performance soft-tissue imaging in extremity cone-beam CT. Proc SPIE Int Soc Opt Eng 2014; 9033: 903329

[57]

Pawlowski JM, Ding GX. An algorithm for kilovoltage X-ray dose calculations with applications in kV-CBCT scans and 2D planar projected radiographs. Phys Med Biol 2014; 59(8): 2041–2058

[58]

Li J, Yao W, Xiao Y, Yu Y. Feasibility of improving cone-beam CT number consistency using a scatter correction algorithm. J Appl Clin Med Phys 2013; 14(6): 4346

[59]

Aootaphao S, Thongvigitmanee SS, Rajruangrabin J, Junhunee P, Thajchayapong P. Experiment-based scatter correction for cone-beam computed tomography using the statistical method. Conf Proc IEEE Eng Med Biol Soc 2013; 2013: 5087–5090

[60]

Thing RS, Bernchou U, Mainegra-Hing E, Brink C. Patient-specific scatter correction in clinical cone beam computed tomography imaging made possible by the combination of Monte Carlo simulations and a ray tracing algorithm. Acta Oncol 2013; 52(7): 1477–1483

[61]

Muhit AA, Arora S, Ogawa M, Ding Y, Zbijewski W, Stayman JW, Thawait G, Packard N, Senn R, Yang D, Yorkston J, Bingham CO 3rd, Means K, Carrino JA, Siewerdsen JH. Peripheral quantitative CT (pQCT) using a dedicated extremity cone-beam CT scanner. Proc SPIE Int Soc Opt Eng 2013; 8672: 867203

[62]

Meng B, Lee H, Xing L, Fahimian BP. Single-scan patient-specific scatter correction in computed tomography using peripheral detection of scatter and compressed sensing scatter retrieval. Med Phys 2013; 40(1): 011907

[63]

Boylan CJ, Marchant TE, Stratford J, Malik J, Choudhury A, Shrimali R, Rodgers J, Rowbottom CG. A megavoltage scatter correction technique for cone-beam CT images acquired during VMAT delivery. Phys Med Biol 2012; 57(12): 3727–3739

[64]

Niu T, Al-Basheer A, Zhu L. Quantitative cone-beam CT imaging in radiation therapy using planning CT as a prior: first patient studies. Med Phys 2012; 39(4): 1991–2000

[65]

Hunter AK, McDavid WD. Characterization and correction of cupping effect artefacts in cone beam CT. Dentomaxillofac Radiol 2012; 41(3): 217–223

[66]

Niu T, Zhu L. Scatter correction for full-fan volumetric CT using a stationary beam blocker in a single full scan. Med Phys 2011; 38(11): 6027–6038

[67]

van Herk M, Ploeger L, Sonke JJ. A novel method for megavoltage scatter correction in cone-beam CT acquired concurrent with rotational irradiation. Radiother Oncol 2011; 100(3): 365–369

[68]

Rührnschopf EP, Klingenbeck K. A general framework and review of scatter correction methods in X-ray cone-beam computerized tomography. Part 1: Scatter compensation approaches. Med Phys 2011; 38(7): 4296–4311

[69]

Elstrøm UV, Muren LP, Petersen JB, Grau C. Evaluation of image quality for different kV cone-beam CT acquisition and reconstruction methods in the head and neck region. Acta Oncol 2011; 50(6): 908–917

[70]

Sun M, Nagy T, Virshup G, Partain L, Oelhafen M, Star-Lack J. Correction for patient table-induced scattered radiation in cone-beam computed tomography (CBCT). Med Phys 2011; 38(4): 2058–2073

[71]

Wang J, Mao W, Solberg T. Scatter correction for cone-beam computed tomography using moving blocker strips: a preliminary study. Med Phys 2010; 37(11): 5792–5800

[72]

Lazos D, Williamson JF. Monte Carlo evaluation of scatter mitigation strategies in cone-beam CT. Med Phys 2010; 37(10): 5456–5470

[73]

Niu T, Sun M, Star-Lack J, Gao H, Fan Q, Zhu L. Shading correction for on-board cone-beam CT in radiation therapy using planning MDCT images. Med Phys 2010; 37(10): 5395–5406

[74]

Mainegra-Hing E, Kawrakow I. Variance reduction techniques for fast Monte Carlo CBCT scatter correction calculations. Phys Med Biol 2010; 55(16): 4495–4507

[75]

Yu L, Vrieze TJ, Bruesewitz MR, Kofler JM, DeLone DR, Pallanch JF, Lindell EP, McCollough CH. Dose and image quality evaluation of a dedicated cone-beam CT system for high-contrast neurologic applications. AJR Am J Roentgenol 2010; 194(2): W193–201

[76]

Guan H, Dong H. Dose calculation accuracy using cone-beam CT (CBCT) for pelvic adaptive radiotherapy. Phys Med Biol 2009; 54(20): 6239–6250

[77]

Reitz I, Hesse BM, Nill S, Tucking T, Oelfke U. Enhancement of image quality with a fast iterative scatter and beam hardening correction method for kV CBCT. Z Med Phys 2009; 19(3): 158–172

[78]

Poludniowski G, Evans PM, Hansen VN, Webb S. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT. Phys Med Biol 2009; 54(12): 3847–3864

[79]

Li H, Mohan R, Zhu XR. Scatter kernel estimation with an edge-spread function method for cone-beam computed tomography imaging. Phys Med Biol 2008; 53(23): 6729–6748

[80]

Rinkel J, Gerfault L, Esteve F, Dinten JM. A new method for X-ray scatter correction: first assessment on a cone-beam CT experimental setup. Phys Med Biol 2007; 52(15): 4633–4652

[81]

Letourneau D, Wong R, Moseley D, Sharpe MB, Ansell S, Gospodarowicz M, Jaffray DA. Online planning and delivery technique for radiotherapy of spinal metastases using cone-beam CT: image quality and system performance. Int J Radiat Oncol Biol Phys 2007; 67(4): 1229–1237

[82]

Jarry G, Graham SA, Moseley DJ, Jaffray DJ, Siewerdsen JH, Verhaegen F. Characterization of scattered radiation in kV CBCT images using Monte Carlo simulations. Med Phys 2006; 33(11): 4320–4329

[83]

Siewerdsen JH, Daly MJ, Bakhtiar B, Moseley DJ, Richard S, Keller H, Jaffray DA. A simple, direct method for X-ray scatter estimation and correction in digital radiography and cone-beam CT. Med Phys 2006; 33(1): 187–197

[84]

Ning R, Tang X, Conover D. X-ray scatter correction algorithm for cone beam CT imaging. Med Phys 2004; 31(5): 1195–1202

[85]

Wang A, Maslowski A, Messmer P, Lehmann M, Strzelecki A, Yu E, Paysan P, Brehm M, Munro P, Star-Lack J, Seghers D. Acuros CTS: a fast, linear Boltzmann transport equation solver for computed tomography scatter — Part II: system modeling, scatter correction, and optimization. Med Phys 2018; 45(5): 1914–1925

[86]

Maslowski A, Wang A, Sun M, Wareing T, Davis I, Star-Lack J. Acuros CTS: a fast, linear Boltzmann transport equation solver for computed tomography scatter — Part I: core algorithms and validation. Med Phys 2018; 45(5): 1899–1913

[87]

Harms J, Lei Y, Wang T, Zhang R, Zhou J, Tang X, Curran WJ, Liu T, Yang X. Paired cycle-GAN-based image correction for quantitative cone-beam CT. Med Phys 2019; 46(9): 3998–4009

[88]

Jiang Y, Yang C, Yang P, Hu X, Luo C, Xue Y, Xu L, Hu X, Zhang L, Wang J, Sheng K, Niu T. Scatter correction of cone-beam CT using a deep residual convolution neural network (DRCNN). Phys Med Biol 2019; 64(14): 145003

[89]

Liang X, Chen L, Nguyen D, Zhou Z, Gu X, Yang M, Wang J, Jiang S. Generating synthesized computed tomography (CT) from cone-beam computed tomography (CBCT) using CycleGAN for adaptive radiation therapy. Phys Med Biol 2019; 64(12): 125002

[90]

Nomura Y, Xu Q, Shirato H, Shimizu S, Xing L. Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network. Med Phys 2019; 46(7): 3142–3155

[91]

Hansen DC, Landry G, Kamp F, Li M, Belka C, Parodi K, Kurz C. ScatterNet: a convolutional neural network for cone-beam CT intensity correction. Med Phys 2018; 45(11): 4916–4926

[92]

Jiang Y, Yang C, Yang P, Hu X, Luo C, Xue Y, Xu L, Hu X, Zhang L, Wang J, Sheng K, Niu T. Scatter correction of cone-beam CT using a deep residual convolution neural network (DRCNN). Phys Med Biol 2019; 64(14): 145003

[93]

Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med 2002; 47(6): 1202–1210

[94]

Schreyer AG, Geissler A, Albrich H, Scholmerich J, Feuerbach S, Rogler G, Volk M, Herfarth H. Abdominal MRI after enteroclysis or with oral contrast in patients with suspected or proven Crohn’s disease. Clin Gastroenterol Hepatol 2004; 2(6): 491–497

[95]

Paganelli C, Kipritidis J, Lee D, Baroni G, Keall P, Riboldi M. Image-based retrospective 4D MRI in external beam radiotherapy: a comparative study with a digital phantom. Med Phys 2018; 45(7): 3161–3172

[96]

Paganelli C, Summers P, Gianoli C, Bellomi M, Baroni G, Riboldi M. A tool for validating MRI-guided strategies: a digital breathing CT/MRI phantom of the abdominal site. Med Biol Eng Comput 2017; 55(11): 2001–2014

[97]

Li G, Wei J, Olek D, Kadbi M, Tyagi N, Zakian K, Mechalakos J, Deasy JO, Hunt M. Direct comparison of respiration-correlated four-dimensional magnetic resonance imaging reconstructed using concurrent internal navigator and external bellows. Int J Radiat Oncol Biol Phys 2017; 97(3): 596–605

[98]

Bernatowicz K, Peroni M, Perrin R, Weber DC, Lomax A. Four-dimensional dose reconstruction for scanned proton therapy using liver 4DCT-MRI. Int J Radiat Oncol Biol Phys 2016; 95(1): 216–223

[99]

Glide-Hurst CK, Kim JP, To D, Hu Y, Kadbi M, Nielsen T, Chetty IJ. Four dimensional magnetic resonance imaging optimization and implementation for magnetic resonance imaging simulation. Pract Radiat Oncol 2015; 5(6): 433–442

[100]

Paganelli C, Summers P, Bellomi M, Baroni G, Riboldi M. Liver 4DMRI: a retrospective image-based sorting method. Med Phys 2015; 42(8): 4814–4821

[101]

Panandiker AS, Winchell A, Loeffler R, Song R, Rolen M, Hillenbrand C. 4DMRI provides more accurate renal motion estimation in IMRT in young children. Pract Radiat Oncol 2013; 3(2 Suppl 1): S1

[102]

Han F, Zhou Z, Du D, Gao Y, Rashid S, Cao M, Shaverdian N, Hegde JV, Steinberg M, Lee P, Raldow A, Low DA, Sheng K, Yang Y, Hu P. Respiratory motion-resolved, self-gated 4D-MRI using Rotating Cartesian K-space (ROCK): initial clinical experience on an MRI-guided radiotherapy system. Radiother Oncol 2018; 127(3): 467–473

[103]

Lustig M, Donoho D, Pauly JM. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Reson Med 2007; 58(6): 1182–1195

[104]

Lingala SG, Hu Y, DiBella E, Jacob M. Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Trans Med Imaging 2011; 30(5): 1042–1054

[105]

Otazo R, Candes E, Sodickson DK. Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components. Magn Reson Med 2015; 73(3): 1125–1136

[106]

Asif MS, Hamilton L, Brummer M, Romberg J. Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI. Magn Reson Med 2013; 70(3): 800–812

[107]

Zhao N, O’Connor D, Basarab A, Ruan D, Sheng K. Motion compensated dynamic MRI reconstruction with local affine optical flow estimation. IEEE Trans Biomed Eng 2019; 66(11): 3050–3059

[108]

Zhou Z, Han F, Ghodrati V, Gao Y, Yin W, Yang Y, Hu P. Parallel imaging and convolutional neural network combined fast MR image reconstruction: applications in low-latency accelerated real-time imaging. Med Phys 2019; 46(8): 3399–3413

[109]

Biswas S, Aggarwal HK, Jacob M. Dynamic MRI using model-based deep learning and SToRM priors: MoDL-SToRM. Magn Reson Med 2019; 82(1): 485–494

[110]

Xiang L, Chen Y, Chang W, Zhan Y, Lin W, Wang Q, Shen D. Deep learning based multi-modal fusion for fast MR reconstruction. IEEE Trans Biomed Eng 2019; 66(7): 2105–2114

[111]

Yang G, Yu S, Dong H, Slabaugh G, Dragotti PL, Ye X, Liu F, Arridge S, Keegan J, Guo Y, Firmin D. DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans Med Imaging 2018; 37(6): 1310–1321

[112]

Schlemper J, Caballero J, Hajnal JV, Price AN, Rueckert D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging 2018; 37(2): 491–503

[113]

Zhang D, Banalagay R, Wang J, Zhao Y, Noble JH, Dawant BM. Two-level training of a 3D U-Net for accurate segmentation of the intra-cochlear anatomy in head CTs with limited ground truth training data. Proc SPIE Int Soc Opt Eng 2019; 10949

[114]

Byra M, Wu M, Zhang X, Jang H, Ma YJ, Chang EY, Shah S, Du J. Knee menisci segmentation and relaxometry of 3D ultrashort echo time cones MR imaging using attention U-Net with transfer learning. Magn Reson Med 2020; 83(3): 1109–1122

[115]

Park J, Yun J, Kim N, Park B, Cho Y, Park HJ, Song M, Lee M, Seo JB. Fully automated lung lobe segmentation in volumetric chest CT with 3D U-Net: validation with intra- and extra-datasets. J Digit Imaging 2020; 33(1): 221–230

[116]

Nguyen D, Jia X, Sher D, Lin MH, Iqbal Z, Liu H, Jiang S. 3D radiotherapy dose prediction on head and neck cancer patients with a hierarchically densely connected U-net deep learning architecture. Phys Med Biol 2019; 64(6): 065020

[117]

Huang Q, Sun J, Ding H, Wang X, Wang G. Robust liver vessel extraction using 3D U-Net with variant dice loss function. Comput Biol Med 2018; 101: 153–162

[118]

Blanc-Durand P, Van Der Gucht A, Schaefer N, Itti E, Prior JO. Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study. PLoS One 2018; 13(4): e0195798

[119]

Boldrini L, Bibault JE, Masciocchi C, Shen Y, Bittner MI. Deep learning: a review for the radiation oncologist. Front Oncol 2019; 9: 977

[120]

Ibragimov B, Toesca D, Chang D, Yuan Y, Koong A, Xing L. Development of deep neural network for individualized hepatobiliary toxicity prediction after liver SBRT. Med Phys 2018; 45(10): 4763–4774

[121]

Valdes G, Interian Y. Comment on ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’. Phys Med Biol 2018; 63(6): 068001

[122]

Zhen X, Chen J, Zhong Z, Hrycushko B, Zhou L, Jiang S, Albuquerque K, Gu X. Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study. Phys Med Biol 2017; 62(21): 8246–8263

[123]

Epp ER, Weiss H, Djordjevic B, Santomasso A. The radiosensitivity of cultured mammalian cells exposed to single high intensity pulses of electrons in various concentrations of oxygen. Radiat Res 1972; 52(2): 324–332

[124]

Adrian G, Konradsson E, Lempart M, Back S, Ceberg C, Petersson K. The FLASH effect depends on oxygen concentration. Br J Radiol 2020; 93(1106): 20190702

[125]

Maxim PG, Tantawi SG, Loo BW Jr. PHASER: a platform for clinical translation of FLASH cancer radiotherapy. Radiother Oncol 2019; 139: 28–33

[126]

Vozenin MC, De Fornel P, Petersson K, Favaudon V, Jaccard M, Germond JF, Petit B, Burki M, Ferrand G, Patin D, Bouchaab H, Ozsahin M, Bochud F, Bailat C, Devauchelle P, Bourhis J. The advantage of FLASH radiotherapy confirmed in mini-pig and Cat-cancer patients. Clin Cancer Res 2019; 25(1): 35–42

[127]

Zhu YM, Dean AE, Horikoshi N, Heer C, Spitz DR, Gius D. Emerging evidence for targeting mitochondrial metabolic dysfunction in cancer therapy. J Clin Invest 2018; 128(9): 3682–3691

[128]

Alexander MS, Wilkes JG, Schroeder SR, Buettner GR, Wagner BA, Du J, Gibson-Corley K, O’Leary BR, Spitz DR, Buatti JM, Berg DJ, Bodeker KL, Vollstedt S, Brown HA, Allen BG, Cullen JJ. Pharmacologic ascorbate reduces radiation-induced normal tissue toxicity and enhances tumor radiosensitization in pancreatic cancer. Cancer Res 2018; 78(24): 6838–6851

[129]

Schoenfeld JD, Sibenaller ZA, Mapuskar KA, Wagner BA, Cramer-Morales KL, Furqan M, Sandhu S, Carlisle TL, Smith MC, Abu Hejleh T, Berg DJ, Zhang J, Keech J, Parekh KR, Bhatia S, Monga V, Bodeker KL, Ahmann L, Vollstedt S, Brown H, Kauffman EPS, Schall ME, Hohl RJ, Clamon GH, Greenlee JD, Howard MA, Schultz MK, Smith BJ, Riley DP, Domann FE, Cullen JJ, Buettner GR, Buatti JM, Spitz DR, Allen BG. Correction: O2.– and H2O2-mediated disruption of Fe metabolism causes the differential susceptibility of NSCLC and GBM cancer cells to pharmacological ascorbate. Cancer Cell 2017; 32(2): 268–268

[130]

Aykin-Burns N, Ahmad IM, Zhu Y, Oberley LW, Spitz DR. Increased levels of superoxide and H2O2 mediate the differential susceptibility of cancer cells versus normal cells to glucose deprivation. Biochem J 2009; 418(1): 29–37

[131]

Schoenfeld JD, Sibenaller ZA, Mapuskar KA, Wagner BA, Cramer-Morales KL, Furqan M, Sandhu S, Carlisle TL, Smith MC, Abu Hejleh T, Berg DJ, Zhang J, Keech J, Parekh KR, Bhatia S, Monga V, Bodeker KL, Ahmann L, Vollstedt S, Brown H, Kauffman EPS, Schall ME, Hohl RJ, Clamon GH, Greenlee JD, Howard MA, Shultz MK, Smith BJ, Riley DP, Domann FE, Cullen JJ, Buettner GR, Buatti JM, Spitz DR, Allen BG. O2.– and H2O2-mediated disruption of Fe metabolism causes the differential susceptibility of NSCLC and GBM cancer cells to pharmacological ascorbate. Cancer Cell 2017; 31(4): 487–500.e8

[132]

Hall EJ. Radiobiology for the Radiologist. 2d ed. Hagerstown, MD: Medical Dept., Harper & Row, 1978

[133]

Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, Dalca AV. VoxelMorph: a learning framework for deformable medical image registration. IEEE Trans Med Imaging 2019; 38(8): 1788–1800

[134]

Hall EJ. Radiobiology for the Radiologist. 4th ed. Philadelphia: J.B. Lippincott, 1994

[135]

Hall EJ. Radiobiology for the Radiologist. 3rd ed. Philadelphia: Lippincott, 1988

[136]

Gutiontov SI, Shin EJ, Lok B, Lee NY, Cabanillas R. Intensity-modulated radiotherapy for head and neck surgeons. Head Neck 2016; 38(Suppl 1): E2368–E2373

[137]

Nelms BE, Tome WA, Robinson G, Wheeler J. Variations in the contouring of organs at risk: test case from a patient with oropharyngeal cancer. Int J Radiat Oncol Biol Phys 2012; 82(1): 368–378

[138]

Castelli J, Simon A, Lafond C, Perichon N, Rigaud B, Chajon E, De Bari B, Ozsahin M, Bourhis J, de Crevoisier R. Adaptive radiotherapy for head and neck cancer. Acta Oncol 2018; 57(10): 1284–1292

[139]

Gupta V, Wang Y, Mendez Romero A, Myronenko A, Jordan P, Maurer C, Heijmen B, Hoogeman M. Fast and robust adaptation of organs-at-risk delineations from planning scans to match daily anatomy in pre-treatment scans for online-adaptive radiotherapy of abdominal tumors. Radiother Oncol 2018; 127(2): 332–338

[140]

Pollard JM, Wen Z, Sadagopan R, Wang J, Ibbott GS. The future of image-guided radiotherapy will be MR guided. Br J Radiol 2017; 90(1073): 20160667

[141]

Han X, Hoogeman MS, Levendag PC, Hibbard LS, Teguh DN, Voet P, Cowen AC, Wolf TK. Atlas-based auto-segmentation of head and neck CT images. Med Image Comput Comput Assist Interv 2008; 11(Pt 2): 434–441

[142]

Bondiau PY, Malandain G, Chanalet S, Marcy PY, Habrand JL, Fauchon F, Paquis P, Courdi A, Commowick O, Rutten I, Ayache N. Atlas-based automatic segmentation of MR images: validation study on the brainstem in radiotherapy context. Int J Radiat Oncol Biol Phys 2005; 61(1): 289–298

[143]

Roberts MG, Cootes TF, Adams JE. Vertebral shape: automatic measurement with dynamically sequenced active appearance models. In: Duncan JS, Gerig G. Medical Image Computing and Computer-Assisted Intervention — MICCAI 2005. Lecture Notes in Computer Science, vol 3750. Springer, Berlin, Heidelberg. 2005

[144]

Fritscher KD, Peroni M, Zaffino P, Spadea MF, Schubert R, Sharp G. Automatic segmentation of head and neck CT images for radiotherapy treatment planning using multiple atlases, statistical appearance models, and geodesic active contours. Med Phys 2014; 41(5): 051910

[145]

Setio AA, Ciompi F, Litjens G, Gerke P, Jacobs C, van Riel SJ, Wille MM, Naqibullah M, Sanchez CI, van Ginneken B. Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Trans Med Imaging 2016; 35(5): 1160–1169

[146]

Dou Q, Chen H, Yu L, Zhao L, Qin J, Wang D, Mok VC, Shi L, Heng PA. Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans Med Imaging 2016; 35(5): 1182–1195

[147]

Li X, Chen H, Qi X, Dou Q, Fu CW, Heng PA. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans Med Imaging 2018; 37(12): 2663–2674

[148]

Chen L, Bentley P, Mori K, Misawa K, Fujiwara M, Rueckert D. DRINet for medical image segmentation. IEEE Trans Med Imaging 2018; 37(11): 2453–2462

[149]

Moeskops P, Viergever MA, Mendrik AM, de Vries LS, Benders MJ, Isgum I. Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans Med Imaging 2016; 35(5): 1252–1261

[150]

Cao X, Yang J, Wang L, Xue Z, Wang Q, Shen D. Deep learning based inter-modality image registration supervised by intra-modality similarity. Mach Learn Med Imaging 2018; 11046: 55–63

[151]

Haskins G, Kruecker J, Kruger U, Xu S, Pinto PA, Wood BJ, Yan P. Learning deep similarity metric for 3D MR-TRUS image registration. Int J CARS 2019; 14(3): 417–425

[152]

Zhu X, Ding M, Huang T, Jin X, Zhang X. PCANet-based structural representation for nonrigid multimodal medical image registration. Sensors (Basel) 2018; 18(5): 1477

[153]

Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys 2017; 44(2): 547–557

[154]

Tong N, Gou S, Yang S, Ruan D, Sheng K. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med Phys 2018; 45(10): 4558–4567

[155]

Tong N, Gou S, Yang S, Cao M, Sheng K. Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images. Med Phys 2019; 46(6): 2669–2682

[156]

Hall EJ. Radiobiology for the Radiologist. 5th ed. Philadelphia: Lippincott Williams & Wilkins, 2000

[157]

Hall EJ, Giaccia AJ. Radiobiology for the Radiologist. 6th ed. Philadelphia: Lippincott Williams & Wilkins, 2006

[158]

Wang ZS, Wei LF, Wang L, Gao YZ, Chen WF, Shen DG. Hierarchical vertex regression-based segmentation of head and neck CT images for radiotherapy planning. IEEE Trans Image Process 2018; 27(2): 923–937

[159]

Moore KL, Brame RS, Low DA, Mutic S. Experience-based quality control of clinical intensity-modulated radiotherapy planning. Int J Radiat Oncol Biol Phys 2011; 81(2): 545–551

[160]

Yuan L, Ge Y, Lee WR, Yin FF, Kirkpatrick JP, Wu QJ. Quantitative analysis of the factors which affect the interpatient organ-at-risk dose sparing variation in IMRT plans. Med Phys 2012; 39(11): 6868–6878

[161]

Nelms BE, Robinson G, Markham J, Velasco K, Boyd S, Narayan S, Wheeler J, Sobczak ML. Variation in external beam treatment plan quality: an inter-institutional study of planners and planning systems. Pract Radiat Oncol 2012; 2(4): 296–305

[162]

Shiraishi S, Moore KL. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy. Med Phys 2016; 43(1): 378–387

[163]

McIntosh C, Purdie TG. Voxel-based dose prediction with multi-patient atlas selection for automated radiotherapy treatment planning. Phys Med Biol 2017; 62(2): 415–431

[164]

Ziemer BP, Shiraishi S, Hattangadi-Gluth JA, Sanghvi P, Moore KL. Fully automated, comprehensive knowledge-based planning for stereotactic radiosurgery: preclinical validation through blinded physician review. Pract Radiat Oncol 2017; 7(6): e569–e578

[165]

Wu B, Ricchetti F, Sanguineti G, Kazhdan M, Simari P, Chuang M, Taylor R, Jacques R, McNutt T. Patient geometry-driven information retrieval for IMRT treatment plan quality control. Med Phys 2009; 36(12): 5497–5505

[166]

Zhu X, Ge Y, Li T, Thongphiew D, Yin FF, Wu QJ. A planning quality evaluation tool for prostate adaptive IMRT based on machine learning. Med Phys 2011; 38(2): 719–726

[167]

Tran A, Woods K, Nguyen D, Yu VY, Niu T, Cao M, Lee P, Sheng K. Predicting liver SBRT eligibility and plan quality for VMAT and 4π plans. Radiat Oncol 2017; 12(1): 70

[168]

Nguyen D, Long T, Jia X, Lu W, Gu X, Iqbal Z, Jiang S. A feasibility study for predicting optimal radiation therapy dose distributions of prostate cancer patients from patient anatomy using deep learning. Sci Rep 2019; 9(1): 1076

[169]

Men K, Geng H, Zhong H, Fan Y, Lin A, Xiao Y. A deep learning model for predicting xerostomia due to radiotherapy for head-and-neck squamous cell carcinoma in the RTOG 0522 clinical trial. Int J Radiat Oncol Biol Phys 2019; 105(2): 440–447

[170]

Ma M, Buyyounouski MK, Vasudevan V, Xing L, Yang Y. Dose distribution prediction in isodose feature-preserving voxelization domain using deep convolutional neural network. Med Phys 2019; 46: 2978–2987

[171]

Ma M, Kovalchuk N, Buyyounouski MK, Xing L, Yang Y. Incorporating dosimetric features into the prediction of 3D VMAT dose distributions using deep convolutional neural network. Phys Med Biol 2019; 64(12): 125017

[172]

Liu Z, Fan J, Li M, Yan H, Hu Z, Huang P, Tian Y, Miao J, Dai J. A deep learning method for prediction of three-dimensional dose distribution of helical tomotherapy. Med Phys 2019; 46(5): 1972–1983

[173]

Kajikawa T, Kadoya N, Ito K, Takayama Y, Chiba T, Tomori S, Takeda K, Jingu K. Automated prediction of dosimetric eligibility of patients with prostate cancer undergoing intensity-modulated radiation therapy using a convolutional neural network. Radiol Phys Technol 2018; 11(3): 320–327

[174]

Heijmen B, Voet P, Fransen D, Penninkhof J, Milder M, Akhiat H, Bonomo P, Casati M, Georg D, Goldner G, Henry A, Lilley J, Lohr F, Marrazzo L, Pallotta S, Pellegrini R, Seppenwoolde Y, Simontacchi G, Steil V, Stieler F, Wilson S, Breedveld S. Fully automated, multi-criterial planning for Volumetric Modulated Arc Therapy — an international multi-center validation for prostate cancer. Radiother Oncol 2018; 128(2): 343–348

[175]

van Duren-Koopman MJ, Tol JP, Dahele M, Bucko E, Meijnen P, Slotman BJ, Verbakel WF. Personalized automated treatment planning for breast plus locoregional lymph nodes using Hybrid RapidArc. Pract Radiat Oncol 2018; 8(5): 332–341

[176]

Babier A, Boutilier JJ, McNiven AL, Chan TCY. Knowledge-based automated planning for oropharyngeal cancer. Med Phys 2018; 45(7): 2875–2883

[177]

Zhang Y, Li T, Xiao H, Ji W, Guo M, Zeng Z, Zhang J. A knowledge-based approach to automated planning for hepatocellular carcinoma. J Appl Clin Med Phys 2018; 19(1): 50–59

[178]

Ziemer BP, Sanghvi P, Hattangadi-Gluth J, Moore KL. Heuristic knowledge-based planning for single-isocenter stereotactic radiosurgery to multiple brain metastases. Med Phys 2017; 44(10): 5001–5009

[179]

Ziemer BP, Shiraishi S, Hattangadi-Gluth JA, Sanghvi P, Moore KL. Fully automated, comprehensive knowledge-based planning for stereotactic radiosurgery: preclinical validation through blinded physician review. Pract Radiat Oncol 2017; 7(6): e569–e578

[180]

Buergy D, Sharfo AW, Heijmen BJ, Voet PW, Breedveld S, Wenz F, Lohr F, Stieler F. Fully automated treatment planning of spinal metastases — a comparison to manual planning of Volumetric Modulated Arc Therapy for conventionally fractionated irradiation. Radiat Oncol 2017; 12(1): 33

[181]

Wu H, Jiang F, Yue H, Zhang H, Wang K, Zhang Y. Applying a RapidPlan model trained on a technique and orientation to another: a feasibility and dosimetric evaluation. Radiat Oncol 2016; 11(1): 108

[182]

Krayenbuehl J, Norton I, Studer G, Guckenberger M. Evaluation of an automated knowledge based treatment planning system for head and neck. Radiat Oncol 2015; 10(1): 226

[183]

Fogliata A, Nicolini G, Clivio A, Vanetti E, Laksar S, Tozzi A, Scorsetti M, Cozzi L. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers. Radiat Oncol 2015; 10(1): 220

[184]

Schmidt M, Lo JY, Grzetic S, Lutzky C, Brizel DM, Das SK. Semiautomated head-and-neck IMRT planning using dose warping and scaling to robustly adapt plans in a knowledge database containing potentially suboptimal plans. Med Phys 2015; 42(8): 4428–4434

[185]

Fan J, Wang J, Chen Z, Hu C, Zhang Z, Hu W. Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique. Med Phys 2019; 46(1): 370–381

[186]

Landers A, O’Connor D, Ruan D, Sheng K. Automated 4π radiotherapy treatment planning with evolving knowledge-base. Med Phys 2019; 46(9): 3833–3843

[187]

Landers A, Neph R, Scalzo F, Ruan D, Sheng K. Performance comparison of knowledge-based dose prediction techniques based on limited patient data. Technol Cancer Res Treat 2018; 17: 1533033818811150

[188]

Li HH, Wu Y, Yang D, Mutic S. Software tool for physics chart checks. Pract Radiat Oncol 2014; 4(6): e217–e225

[189]

Yim WW, Yetisgen M, Harris WP, Kwan SW. Natural language processing in oncology: a review. JAMA Oncol 2016; 2(6): 797–804

[190]

Interian Y, Rideout V, Kearney VP, Gennatas E, Morin O, Cheung J, Solberg T, Valdes G. Deep nets vs expert designed features in medical physics: an IMRT QA case study. Med Phys 2018; 45(6): 2672–2680

[191]

Nyflot MJ, Thammasorn P, Wootton LS, Ford EC, Chaovalitwongse WA. Deep learning for patient-specific quality assurance: Identifying errors in radiotherapy delivery by radiomic analysis of gamma images with convolutional neural networks. Med Phys 2019; 46(2): 456–464

[192]

Tomori S, Kadoya N, Takayama Y, Kajikawa T, Shima K, Narazaki K, Jingu K. A deep learning-based prediction model for gamma evaluation in patient-specific quality assurance. Med Phys 2018; 45(9): 4055–4065

[193]

Shiradkar R, Podder TK, Algohary A, Viswanath S, Ellis RJ, Madabhushi A. Radiomics based targeted radiotherapy planning (Rad-TRaP): a computational framework for prostate cancer treatment planning with MRI. Radiat Oncol 2016; 11(1): 148

[194]

Arimura H, Soufi M, Kamezawa H, Ninomiya K, Yamada M. Radiomics with artificial intelligence for precision medicine in radiation therapy. J Radiat Res (Tokyo) 2019; 60(1): 150–157

[195]

Aerts HJ, Velazquez ER, Leijenaar RT, Parmar C, Grossmann P, Carvalho S, Bussink J, Monshouwer R, Haibe-Kains B, Rietveld D, Hoebers F, Rietbergen MM, Leemans CR, Dekker A, Quackenbush J, Gillies RJ, Lambin P. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun 2014; 5(1): 4006

[196]

Bergom C, West CM, Higginson DS, Abazeed ME, Arun B, Bentzen SM, Bernstein JL, Evans JD, Gerber NK, Kerns SL, Keen J, Litton JK, Reiner AS, Riaz N, Rosenstein BS, Sawakuchi GO, Shaitelman SF, Powell SN, Woodward WA. The implications of genetic testing on radiotherapy decisions: a guide for radiation oncologists. Int J Radiat Oncol Biol Phys 2019; 105(4): 698–712

[197]

El Naqa I, Kerns SL, Coates J, Luo Y, Speers C, West CML, Rosenstein BS, Ten Haken RK. Radiogenomics and radiotherapy response modeling. Phys Med Biol 2017; 62(16): R179–R206

[198]

Sollini M, Cozzi L, Chiti A, Kirienko M. Texture analysis and machine learning to characterize suspected thyroid nodules and differentiated thyroid cancer: where do we stand? Eur J Radiol 2018; 99: 1–8

[199]

Rathore S, Akbari H, Doshi J, Shukla G, Rozycki M, Bilello M, Lustig R, Davatzikos C. Radiomic signature of infiltration in peritumoral edema predicts subsequent recurrence in glioblastoma: implications for personalized radiotherapy planning. J Med Imaging (Bellingham) 2018; 5(2): 021219

[200]

Pota M, Scalco E, Sanguineti G, Farneti A, Cattaneo GM, Rizzo G, Esposito M. Early prediction of radiotherapy-induced parotid shrinkage and toxicity based on CT radiomics and fuzzy classification. Artif Intell Med 2017; 81: 41–53

[201]

Leger S, Zwanenburg A, Pilz K, Lohaus F, Linge A, Zophel K, Kotzerke J, Schreiber A, Tinhofer I, Budach V, Sak A, Stuschke M, Balermpas P, Rodel C, Ganswindt U, Belka C, Pigorsch S, Combs SE, Monnich D, Zips D, Krause M, Baumann M, Troost EGC, Lock S, Richter C. A comparative study of machine learning methods for time-to-event survival data for radiomics risk modelling. Sci Rep 2017; 7(1): 13206

[202]

M. D. Anderson Cancer Center Head and Neck Quantitative Imaging Working Group. Investigation of radiomic signatures for local recurrence using primary tumor texture analysis in oropharyngeal head and neck cancer patients. Sci Rep 2018; 8(1): 1524

[203]

Sun W, Jiang M, Dang J, Chang P, Yin FF. Effect of machine learning methods on predicting NSCLC overall survival time based on radiomics analysis. Radiat Oncol 2018; 13(1): 197

[204]

Sun R, Limkin EJ, Vakalopoulou M, Dercle L, Champiat S, Han SR, Verlingue L, Brandao D, Lancia A, Ammari S, Hollebecque A, Scoazec JY, Marabelle A, Massard C, Soria JC, Robert C, Paragios N, Deutsch E, Ferte C. A radiomics approach to assess tumour-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: an imaging biomarker, retrospective multicohort study. Lancet Oncol 2018; 19(9): 1180–1191

[205]

Peeken JC, Bernhofer M, Spraker MB, Pfeiffer D, Devecka M, Thamer A, Shouman MA, Ott A, Nusslin F, Mayr NA, Rost B, Nyflot MJ, Combs SE. CT-based radiomic features predict tumor grading and have prognostic value in patients with soft tissue sarcomas treated with neoadjuvant radiation therapy. Radiother Oncol 2019; 135: 187–196

[206]

Li S, Wang K, Hou Z, Yang J, Ren W, Gao S, Meng F, Wu P, Liu B, Liu J, Yan J. Use of radiomics combined with machine learning method in the recurrence patterns after intensity-modulated radiotherapy for nasopharyngeal carcinoma: a preliminary study. Front Oncol 2018; 8: 648

[207]

Giraud P, Giraud P, Gasnier A, El Ayachy R, Kreps S, Foy JP, Durdux C, Huguet F, Burgun A, Bibault JE. Radiomics and machine learning for radiotherapy in head and neck cancers. Front Oncol 2019; 9: 174

[208]

Elhalawani H, Lin TA, Volpe S, Mohamed ASR, White AL, Zafereo J, Wong AJ, Berends JE, AboHashem S, Williams B, Aymard JM, Kanwar A, Perni S, Rock CD, Cooksey L, Campbell S, Yang P, Nguyen K, Ger RB, Cardenas CE, Fave XJ, Sansone C, Piantadosi G, Marrone S, Liu R, Huang C, Yu K, Li T, Yu Y, Zhang Y, Zhu H, Morris JS, Baladandayuthapani V, Shumway JW, Ghosh A, Pöhlmann A, Phoulady HA, Goyal V, Canahuate G, Marai GE, Vock D, Lai SY, Mackin DS, Court LE, Freymann J, Farahani K, Kalpathy-Cramer J, Fuller CD. Machine learning applications in head and neck radiation oncology: lessons from open-source radiomics challenges. Front Oncol 2018; 8: 294

[209]

de Jong EEC, van Elmpt W, Rizzo S, Colarieti A, Spitaleri G, Leijenaar RTH, Jochems A, Hendriks LEL, Troost EGC, Reymen B, Dingemans AC, Lambin P. Applicability of a prognostic CT-based radiomic signature model trained on stage I–III non-small cell lung cancer in stage IV non-small cell lung cancer. Lung Cancer 2018; 124: 6–11

[210]

Cha YJ, Jang WI, Kim MS, Yoo HJ, Paik EK, Jeong HK, Youn SM. Prediction of response to stereotactic radiosurgery for brain metastases using convolutional neural networks. Anticancer Res 2018; 38(9): 5437–5445

[211]

Buizza G, Toma-Dasu I, Lazzeroni M, Paganelli C, Riboldi M, Chang Y, Smedby O, Wang C. Early tumor response prediction for lung cancer patients using novel longitudinal pattern features from sequential PET/CT image scans. Phys Med 2018; 54: 21–29

[212]

Hosny A, Parmar C, Coroller TP, Grossmann P, Zeleznik R, Kumar A, Bussink J, Gillies RJ, Mak RH, Aerts H. Deep learning for lung cancer prognostication: a retrospective multi-cohort radiomics study. PLoS Med 2018; 15(11): e1002711

[213]

Cui S, Luo Y, Tseng HH, Ten Haken RK, El Naqa I. Combining handcrafted features with latent variables in machine learning for prediction of radiation-induced lung damage. Med Phys 2019; 46(5): 2497–2511

[214]

Cui S, Luo Y, Tseng HH, Ten Haken RK, El Naqa I. Artificial neural network with composite architectures for prediction of local control in radiotherapy. IEEE Trans Radiat Plasma Med Sci 2019; 3(2): 242–249

[215]

Lee J, An JY, Choi MG, Park SH, Kim ST, Lee JH, Sohn TS, Bae JM, Kim S, Lee H, Min BH, Kim JJ, Jeong WK, Choi DI, Kim KM, Kang WK, Kim M, Seo SW. Deep learning-based survival analysis identified associations between molecular subtype and optimal adjuvant treatment of patients with gastric cancer. JCO Clin Cancer Inform 2018; 2(2): 1–14

[216]

Boon IS, Au Yong TPT, Boon CS. Assessing the role of artificial intelligence (AI) in clinical oncology: utility of machine learning in radiotherapy target volume delineation. Medicines (Basel) 2018; 5(4): 131

[217]

Xu Y, Hosny A, Zeleznik R, Parmar C, Coroller T, Franco I, Mak RH, Aerts H. Deep learning predicts lung cancer treatment response from serial medical imaging. Clin Cancer Res 2019; 25(11): 3266–3275

[218]

Ehteshami Bejnordi B, Veta M, Johannes van Diest P, van Ginneken B, Karssemeijer N, Litjens G, van der Laak J; the CAMELYON16 Consortium. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 2017; 318(22): 2199–2210

[219]

Grossmann P, Stringfield O, El-Hachem N, Bui MM, Rios Velazquez E, Parmar C, Leijenaar RT, Haibe-Kains B, Lambin P, Gillies RJ, Aerts HJ. Defining the biological basis of radiomic phenotypes in lung cancer. eLife 2017; 6: e23421

[220]

Zanfardino M, Pane K, Mirabelli P, Salvatore M, Franzese M. TCGA-TCIA impact on radiogenomics cancer research: a systematic review. Int J Mol Sci 2019; 20(23): 6033

[221]

Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging 2013; 26(6): 1045–1057

[222]

Tomczak K, Czerwinska P, Wiznerowicz M. The Cancer Genome Atlas (TCGA): an immeasurable source of knowledge. Contemp Oncol (Pozn) 2015; 19(1A): A68–A77

[223]

Pavlopoulou A, Bagos PG, Koutsandrea V, Georgakilas AG. Molecular determinants of radiosensitivity in normal and tumor tissue: a bioinformatic approach. Cancer Lett 2017; 403: 37–47
