Aim: Carotid artery disease, or carotid stenosis, is a critical medical condition carrying the risk of stroke and fatality. The underlying pathology involves the accumulation of atheromatous plaques, leading to luminal constriction and disrupted blood flow. Endovascular interventions, such as carotid artery stenting (CAS), aim to restore vessel patency. Robotic assistance in CAS surgeries is rapidly evolving globally, offering precision and fatigue-free capabilities. Despite growing interest, a dedicated systematic review on robotic applications in CAS is notably absent.
Methods: Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a comprehensive search strategy involving five databases was implemented. Data extraction encompassed study origin, patient demographics, procedural details, procedure times, complications, and fluoroscopy and radiation parameters.
Results: In total, 199 articles were identified from the five databases, and seven studies meeting the inclusion criteria were analyzed. The predominant robotic system, CorPath GRX, offered advantages such as remote operation, precision, and compatibility with various catheter sizes. The Magellan Robotic System, employed in one study, likewise provided remote-controlled operation. Across these studies, 44 patients underwent carotid artery stenosis procedures with robotic assistance. Reported complications included access-site conversion, Angio-Seal device failure, and residual stenosis. Mean operation time ranged from 34 to 85 min.
Conclusions: The integration of robotic assistance in carotid interventions, as demonstrated by the CorPath GRX and Magellan Robotic System, holds significant promise in improving the precision and safety of CAS procedures. However, the limited number of studies, the high risk of bias, and the need for further research and standardization highlight the evolving nature of this technology.
The rapid evolution of modern technology has made artificial intelligence (AI) an important emerging tool in healthcare. AI, a broad field of computer science, can be used to develop systems or machines able to tackle tasks that traditionally necessitate human intelligence. AI can perform multifaceted tasks that involve synthesizing large amounts of data and generating solutions, algorithms, and decision support tools. Various AI approaches, including machine learning (ML) and natural language processing (NLP), are increasingly being used to analyze vast healthcare datasets. In addition, visual AI has the potential to revolutionize surgery and the intraoperative experience for surgeons through augmented reality that enhances surgical navigation in real time. Specific applications of AI to hepatobiliary tumors, such as hepatocellular carcinoma and biliary tract cancer, can improve diagnosis, prognostic risk stratification, and treatment allocation through ML-based models. The integration of radiomics data with AI models can also improve clinical decision making. We herein review how AI may be of particular interest in the care of patients with complex cancers, such as hepatobiliary tumors, as these patients often require a multimodal treatment approach.
Sentinel lymph node (SLN) biopsy has revolutionized the staging and prognosis of breast cancer and melanoma. Because of the complicated lymphatic network around the esophagus, the utility of SLN biopsy for esophageal cancer is less clear. The accuracy of SLN mapping in esophageal cancer depends on tumor site, disease stage, use of neoadjuvant therapy, and patient characteristics. SLN biopsy may improve staging and result in less morbidity in patients with early esophageal cancer, compared with radical lymphadenectomy and esophagectomy. A recent study that investigated hybrid tracers in sentinel node navigation surgery (SNNS) demonstrated promising results for the detection of peritumoral SLNs. However, evidence that firmly establishes the concept of the SLN for esophageal cancer is still lacking. Big data analytics and artificial intelligence have been associated with improvements in the detection and prognosis of esophageal cancer. This review considers the roles of the evolving technologies of SLN biopsy and artificial intelligence, which together have the potential to further improve prognoses and outcomes for patients with esophageal cancer. Additional investigation is necessary to establish standardized protocols and to determine the long-term effectiveness of these approaches in settings involving neoadjuvant therapy and advanced-stage disease.
Aim: Surgeons’ intraoperative decisions significantly impact patient outcomes. In the reconciliation cycle, intraoperative decisions are guided by probabilistic reasoning, which is informed by evolving intraoperative features. This paper compares the utility of a traditional logistic regression (LR) model for predicting critical view of safety (CVS) achievement with Bayesian network (BN) maps built from intraoperative features. It hypothesizes that BN mapping integrates better with surgeon heuristics.
Methods: Using prospectively gathered intraoperative data, BN maps were developed and tested to determine their ability to predict critical view of safety achievement. Performance was compared to traditional logistic regression models to consider their utility in practice.
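The probabilistic updating at the heart of a BN can be illustrated with a minimal sketch. This is not the study's model: the likelihoods below are invented, and only the 92% baseline CVS achievement rate is taken from the reported results.

```python
# Minimal illustration (not the study's BN) of Bayesian updating:
# the probability of CVS achievement is revised as an intraoperative
# feature is observed. Likelihood values are hypothetical.

def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """P(CVS achieved | evidence) via Bayes' rule."""
    num = p_evidence_given_true * prior
    den = num + p_evidence_given_false * (1 - prior)
    return num / den

p = 0.92                         # baseline CVS achievement rate (from Results)
p = bayes_update(p, 0.30, 0.70)  # a hypothetical adverse finding lowers it
```

A full BN chains many such conditional updates over a graph of features, which is what lets it mirror how a surgeon revises expectations as the operative picture evolves.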
Results: In total, 4,663 patients were identified. Of these, 2,837 (61%) presented acutely and 3,122 (67%) were female. CVS was achieved in 4,131 (92%) patients. Four BNs were developed. Optimal performance was seen in model 2, with an AUC of 0.879 (0.798-0.960) (P < 0.001); a cut-off of 0.6 gave an optimized sensitivity of 99% and a specificity of 45% for CVS achievement. In comparison, ROC curve analysis of the combined acute LR model gave an AUC of 0.829 (0.787-0.872) (P < 0.001), and a cut-off of 75% probability resulted in a sensitivity of 95% and a specificity of 38% for CVS achievement.
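How the reported sensitivity and specificity follow from a probability cut-off can be sketched as follows. This is illustrative only, not the study's code, and the toy probabilities and outcomes are hypothetical.

```python
# Illustrative sketch: deriving sensitivity and specificity from
# predicted probabilities at a chosen cut-off, as with the BN (0.6)
# and LR (0.75) thresholds above. Data below are hypothetical.

def sens_spec(probs, labels, cutoff):
    """Sensitivity and specificity after thresholding probabilities.
    labels: 1 = CVS achieved, 0 = not achieved."""
    tp = sum(1 for p, y in zip(probs, labels) if p >= cutoff and y == 1)
    fn = sum(1 for p, y in zip(probs, labels) if p < cutoff and y == 1)
    tn = sum(1 for p, y in zip(probs, labels) if p < cutoff and y == 0)
    fp = sum(1 for p, y in zip(probs, labels) if p >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predicted probabilities and observed outcomes
probs = [0.95, 0.80, 0.70, 0.55, 0.40, 0.30]
labels = [1, 1, 1, 0, 1, 0]
sens, spec = sens_spec(probs, labels, cutoff=0.6)
```

Sweeping the cut-off over all values traces out the ROC curve whose area (AUC) the study reports.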
Conclusion: The present study illustrates how BN modeling can map to surgeon decision making to facilitate reasoning in complex environments. Further work is needed to facilitate data capture and implementation. Despite these limitations, BN models represent a promising avenue for intuitive decision support tools.
Aim: Video review programs in hospitals play a crucial role in optimizing operating room workflows. In scenarios where split seconds can change the outcome of a surgery, the potential of such programs to improve safety and efficiency is profound. However, leveraging this potential requires systematic and automated analysis of human actions. Existing approaches predominantly rely on manual review, which is labor-intensive, inconsistent, and difficult to scale. Here, we present an AI-based approach to systematically analyze the behavior and actions of individuals from operating room (OR) videos.
Methods: We designed a novel framework for human mesh recovery from long-duration surgical videos by integrating existing human detection, tracking, and mesh recovery models. We then trained an action recognition model to predict surgical actions from the predicted temporal mesh sequences. To train and evaluate our approach, we annotated an in-house dataset of 864 five-second clips from simulated surgical videos with their corresponding actions.
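The framework's data flow, linking per-frame detections into tracks whose mesh parameters form the temporal sequences fed to the action model, can be sketched as below. Every name here is invented for illustration; the paper's actual detection, tracking, and mesh recovery models are not specified in this abstract.

```python
# Hypothetical sketch of the pipeline's data flow: per-frame human
# detections are linked into per-person tracks, and each track's
# mesh parameters form a temporal sequence for action recognition.
# All names are invented; real mesh/tracking models are not shown.

from dataclasses import dataclass, field

@dataclass
class Track:
    person_id: int
    meshes: list = field(default_factory=list)  # per-frame mesh parameters

def build_sequences(frames, clip_len=5):
    """Group per-frame (person_id, mesh) detections into fixed-length
    temporal mesh sequences, one per tracked person."""
    tracks = {}
    for frame in frames:
        for person_id, mesh in frame:
            tracks.setdefault(person_id, Track(person_id)).meshes.append(mesh)
    return [t.meshes[:clip_len] for t in tracks.values()
            if len(t.meshes) >= clip_len]

# Two people tracked over five frames; each "mesh" is a stand-in scalar.
frames = [[(0, f), (1, f + 10)] for f in range(5)]
seqs = build_sequences(frames)
```

In the real system, each mesh would be a vector of 3D joint coordinates rather than a scalar, and the five-second annotated clips would play the role of `clip_len`.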
Results: Our best model achieves an F1 score and the area under the precision-recall curve (AUPRC) of 0.81 and 0.85, respectively, demonstrating that human mesh sequences can be successfully used to recover surgical actions from operating room videos. Model ablation studies suggest that action recognition performance is enhanced by composing human mesh representations with lower arm, pelvic, and cranial joints.
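The two metrics reported above can be made concrete with a short sketch. This is not the authors' evaluation code; all counts, scores, and labels below are made up, and AUPRC is computed here as average precision, one common estimator of the area under the precision-recall curve.

```python
# Hedged sketch (not the paper's code): computing F1 and AUPRC
# (as average precision) from hypothetical predictions.

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def average_precision(scores, labels):
    """AUPRC as average precision: mean precision at each positive,
    scanning predictions from highest score down."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp = fp = 0
    precisions = []
    for _, y in ranked:
        if y == 1:
            tp += 1
            precisions.append(tp / (tp + fp))
        else:
            fp += 1
    return sum(precisions) / len(precisions)

f1 = f1_score(tp=8, fp=2, fn=2)  # precision = recall = 0.8
auprc = average_precision([0.9, 0.8, 0.6, 0.4], [1, 1, 0, 1])
```

Reporting both metrics is useful because F1 reflects a single operating point while AUPRC summarizes performance across all thresholds.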
Conclusion: Our work presents promising opportunities for OR video review programs to study human behavior in a systematic, scalable manner.