Infrared suppression (IRS) devices for naval ships play a crucial role in reducing the infrared radiation signature of high-temperature exhaust, thereby enhancing the survivability of ships against infrared-guided weapons. This paper provides a comprehensive review of recent advancements in the design and optimization of IRS devices. The central design problem is the effective suppression of infrared radiation from ship exhaust gases, whose hot plumes are primary targets for infrared-guided missiles. To this end, the paper analyzes the infrared characteristics of exhaust systems from the perspectives of fluid dynamics, radiation sources, and radiation transmission, with a detailed explanation of the associated physical mechanisms and computational methods. The working principles and structural features of commonly used IRS devices, such as eductor/diffuser (E/D) devices and DRES-Ball devices, are introduced, with a focus on the design and optimization of key components, including nozzles, mixing diffusers, and optical blocking obstacles. Advanced suppression technologies, such as water injection and aerosol particle dispersion, are also discussed as auxiliary methods for enhancing infrared stealth capabilities. The review highlights that advanced cooling mechanisms and optical property modifications can significantly reduce the infrared radiation of exhaust plumes. Furthermore, the paper identifies several challenges and future research directions, including the performance impacts of multi-device coordinated operation, the development of intelligent adaptive control systems, and the pursuit of lightweight and modular designs to meet the high mobility requirements of modern naval ships. This review aims to provide theoretical support and technical guidance for the practical design of IRS devices, offering valuable insights for the development of next-generation infrared stealth technologies for naval vessels.
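To make the physical mechanism concrete, the sketch below estimates how strongly plume cooling reduces the mid-wave infrared (3-5 μm) signature by integrating Planck spectral radiance over the band. The temperatures and emissivity are illustrative assumptions, not values from the review.

```python
# Minimal sketch: mid-wave IR (3-5 um) band radiance of a gray-body exhaust
# plume before and after cooling. Temperatures and emissivity are
# illustrative assumptions, not values from the review.
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody, W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (np.exp(b) - 1.0)

def band_radiance(temp_k, lo_um=3.0, hi_um=5.0, emissivity=0.9, n=2000):
    """Gray-body radiance integrated over a wavelength band, W / (m^2 * sr)."""
    wl = np.linspace(lo_um, hi_um, n) * 1e-6
    rad = planck_radiance(wl, temp_k)
    return emissivity * np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(wl))

hot, cooled = 750.0, 450.0  # exhaust temperatures in K (hypothetical)
print(f"hot plume : {band_radiance(hot):10.1f} W/(m^2 sr)")
print(f"cooled    : {band_radiance(cooled):10.1f} W/(m^2 sr)")
print(f"reduction : {1 - band_radiance(cooled) / band_radiance(hot):.1%}")
```

The strongly nonlinear temperature dependence of band radiance is why even moderate exhaust cooling by E/D mixing yields large signature reductions.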
Tracking control of multibody systems is a challenging task requiring detailed modeling and control expertise. Especially in the case of closed-loop mechanisms, inverse kinematics as part of the controller may become a showstopper due to the extensive calculations required for solving nonlinear equations and inverting complicated functions. The procedure introduced in this paper replaces such advanced human expertise with artificial intelligence through the use of surrogates, which may be trained from data obtained by classical simulation. The necessary steps are demonstrated on a parallel mechanism called the λ-robot. Based on its mechanical model, the workspace is investigated, which is required to set proper initial conditions for generating data covering the robot's operating space. Based on these data, artificial neural networks are trained as surrogates for inverse kinematics and inverse dynamics. They provide feedforward control information such that the remaining error behavior is governed by a linear ordinary differential equation, which allows a linear quadratic regulator (LQR) from linear control theory to be applied. An additional feedback loop on the tracking error accounts for model uncertainties. Simulation results validate the applicability of the proposed concept.
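As a minimal sketch of the LQR step: once the surrogate feedforward cancels the nonlinear dynamics, the tracking error is assumed to follow a linear ODE, here taken as a double integrator for illustration. The matrices A, B, Q, and R below are assumptions, not the paper's values.

```python
# Minimal sketch of the LQR design on the linear error dynamics de = A e + B u
# that remain after the surrogate feedforward. A, B, Q, R are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # error state: (position error, velocity error)
B = np.array([[0.0],
              [1.0]])             # corrective input enters as acceleration
Q = np.diag([10.0, 1.0])          # weight position error more than velocity
R = np.array([[0.1]])             # penalty on corrective effort

P = solve_continuous_are(A, B, Q, R)   # solve the continuous Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal gain: u = -K e

# the closed-loop error dynamics should be asymptotically stable
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```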
AI for PDEs has garnered significant attention, particularly physics-informed neural networks (PINNs). However, PINNs are typically limited to solving specific problems, and any change in problem conditions necessitates retraining. We therefore explore the generalization capability of transfer learning in the strong and energy forms of PINNs across different boundary conditions, materials, and geometries. The transfer learning methods employed include full fine-tuning, lightweight fine-tuning, and low-rank adaptation (LoRA). Numerical experiments cover the Taylor-Green vortex in fluid mechanics, as well as functionally graded elastic materials and a square plate with a circular hole in solid mechanics. The results demonstrate that full fine-tuning and LoRA significantly improve convergence speed while providing a slight enhancement in accuracy. The overall performance of lightweight fine-tuning, however, is suboptimal: its accuracy and convergence speed are inferior to those of full fine-tuning and LoRA.
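A minimal PyTorch sketch of the LoRA idea applied to a PINN follows: the pretrained weights are frozen and a trainable low-rank update is added to each linear layer. The network width, rank, and scaling are illustrative choices, not values from the paper.

```python
# Minimal sketch of low-rank adaptation (LoRA) for PINN fine-tuning: the
# pretrained weight W is frozen and a low-rank update B @ A is learned, so
# y = base(x) + (alpha/r) * x A^T B^T. Rank and alpha are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # start at 0
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# wrap the linear layers of a small "pretrained" PINN for fine-tuning
pinn = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                     nn.Linear(64, 64), nn.Tanh(),
                     nn.Linear(64, 1))
for i, layer in enumerate(pinn):
    if isinstance(layer, nn.Linear):
        pinn[i] = LoRALinear(layer)

trainable = sum(p.numel() for p in pinn.parameters() if p.requires_grad)
print("trainable LoRA parameters:", trainable)
```

Only the A and B factors are updated during transfer, which is what makes LoRA cheap relative to full fine-tuning of all weights.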
The advancement of artificial intelligence (AI) in material design and engineering has led to significant improvements in predictive modeling of material properties. However, the lack of interpretability in machine learning (ML)-based material informatics presents a major barrier to its practical adoption. This study proposes a novel quantitative computational framework that integrates ML models with explainable artificial intelligence (XAI) techniques to enhance both predictive accuracy and interpretability in material property prediction. The framework systematically incorporates a structured pipeline, including data processing, feature selection, model training, performance evaluation, explainability analysis, and real-world deployment. It is validated through a representative case study on the prediction of high-performance concrete (HPC) compressive strength, utilizing a comparative analysis of ML models such as Random Forest, XGBoost, Support Vector Regression (SVR), and Deep Neural Networks (DNNs). The results demonstrate that XGBoost achieves the highest predictive performance (R² = 0.918), while SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) provide detailed insights into feature importance and material interactions. Additionally, the deployment of the trained model as a cloud-based Flask-Gunicorn API enables real-time inference, ensuring its scalability and accessibility for industrial and research applications. The proposed framework addresses key limitations of existing ML approaches by integrating advanced explainability techniques, systematically handling nonlinear feature interactions, and providing a scalable deployment strategy. This study contributes to the development of interpretable and deployable AI-driven material informatics, bridging the gap between data-driven predictions and fundamental material science principles.
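The sketch below illustrates the train-then-explain step of such a pipeline on synthetic stand-in data; the actual study uses an HPC compressive-strength dataset not reproduced here, and the feature names below are assumptions mirroring typical concrete-mix variables.

```python
# Minimal sketch of the XGBoost + SHAP step on synthetic stand-in data.
# Feature names are hypothetical concrete-mix variables.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.uniform(0, 1, (500, 4)),
                 columns=["cement", "water", "age", "superplasticizer"])
# synthetic nonlinear target standing in for compressive strength
y = 40 * X["cement"] - 25 * X["water"] + 10 * np.log1p(X["age"]) \
    + rng.normal(0, 1, 500)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

explainer = shap.TreeExplainer(model)          # exact SHAP values for trees
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)  # global feature importance
for name, imp in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:18s} mean(|SHAP|) = {imp:.3f}")
```

Ranking features by mean absolute SHAP value is the usual way such frameworks turn a black-box gradient-boosted model into a quantitative importance statement.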
The traditional boundary element method (BEM) faces significant challenges in addressing dynamic problems in thin-walled structures. These challenges arise primarily from the complexities of handling time-dependent terms and nearly singular integrals in thin-shaped structures. In this study, we reformulate the time-derivative terms as domain integrals and approximate the unknown functions using radial basis functions (RBFs). This reformulation simplifies the treatment of transient terms and enhances computational efficiency by reducing the complexity of time-dependent formulations. The resulting domain integrals are efficiently evaluated using the scaled coordinate transformation BEM (SCT-BEM), which converts domain integrals into equivalent boundary integrals, thereby improving numerical accuracy and stability. Furthermore, to tackle the challenges inherent in thin-body structures, a nonlinear coordinate transformation is introduced to effectively remove the near-singular behavior of the integrals. The proposed method offers several advantages, including greater flexibility in managing transient terms, lower computational costs, and improved stability for thin-body problems.
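The paper's specific transformation is not reproduced here; the sinh substitution below is a standard stand-in that shows how a nonlinear change of variables removes near-singular behavior when the source point lies a small distance b from the integration element.

```python
# Sketch: removing a near-singularity with a sinh-type coordinate
# transformation. Integrand 1/((x-a)^2 + b^2) on [-1, 1] peaks sharply when
# b << 1 (source point very close to the boundary, as in thin bodies).
import numpy as np

a, b = 0.3, 1e-3                      # projection point and small distance
exact = (np.arctan((1 - a) / b) - np.arctan((-1 - a) / b)) / b

x, w = np.polynomial.legendre.leggauss(16)   # 16-point Gauss-Legendre rule

# naive quadrature: the sharp peak at x = a is badly under-resolved
naive = np.sum(w / ((x - a) ** 2 + b ** 2))

# sinh transformation x = a + b*sinh(u) flattens the peak: the transformed
# integrand becomes 1/(b*cosh(u)), which is smooth in u
u_lo, u_hi = np.arcsinh((-1 - a) / b), np.arcsinh((1 - a) / b)
u = 0.5 * (u_hi - u_lo) * x + 0.5 * (u_hi + u_lo)
transformed = 0.5 * (u_hi - u_lo) * np.sum(w / (b * np.cosh(u)))

print(f"exact       : {exact:.6e}")
print(f"naive       : {naive:.6e}  (rel. err {abs(naive - exact) / exact:.1e})")
print(f"transformed : {transformed:.6e}  (rel. err {abs(transformed - exact) / exact:.1e})")
```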
The performance and lifespan of Li-ion batteries used in electric vehicles are influenced by operating and environmental conditions. An understanding of the mechanisms leading to performance degradation and capacity fading can aid in the design of better battery systems. In the present study, numerical models are developed to estimate the capacity fading, battery performance, and residual life, and the key associated parameters are identified as state of charge, charging protocol, and temperature. Subsequently, a deep machine learning (DML) model consisting of one input layer, four hidden layers, and one output layer is developed to estimate the residual life of a battery system. The five input parameters are voltage, current, temperature, number of cycles, and time, with residual life as the output. The proposed DML model consists of five dense layers and three dropout layers with 2889 trainable parameters in total, with higher neuron counts in the initial layers to process diverse inputs and fewer neurons in the later layers to ensure compact feature representation and faster, more accurate predictions. Results from the numerical and DML models are compared with reported experimental results, and good agreement is observed. The developed model is then tested on lithium nickel manganese cobalt oxide and nickel cobalt aluminum oxide batteries, for which parametric studies are performed to investigate the influence of operating temperature, charge/discharge rate, and pulse charging on battery life. The technologies proposed in this study can thus contribute to the development of intelligent battery management systems, enabling enhanced performance and prolonged life of battery systems.
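A minimal Keras sketch of the described architecture follows: five dense layers with three dropout layers, wider early layers narrowing toward the output. The exact layer widths are not stated in the abstract, so the widths and dropout rates below are assumptions and will not reproduce the reported 2889 trainable parameters exactly.

```python
# Sketch of the described residual-life network: five dense layers, three
# dropout layers, decreasing width. Layer sizes are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(5,)),          # voltage, current, temperature, cycles, time
    layers.Dense(40, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(8, activation="relu"),
    layers.Dense(1),                  # residual life
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()

# train on synthetic stand-in data (the real cycling data is not included)
X = np.random.rand(256, 5).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```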
This study investigates the hydraulic performance of an ogee spillway under varying flow rates, gate opening heights, and spillway widths. Numerical simulations in Flow-3D, using the k-ε turbulence model and large eddy simulation (LES), were employed alongside surrogate models built with MATLAB codes and LP-TAU sampling to predict flow behavior. The analysis focused on pressure distribution, water velocity, and shear stress variations across seven sensor locations along the spillway. Results indicate that pressure generally decreases with increasing flow rate but rises with greater gate opening height or spillway width. A reduction in gate opening height lowers the pressure in the initial region but increases it downstream. Two negative-pressure zones were identified, one at the ogee curve and another on the downstream sloping section, highlighting potential cavitation risks. Comparisons with experimental data confirmed a strong correlation, with minor discrepancies at specific sensors under varying conditions. The study demonstrates that numerical modeling, particularly with the k-ε turbulence model in Flow-3D, effectively assesses the hydraulic performance of controlled ogee-type spillways.
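LP-TAU (LPτ) sequences belong to the Sobol low-discrepancy family, so the sketch below uses scipy's Sobol sampler as a stand-in for the study's MATLAB/LP-TAU workflow: sample the design space, evaluate the expensive solver (here a hypothetical stand-in for a Flow-3D run), and fit a surrogate. The parameter bounds and response function are assumptions.

```python
# Sketch of the surrogate workflow: Sobol (LP-TAU-family) sampling of the
# design space plus a radial-basis surrogate. Bounds and the stand-in
# response are hypothetical; real responses would come from Flow-3D runs.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

# design space: flow rate (m^3/s), gate opening (m), spillway width (m)
lo, hi = [0.5, 0.05, 1.0], [5.0, 0.5, 4.0]
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
X = qmc.scale(sampler.random_base2(m=6), lo, hi)   # 64 sample points

def cfd_stand_in(x):
    """Hypothetical pressure response standing in for a CFD simulation."""
    q, gate, width = x[:, 0], x[:, 1], x[:, 2]
    return -0.8 * q + 3.0 * gate + 0.5 * width + 0.1 * q * width

y = cfd_stand_in(X)
surrogate = RBFInterpolator(X, y)                  # radial-basis surrogate

x_new = np.array([[2.0, 0.25, 2.5]])
print("surrogate prediction:", surrogate(x_new))
print("stand-in truth      :", cfd_stand_in(x_new))
```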
This paper proposes a hybrid algorithm based on physics-informed kernel function neural networks (PIKFNNs) and the direct probability integral method (DPIM) for calculating the probability density function of the stochastic responses of structures in the deep marine environment. The underwater acoustic field is predicted using PIKFNNs, which integrate prior physical information. Subsequently, a novel uncertainty quantification method, the DPIM, is introduced to establish a stochastic response analysis model of underwater acoustic propagation. The effects of random loads, variable sound speed, fluctuating ocean density, and random material properties of the shell on the underwater stochastic sound pressure are analyzed numerically, providing probabilistic insight for assessing the mechanical behavior of structures in the deep marine environment.
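The sketch below works in the same spirit as a physics-informed kernel network, without reproducing the paper's exact formulation: the 2D sound pressure is represented as a weighted sum of Helmholtz fundamental solutions, so the physics lives in the kernel and only the weights are fitted (a method-of-fundamental-solutions style stand-in). The wavenumber, geometry, and boundary data are assumptions.

```python
# Sketch: physics-informed kernels for acoustics. The pressure field is a
# weighted sum of 2D Helmholtz fundamental solutions (i/4) * H0^(1)(k r);
# weights are fit to boundary data by least squares. Illustrative only.
import numpy as np
from scipy.special import hankel1

k = 5.0                                    # wavenumber (illustrative)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
boundary = np.c_[np.cos(theta), np.sin(theta)]        # unit-circle boundary
sources = 1.5 * np.c_[np.cos(theta), np.sin(theta)]   # kernels centered outside

def kernel(pts, srcs):
    r = np.linalg.norm(pts[:, None, :] - srcs[None, :, :], axis=2)
    return 0.25j * hankel1(0, k * r)

# boundary data from a known Helmholtz solution (plane wave), for checking
p_bc = np.exp(1j * k * boundary[:, 0])

weights, *_ = np.linalg.lstsq(kernel(boundary, sources), p_bc, rcond=None)

test = np.array([[0.3, 0.2], [-0.5, 0.1]])            # interior points
p_pred = kernel(test, sources) @ weights
p_true = np.exp(1j * k * test[:, 0])
print("max abs error at test points:", np.max(np.abs(p_pred - p_true)))
```

Because every kernel satisfies the Helmholtz equation exactly, the fitted field does too; the DPIM step would then propagate the random inputs through many such deterministic evaluations.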
Health monitoring and damage detection for important and special infrastructures, especially marine structures, are among the major challenges in structural engineering because such structures are subjected to corrosion and hydrodynamic loads. Simulating marine structures under corrosion and hydrodynamic loads is complex; thus, this study combines point cloud data sets, a validated finite element model, parametric studies, and machine learning methods to estimate the damaged surface and load-carrying capacity of retaining reinforced concrete walls (RRCWs) as functions of their design parameters. After validation of the finite element method (FEM), 144 specimens were simulated with the FEM under displacement-controlled loading. Compressive strength, wall thickness, reinforcement bar strength, and reinforcement ratio were considered as the design parameters. The results show that the thickness of the RRCWs has the greatest effect on the damaged surface and load-carrying capacity. Furthermore, the results demonstrate that Gene Expression Programming (GEP) outperforms all other models and can predict the damaged surface and load-carrying capacity with 99% and 97% accuracy, respectively. Moreover, decreasing the thickness of the RRCWs reduces the damaged surface to 2.5%, and increasing the thickness increases the load-carrying capacity by 51%-59%.
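GEP tooling varies; the sketch below uses gplearn's genetic-programming symbolic regressor, a close relative of GEP, to show how a closed-form predictor can be evolved from an FEM-generated design table. The design table here is synthetic; the study fits 144 FEM-simulated specimens.

```python
# Sketch of the symbolic-regression step with gplearn standing in for GEP.
# The 144-row design table is synthetic; feature ranges are assumptions.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(1)
n = 144
# design parameters: compressive strength, wall thickness, bar strength, bar ratio
X = np.c_[rng.uniform(25, 50, n),     # MPa
          rng.uniform(0.2, 0.6, n),   # m
          rng.uniform(400, 600, n),   # MPa
          rng.uniform(0.005, 0.02, n)]
# stand-in load-carrying capacity with thickness dominant, as in the study
y = 120 * X[:, 1] + 0.8 * X[:, 0] + 0.05 * X[:, 2] + 900 * X[:, 3]

est = SymbolicRegressor(population_size=500, generations=10,
                        function_set=("add", "sub", "mul", "div"),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print("evolved expression:", est._program)
print("R^2 on training data:", est.score(X, y))
```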
In this article, physics-informed neural networks (PINNs) are employed for the numerical simulation of heat transfer involving a moving source under mixed boundary conditions. To reduce computational effort and increase accuracy, a new training method is proposed that uses continuous time-stepping through transfer learning. A single network is initialized and used as a sliding window across the time domain: each time interval is trained on this single network, with the solution obtained at the nth interval serving as the initial condition for the (n+1)th interval. This framework thus enables the computation of large temporal intervals without increasing the complexity of the network itself. The proposed framework is used to estimate the temperature distribution in a homogeneous medium with a moving heat source. Results from the proposed framework are compared with the traditional finite element method, and good agreement is observed.
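A minimal PyTorch sketch of the sliding-window scheme follows. For brevity the PDE is replaced by the scalar ODE du/dt = -u with u(0) = 1; the paper applies the same mechanism to heat conduction with a moving source. Network size, window length, and training budget are illustrative.

```python
# Sketch of sliding-window PINN training: one network is reused across
# consecutive time windows (transfer learning), and its prediction at the
# end of window n becomes the initial condition for window n+1.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

dt = 0.5
u0 = torch.tensor([[1.0]])            # initial condition u(0) = 1
for window in range(4):               # march over t in [0, 2] in four windows
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):             # weights carry over between windows
        opt.zero_grad()
        tau = (torch.rand(64, 1) * dt).requires_grad_(True)  # local window time
        u = net(tau)
        du = torch.autograd.grad(u.sum(), tau, create_graph=True)[0]
        residual = ((du + u) ** 2).mean()                    # ODE: du/dt = -u
        ic = ((net(torch.zeros(1, 1)) - u0) ** 2).mean()     # window IC
        (residual + ic).backward()
        opt.step()
    u0 = net(torch.full((1, 1), dt)).detach()  # end value -> next window's IC
    exact = float(torch.exp(torch.tensor(-dt * (window + 1))))
    print(f"t = {dt * (window + 1):.1f}: PINN {u0.item():.4f}, exact {exact:.4f}")
```

Because the network is only ever asked to represent one short window, its size stays fixed no matter how long the total simulated interval becomes.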
Ultra-precision machining (UPM) has been extensively employed for the production of high-end precision components; the process is highly precise, but the associated production cost is also high. Optimizing the machining parameters in UPM can significantly improve machining efficiency and surface roughness. This study proposes an approach that couples the multibody system transfer matrix method (MSTMM) with particle swarm optimization (PSO) to optimize the machining parameters, aiming to simultaneously improve the machining efficiency and surface roughness of UPM-machined components. First, a dynamic model of an ultra-precision fly-cutting (UPFC) machine tool was developed using the MSTMM and validated by machining tests. The PSO algorithm was then employed to optimize the machining parameters. With the optimized parameters, a 40% reduction in machining time and an 18.6% improvement in the surface roughness peak-to-valley (PV) value were achieved. The proposed method and the optimized parameters were verified through simulations with the MSTMM model, resulting in a minimal error of only 0.9%.
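A plain-numpy sketch of the PSO step is given below: particles search over two machining parameters, minimizing a weighted sum of machining time and a roughness proxy. The cost function is a hypothetical stand-in; in the study each candidate is evaluated through the MSTMM dynamic model.

```python
# Sketch of PSO over machining parameters (spindle speed, feed rate).
# The cost function is hypothetical; the study evaluates it via MSTMM.
import numpy as np

rng = np.random.default_rng(0)
lo = np.array([1000.0, 5.0])    # spindle speed (rpm), feed rate (mm/min)
hi = np.array([6000.0, 60.0])

def cost(x):
    speed, feed = x[:, 0], x[:, 1]
    time = 1e5 / (speed * feed)                       # faster cutting, less time
    pv = 1e-4 * feed**2 + 5e-9 * (speed - 3500)**2    # roughness (PV) proxy
    return 0.5 * time + 0.5 * pv

n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
x = rng.uniform(lo, hi, (n, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), cost(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = cost(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("optimal speed/feed:", gbest, " cost:", pbest_f.min())
```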
Non-Gaussian random vibrations have gained increasing attention in the dynamics research community because non-Gaussian dynamic environments are frequently encountered in engineering practice. This work proposes a novel non-Gaussian random vibration test method based on the simultaneous control of multiple correlation coefficients, skewnesses, and kurtoses. A multi-channel time-domain coupling model is first constructed, composed mainly of design parameters and independent signal sources; the design parameters are related to the specified correlation coefficients and root-mean-square values. The synthesized non-Gaussian random signals are unitized to provide independent signal sources for coupling. The first four statistical moments of the synthesized non-Gaussian random signals are derived theoretically, establishing the relationships among the generated signals, the independent signal sources, and the correlation coefficients. Subsequently, a multi-channel closed-loop equalization procedure for non-Gaussian random vibration control is presented to produce a multi-channel correlated non-Gaussian random vibration environment. Finally, a simulation example and an experimental verification are provided. Results from both indicate that the multi-channel response spectral densities, correlation coefficients, skewnesses, and kurtoses can be stably and effectively controlled within the corresponding tolerances by the proposed method.
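One common route to such signals, not necessarily the paper's exact derivation, is sketched below: independent non-Gaussian sources are generated by a cubic polynomial transform of Gaussian noise (shifting skewness and kurtosis), unitized, and then coupled through a Cholesky factor of the target correlation matrix. The cubic coefficients and correlation targets are illustrative.

```python
# Sketch: synthesizing correlated non-Gaussian channels. Cubic transforms of
# Gaussian noise set skewness/kurtosis of the sources; Cholesky mixing sets
# the correlations. Coefficients and targets are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

def non_gaussian_source(a1=1.0, a2=0.1, a3=0.05):
    """Cubic transform of Gaussian noise; a2 adds skewness, a3 adds kurtosis."""
    g = rng.standard_normal(n)
    s = a1 * g + a2 * (g**2 - 1.0) + a3 * g**3
    return (s - s.mean()) / s.std()          # 'unitize': zero mean, unit RMS

sources = np.vstack([non_gaussian_source(a2=0.15, a3=0.08),
                     non_gaussian_source(a2=-0.10, a3=0.12),
                     non_gaussian_source(a2=0.05, a3=0.15)])

target_corr = np.array([[1.0, 0.6, 0.3],
                        [0.6, 1.0, 0.4],
                        [0.3, 0.4, 1.0]])
signals = np.linalg.cholesky(target_corr) @ sources   # couple the channels

print("achieved correlations:\n", np.round(np.corrcoef(signals), 3))
for i, ch in enumerate(signals):
    print(f"channel {i}: skewness {stats.skew(ch):+.3f}, "
          f"kurtosis {stats.kurtosis(ch, fisher=False):.3f}")
```

Note that mixing changes each channel's skewness and kurtosis relative to the sources, which is precisely why the paper derives the joint relationships among sources, coupling parameters, and the first four moments before the closed-loop equalization step.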