To meet the training requirements for the daily operation of multirotor unmanned aerial vehicle (UAV) clusters, a UAV cluster collaborative task integrated simulation platform (UAV-TISP) was developed. The platform integrates a suite of hardware and software to simulate a range of collaborative UAV cluster operation scenarios. It features modules for collaborative task planning, UAV cluster simulation, and tactical monitoring. The platform significantly reduces training costs by eliminating dependence on physical drones while offering a flexible environment for testing swarm algorithms. UAV-TISP supports both individual UAV and swarm operations, incorporating high-fidelity flight dynamics, real-time communication via the user datagram protocol (UDP), and collision avoidance strategies. Built on the OSGEarth engine, it enables dynamic 3D environment visualization and scenario customization. Three key task scenarios (route flight, formation reconstruction, and formation transformation) were tested to validate the platform's efficacy. The results demonstrated robust formation maintenance, adaptive collision avoidance, and seamless task execution. A comparative analysis with Gazebo Sim revealed lower trajectory deviations in UAV-TISP, highlighting its superior accuracy in simulating real-world flight dynamics. Future work will focus on enhancing scalability for diverse UAV models, optimizing swarm networking under communication constraints, and expanding mission scenarios. UAV-TISP serves as a versatile tool for both operational training and advanced algorithm development in UAV cluster applications.
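For context, the real-time UDP link mentioned above amounts to each simulated UAV exchanging small state datagrams with the monitoring and planning modules. The sketch below is a minimal illustration of that pattern, assuming a JSON payload, a local port, and field names that are hypothetical rather than UAV-TISP's actual packet format.

```python
# Minimal sketch of UDP-based state exchange between a simulated UAV and a
# monitoring module; packet layout, port number, and field names are
# hypothetical illustrations, not UAV-TISP's actual protocol.
import json
import socket

TELEMETRY_PORT = 47000  # assumed port, for illustration only

def send_state(sock, uav_id, position, velocity, addr=("127.0.0.1", TELEMETRY_PORT)):
    """Broadcast one UAV's simulated state as a JSON datagram."""
    packet = {"id": uav_id, "pos": position, "vel": velocity}
    sock.sendto(json.dumps(packet).encode("utf-8"), addr)

def receive_states(sock, timeout=0.1):
    """Collect whatever state packets arrive within the timeout window."""
    sock.settimeout(timeout)
    states = []
    try:
        while True:
            data, _ = sock.recvfrom(4096)
            states.append(json.loads(data.decode("utf-8")))
    except socket.timeout:
        pass
    return states

if __name__ == "__main__":
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", TELEMETRY_PORT))
    send_state(tx, uav_id=1, position=[10.0, 5.0, 30.0], velocity=[1.0, 0.0, 0.0])
    print(receive_states(rx))
```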
The convergence of the Internet of things (IoT) and 5G holds immense potential for transforming industries by enabling real-time, massive-scale connectivity and automation. However, the growing number of devices connected to IoT systems demands a communication network capable of handling vast amounts of data with minimal delay. The enormous volumes of complex, high-dimensional, high-velocity data generated also pose challenges for storage, transmission, processing, and energy cost, given the limited computing capability, battery capacity, memory, and energy efficiency of current IoT networks. In this paper, a seamless architecture combining mobile and cloud computing is proposed. It coordinates 5G-IoT devices, sensor nodes, and mobile computing agilely and in a distributed manner, minimizing energy cost, providing high interoperability and high scalability, and overcoming memory constraints. An artificial intelligence (AI)-powered green and energy-efficient architecture is then proposed for 5G-IoT systems and sustainable smart cities. The experimental results reveal that the proposed approach dramatically reduces the transmitted data volume and power consumption and yields superior results in terms of interoperability, compression ratio, and energy saving. This is especially critical for enabling the deployment of 5G and even 6G wireless systems in smart cities.
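As a rough illustration of how the claimed reduction in transmitted data translates into energy saving at an IoT node, the sketch below computes a compression ratio and the corresponding radio-energy saving; the payload sizes and per-bit transmission energy are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope sketch relating data-volume reduction to radio energy
# saving for an IoT node; the per-bit energy figure and payload sizes are
# illustrative assumptions only.
def compression_ratio(raw_bytes: int, compressed_bytes: int) -> float:
    return raw_bytes / compressed_bytes

def radio_energy_saving(raw_bytes, compressed_bytes, energy_per_bit_nj=100.0):
    """Energy saved (in millijoules) by transmitting only the compressed payload."""
    saved_bits = (raw_bytes - compressed_bytes) * 8
    return saved_bits * energy_per_bit_nj * 1e-6  # nJ -> mJ

if __name__ == "__main__":
    raw, comp = 1_000_000, 120_000  # hypothetical sensor payload sizes in bytes
    print(f"compression ratio: {compression_ratio(raw, comp):.1f}x")
    print(f"radio energy saved: {radio_energy_saving(raw, comp):.1f} mJ")
```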
Side-channel analysis (SCA) has emerged as a research hotspot in the field of cryptanalysis. Among various approaches, unsupervised deep learning-based methods demonstrate powerful information extraction capabilities without requiring labeled data. However, existing unsupervised methods, particularly differential deep learning analysis (DDLA) and its improved variants, overcome the dependency on labeled data inherent in template analysis but still suffer from high time complexity and training costs when comparing key byte candidates. To address this issue, this paper introduces invariant information clustering (IIC) into SCA for the first time and proposes a novel unsupervised learning-based SCA method, named IIC-SCA. By leveraging mutual information maximization for automatic feature extraction from power leakage data, our approach achieves key recovery with a single training session, eliminating the prohibitive computational overhead of traditional methods that require separate training for every candidate key byte. Experimental results on the ASCAD dataset demonstrate successful key extraction using only 50,000 training traces and 2,000 attack traces. Furthermore, compared with DDLA, the proposed method reduces training time by approximately 93.40% and memory consumption by about 6.15%, significantly decreasing the temporal and resource costs of unsupervised SCA. This breakthrough provides new insights for developing low-cost, high-efficiency cryptographic attack methodologies.
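The core ingredient is the IIC objective: maximizing the mutual information between the cluster assignments of paired inputs. The sketch below shows the generic IIC loss (joint assignment matrix, symmetrization, negative mutual information), assuming softmax outputs of shape (N, C); the authors' network architecture, trace pairing, and key-recovery step are not reproduced here.

```python
# Sketch of the generic invariant information clustering (IIC) objective that
# IIC-SCA builds on: maximize mutual information between cluster assignments
# of paired traces. Not the authors' exact network or preprocessing.
import numpy as np

def iic_loss(z, z_prime, eps=1e-12):
    """z, z_prime: (N, C) softmax cluster assignments for paired inputs."""
    p = z.T @ z_prime / z.shape[0]      # joint distribution over cluster pairs
    p = (p + p.T) / 2.0                 # symmetrize
    p = np.clip(p, eps, None)
    pi = p.sum(axis=1, keepdims=True)   # row marginals
    pj = p.sum(axis=0, keepdims=True)   # column marginals
    # negative mutual information: -sum p * (log p - log pi - log pj)
    return -np.sum(p * (np.log(p) - np.log(pi) - np.log(pj)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(256, 16))
    z = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(f"IIC loss on random assignments: {iic_loss(z, z):.4f}")
```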
The rapid growth in the volume and number of malware-based cyber threats is not in itself the real danger; the real threat lies in the obfuscation of these cyberattacks, which constantly change their behavior and make detection more difficult. Numerous researchers and developers have devoted considerable attention to this topic; however, the field has not yet been saturated with high-quality studies that address these problems. For this reason, this paper presents a novel multi-objective Markov-enhanced adaptive whale optimization (MOMEAWO) cybersecurity model to improve the classification of binary and multi-class malware threats. The proposed MOMEAWO cybersecurity model aims to provide an innovative solution for analyzing, detecting, and classifying the behavior of obfuscated malware within their respective families. The model covers three classification tasks: Binary classification, four-family multi-class classification, and 16-family multi-class classification. To evaluate the model's performance, we used a recently published, balanced dataset, the Canadian Institute for Cybersecurity Malware Memory Analysis (CIC-MalMem-2022) dataset. The results show near-perfect accuracy in binary classification and high accuracy in multi-class classification compared with related work using the same dataset.
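For orientation, the sketch below shows the canonical whale optimization algorithm (WOA) position update on which MOMEAWO builds (encircling the best solution, random exploration, and the spiral move with a linearly decreasing coefficient); the Markov-enhanced, adaptive, and multi-objective extensions proposed in the paper are not reproduced, and the sphere objective is purely illustrative.

```python
# Sketch of the canonical whale optimization algorithm (WOA) that MOMEAWO
# extends; the paper's Markov-enhanced, adaptive, multi-objective variants are
# not reproduced here.
import numpy as np

def woa_minimize(fitness, dim=10, n_whales=30, iters=200, bounds=(-10.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_whales, dim))
    best = min(pop, key=fitness).copy()
    for t in range(iters):
        a = 2 - 2 * t / iters                       # linearly decreasing coefficient
        for i in range(n_whales):
            r, p, l = rng.random(), rng.random(), rng.uniform(-1, 1)
            A, C = 2 * a * r - a, 2 * r
            if p < 0.5:
                if abs(A) < 1:                      # encircle the best solution
                    pop[i] = best - A * np.abs(C * best - pop[i])
                else:                               # explore around a random whale
                    rand = pop[rng.integers(n_whales)]
                    pop[i] = rand - A * np.abs(C * rand - pop[i])
            else:                                   # spiral (bubble-net) update
                pop[i] = np.abs(best - pop[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            pop[i] = np.clip(pop[i], lo, hi)
            if fitness(pop[i]) < fitness(best):
                best = pop[i].copy()
    return best, fitness(best)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))        # toy objective for illustration
    _, val = woa_minimize(sphere)
    print(f"best fitness found: {val:.6f}")
```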
Single-phase non-isolated microinverters used in photovoltaic (PV) systems commonly encounter two persistent challenges: High-frequency leakage current and fluctuating power delivery. This paper presents a novel single-phase, non-isolated, multi-input microinverter topology with a common-ground structure that effectively eliminates ground leakage current without requiring additional active components. The proposed microinverter architecture integrates a dual-boost configuration and uses only four active switches. This low component count helps enhance reliability, reduce cost, and simplify the overall system design. With one, two, or four PV inputs, the converter can operate without interruption under unbalanced voltages or partial shading, even if some inputs drop to zero. A tailored modulation scheme minimizes conduction losses while maintaining a stable direct-current (DC)-link voltage, and a decoupling capacitor efficiently absorbs the single-phase pulsating power, thus overcoming a major limitation of existing microinverter designs. The topology was validated with a 1-kW GaN-based prototype; both simulation and experimental results demonstrate its high efficiency, robustness, and practical suitability for cost-effective PV applications, with a peak efficiency of 94.8%.
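The decoupling capacitor mentioned above must buffer the double-line-frequency ripple energy. The sketch below applies the standard single-phase sizing relation C = P / (2*pi*f_line*V_dc*dV); only the 1-kW power rating comes from the abstract, while the 400-V DC-link voltage, 50-Hz grid frequency, and 5% ripple target are assumptions made purely for illustration.

```python
# Worked sizing sketch for a DC-link decoupling capacitor absorbing the
# double-line-frequency pulsating power; the 400-V link, 50-Hz grid, and 5%
# ripple target are illustrative assumptions, only the 1-kW rating is quoted.
import math

def decoupling_capacitance(p_avg_w, f_line_hz, v_dc, delta_v):
    """Standard single-phase result: C = P / (2*pi*f_line * V_dc * dV)."""
    return p_avg_w / (2 * math.pi * f_line_hz * v_dc * delta_v)

if __name__ == "__main__":
    v_dc = 400.0  # assumed DC-link voltage
    c = decoupling_capacitance(p_avg_w=1000.0, f_line_hz=50.0,
                               v_dc=v_dc, delta_v=0.05 * v_dc)
    print(f"required capacitance: {c * 1e6:.0f} uF")
```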
Neural network-based methods for intrapulse modulation recognition in radar signals have demonstrated significant improvements in classification accuracy. However, these approaches often rely on complex network structures, resulting in high computational resource requirements that limit their practical deployment in real-world settings. To address this issue, this paper proposes the Bottleneck Residual Network with Efficient Soft-Thresholding (BRN-EST), which integrates multiple lightweight design strategies and noise-reduction modules to maintain high recognition accuracy while significantly reducing computational complexity. Experimental results on a classical low-probability-of-intercept (LPI) radar signal dataset demonstrate that BRN-EST achieves accuracy comparable to state-of-the-art methods while reducing computational complexity by approximately 50%.
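The noise-reduction module centres on soft thresholding, which shrinks small, presumably noise-dominated, activations toward zero. The sketch below shows the bare operation on a noisy signal with a fixed threshold; how BRN-EST learns its thresholds and wires them into the bottleneck residual blocks is not reproduced here.

```python
# Minimal sketch of the soft-thresholding operation underlying the
# noise-reduction module; the learned thresholds and residual-shrinkage wiring
# of BRN-EST are not reproduced.
import numpy as np

def soft_threshold(x, tau):
    """Shrink values toward zero: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.sin(np.linspace(0, 4 * np.pi, 16)) + 0.2 * rng.normal(size=16)
    print(np.round(soft_threshold(noisy, tau=0.2), 3))
```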
This article presents a compact crab-shaped gain-reconfigurable antenna (CSRA) designed for 5G sub-6 GHz wireless applications. The antenna achieves enhanced gain in a miniaturized form factor by incorporating a hexagonal split-ring structure controlled via two radio frequency (RF) positive-intrinsic-negative (PIN) diodes (BAR64-02V). While the antenna is primarily designed to operate at 3.50 GHz for sub-6 GHz 5G applications, RF switching enables the CSRA to cover a broader frequency spectrum, including the S-band, X-band, and portions of the Ku-band. The proposed antenna offers several advantages: It is low-cost (fabricated on an FR-4 substrate), compact (achieving a 64.07% size reduction compared to conventional designs), and features both frequency and gain reconfigurability through digitally controlled PIN diode switching. The reflection coefficients of the antenna, both without diodes and across all four switching states, were experimentally validated in the laboratory using a Keysight FieldFox microwave analyzer (N9916A, 14 GHz). The simulated radiation patterns and gain characteristics closely matched the measured values, demonstrating excellent agreement. This study bridges the gap between traditional and next-generation antenna designs by offering a compact, cost-effective, and high-performance solution for multiband, reconfigurable wireless communication systems. The integration of double-split-ring resonators and dynamic reconfigurability makes the proposed antenna a strong candidate for various applications, including S-band and X-band systems, as well as the emerging lower 6G band (7.125–8.400 GHz).
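Two PIN diodes give the four switching states referred to above (D1D2 = 00, 01, 10, 11). The sketch below simply enumerates those states and maps an assumed resonance of each to its IEEE band letter; only the 3.50-GHz primary mode comes from the abstract, and the other resonances are hypothetical placeholders rather than measured values.

```python
# Illustrative enumeration of the four PIN-diode states and a simple IEEE band
# lookup; the state-to-frequency mapping is hypothetical, only the 3.50-GHz
# primary mode is quoted in the abstract.
BANDS = [("S", 2.0, 4.0), ("C", 4.0, 8.0), ("X", 8.0, 12.0), ("Ku", 12.0, 18.0)]

def band_of(freq_ghz):
    for name, lo, hi in BANDS:
        if lo <= freq_ghz < hi:
            return name
    return "outside listed bands"

if __name__ == "__main__":
    # Hypothetical resonances for the four (D1, D2) states, for illustration only.
    states = {(0, 0): 3.50, (0, 1): 2.90, (1, 0): 9.50, (1, 1): 13.20}
    for (d1, d2), f in states.items():
        print(f"D1={d1} D2={d2}: {f:.2f} GHz -> {band_of(f)}-band")
```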
In an era of rapidly expanding wireless technologies, the push for greater spectral efficiency and better signal integrity has intensified the need for highly efficient, low-noise amplifiers (LNAs). A two-stage LNA based on a GaAs/InGaAs pseudomorphic high electron mobility transistor (pHEMT) with a relatively large gate length of 2 μm is designed for high-performance 2.4-GHz wireless communication. The I-V characteristics and two-port high-frequency S-parameters of the transistor are measured using on-wafer probing techniques. The results indicate that a discrete transistor with a gate size of 2 μm × 50 μm can provide a maximum transconductance of 16 mS, corresponding to a maximum current-gain cut-off frequency of 7 GHz and a maximum oscillation frequency of 8 GHz at a 1-V drain-source voltage. With impedance-matching networks based on transmission-line techniques, an extended integrated layout structure is designed and simulated using the Momentum simulation tool embedded in Advanced Design System to alleviate the trade-off between the noise figure (NF) and gain of the circuit. The findings show that the GaAs/InGaAs transistor is capable of delivering high performance with power consumption as low as 16 mW, achieving a maximum simulated gain of 21.5 dB and a minimum NF of 2.4 dB. In terms of linearity, the proposed LNA provides an output 1-dB compression point of −3 dBm and an output third-order intercept point of 10 dBm. A bandwidth of 0.12 GHz and a figure-of-merit of 12 are obtained, comparable to those of existing pHEMT-based LNAs. Such a device may help accelerate the development of more robust and power-efficient front-end modules in modern wireless systems, especially for performance-driven applications.
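For readers less accustomed to dB and dBm figures, the sketch below converts the quoted gain, NF, output 1-dB compression point, and OIP3 into linear quantities using the standard definitions; no values beyond those stated in the abstract are assumed, and the paper's exact figure-of-merit definition is not reproduced.

```python
# Quick unit-conversion sketch relating the reported dB/dBm figures to linear
# quantities; the numeric inputs are the values quoted in the abstract and the
# conversions are the standard definitions.
def db_to_linear(db):   # power-ratio dB -> linear ratio
    return 10 ** (db / 10)

def dbm_to_mw(dbm):     # dBm -> milliwatts
    return 10 ** (dbm / 10)

if __name__ == "__main__":
    print(f"gain 21.5 dB   -> {db_to_linear(21.5):.0f}x power gain")
    print(f"NF 2.4 dB      -> noise factor {db_to_linear(2.4):.2f}")
    print(f"OP1dB -3 dBm   -> {dbm_to_mw(-3):.2f} mW")
    print(f"OIP3 10 dBm    -> {dbm_to_mw(10):.1f} mW")
```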