Mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) detection constitute the key technologies for space-based Earth observation and astronomical detection. The advanced ability of infrared (IR) detection technology to penetrate the atmosphere and identify camouflaged targets makes it well suited to space-based remote sensing. Thus, such detectors play an essential role in detecting and tracking low-temperature, far-distance moving targets. However, because space-based IR detection systems are deployed in diverse scenarios, unique demands are placed on the key parameters of IR detectors. We review the developments and features of MWIR and LWIR detectors with a particular focus on their applications in space-based detection. We conduct a comprehensive analysis of key performance indicators for IR detection systems, including the ground sampling distance (GSD), operation range, and noise equivalent temperature difference (NETD), and of their interconnections with IR detector parameters. Additionally, the influences of pixel pitch, focal plane array size, and operating temperature on space-based IR remote sensing are evaluated. The development requirements and technical challenges of MWIR and LWIR detection systems are also identified with a view to achieving high-quality space-based observation platforms.
The industrial Internet, driven by the deep integration of new-generation information and communication technology (ICT) and advanced manufacturing technology, will open up the production chain, value chain, and industry chain by establishing complete interconnections between humans, machines, and things. It will also enable novel manufacturing and service modes, where personalized and customized production for differentiated services is a typical paradigm of future intelligent manufacturing. Thus, there is an urgent need to break through the existing chimney-like service mode imposed by the hierarchical heterogeneous network architecture and to establish a transparent channel for manufacturing and services based on a flat network architecture. Starting from the basic concepts of process manufacturing and discrete manufacturing, we first analyze the basic requirements of typical manufacturing tasks. Then, with an overview of the development of the industrial Internet, we systematically compare current networking technologies and analyze the problems of the present industrial Internet. On this basis, we propose establishing a novel “thin waist” that integrates sensing, communication, computing, and control for the future industrial Internet. Furthermore, we provide an in-depth analysis and discussion of the key challenges and future research issues regarding multi-dimensional collaborative task–resource sensing, end-to-end deterministic communication over heterogeneous networks, and virtual computing and operation control of the industrial Internet.
Camouflaged targets are a type of nonsalient target with high foreground–background fusion and minimal target feature information, making target recognition extremely difficult. Most detection algorithms for camouflaged targets use only the target’s single-band information, resulting in low detection accuracy and a high missed-detection rate. We present a multimodal image fusion camouflaged target detection technique (MIF-YOLOv5) in this paper. First, we provide a multimodal image input to achieve pixel-level fusion of the camouflaged target’s optical and infrared images, enriching the effective feature information of the camouflaged target. Second, a loss function is created, and the K-Means++ clustering technique is used to optimize the target anchor frames in the dataset to increase the accuracy and robustness of camouflaged-personnel detection. Finally, a comprehensive detection index of camouflaged targets is proposed to compare the overall effectiveness of various approaches. More importantly, we construct a multispectral camouflaged target dataset to evaluate the proposed technique. Experimental results show that the proposed method has the best comprehensive detection performance, with a detection accuracy of 96.5%, a recognition probability of 92.5%, a parameter increase of only 1×10⁴, a theoretical computation increase of only 0.03 GFLOPs, and a comprehensive detection index of 0.85. The advantage of this method in terms of detection accuracy is also apparent in performance comparisons with other target detection algorithms.
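To make the anchor-optimization step concrete, the following is a minimal sketch, assuming the ground-truth boxes have already been extracted from the dataset (the box values below are illustrative, not from the paper's dataset):

```python
# Sketch of K-Means++ anchor optimization of the kind used in MIF-YOLOv5's
# anchor-frame step. The box list is hypothetical illustrative data.
import numpy as np
from sklearn.cluster import KMeans

# (width, height) of ground-truth boxes; in practice these would be
# extracted from the camouflaged target dataset annotations.
boxes_wh = np.array([[32, 64], [30, 70], [120, 90],
                     [110, 100], [60, 60], [55, 58]], dtype=float)

# init="k-means++" selects the seeding strategy named in the abstract.
kmeans = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0)
kmeans.fit(boxes_wh)

# Sort anchors by area so they can be assigned to detection scales.
anchors = kmeans.cluster_centers_
anchors = anchors[np.argsort(anchors.prod(axis=1))]
print("optimized anchors (w, h):\n", anchors)
```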
At the Annual International Cryptology Conference in 2019, Gohr introduced a deep learning based cryptanalysis technique applicable to reduced-round lightweight block ciphers with a short block, such as SPECK32/64. One significant challenge left unstudied by Gohr’s work is the implementation of deep learning based key recovery attacks on large-state block ciphers. The purpose of this paper is to present an improved deep learning based framework for recovering keys of large-state block ciphers. First, we propose a key bit sensitivity test (KBST) based on deep learning to divide the key space objectively. Second, we propose a new method for constructing neural distinguisher combinations to improve the deep learning based key recovery framework for large-state block ciphers, and we demonstrate its rationality and effectiveness from the perspective of cryptanalysis. Under the improved key recovery framework, we train an efficient neural distinguisher combination for each large-state member of SIMON and SPECK and carry out practical key recovery attacks on these members. Furthermore, our attack on 13-round SIMON64 is, to our knowledge, the most effective practical key recovery attack to date. Notably, this is the first attempt to propose deep learning based practical key recovery attacks on 18-round SIMON128, 19-round SIMON128, 14-round SIMON96, and 14-round SIMON64. Additionally, we improve the results of practical key recovery attacks on the large-state members of SPECK, raising the success rate of key recovery in comparison with existing results.
The synthetic minority oversampling technique (SMOTE) is a popular algorithm for reducing the impact of class imbalance when building classifiers, and it has received numerous enhancements over the past 20 years. SMOTE and its variants synthesize minority-class sample points in the original sample space to alleviate the adverse effects of class imbalance. This approach works well in many cases, but problems arise when synthetic sample points are generated in overlapping areas between different classes, which further complicates classifier training. To address this issue, this paper proposes a novel generalization-oriented rather than imputation-oriented minority-class sample point generation algorithm, named overlapping minimization SMOTE (OM-SMOTE). This algorithm is designed specifically for binary imbalanced classification problems. OM-SMOTE first maps the original sample points into a new sample space by balancing sample encoding and classifier generalization. Then, OM-SMOTE employs a set of sophisticated minority-class sample point imputation rules to generate synthetic sample points that are as far as possible from the overlapping areas between classes. Extensive experiments have been conducted on 32 imbalanced datasets to validate the effectiveness of OM-SMOTE. The results show that using OM-SMOTE to generate synthetic minority-class sample points leads to better classifier training performance for the naive Bayes, support vector machine, decision tree, and logistic regression classifiers than 11 state-of-the-art SMOTE-based imputation algorithms. This demonstrates that OM-SMOTE is a viable approach for supporting the training of high-quality classifiers for imbalanced classification. The implementation of OM-SMOTE is shared publicly on the GitHub platform at
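For readers unfamiliar with the interpolation step that all SMOTE variants share, the following is a minimal sketch of classic SMOTE-style synthesis; OM-SMOTE's overlap-minimizing mapping and imputation rules are specific to the paper and are not reproduced here:

```python
# Minimal sketch of the classic SMOTE interpolation that OM-SMOTE builds on:
# each synthetic point lies on the segment between a minority sample and one
# of its k nearest minority neighbors. The toy data are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like(minority, n_synthetic, k=5, rng=np.random.default_rng(0)):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(minority)
    _, idx = nn.kneighbors(minority)           # idx[:, 0] is the point itself
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(minority))        # pick a minority sample
        j = rng.choice(idx[i, 1:])             # one of its k neighbors
        lam = rng.random()                     # interpolation coefficient
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.asarray(synthetic)

minority = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
                     [1.1, 1.3], [0.9, 0.8], [1.3, 1.0]])
print(smote_like(minority, n_synthetic=4))
```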
Phone number recycling (PNR) refers to the event wherein a mobile operator collects a disconnected number and reassigns it to a new owner. PNR poses a threat to the reliability of existing authentication solutions for e-commerce platforms. Specifically, a new owner of a reassigned number can access the application account with which the number is associated and may perform fraudulent activities. Existing solutions that employ a reassigned-number database from mobile operators are costly for e-commerce platforms with large-scale user bases. Thus, alternative solutions that depend only on the information held by the applications are imperative. In this work, we study the problem of detecting accounts that have been compromised owing to the reassignment of phone numbers. Our analysis of Meituan’s real-world dataset shows that compromised accounts have unique statistical features and temporal patterns. Based on these observations, we propose a novel model called the temporal pattern and statistical feature fusion model (TSF) to tackle this problem; it integrates a temporal pattern encoder and a statistical feature encoder to capture behavioral evolutionary interactions and significant operation features. Extensive experiments on the Meituan and IEEE-CIS datasets show that TSF significantly outperforms the baselines, demonstrating its effectiveness in detecting accounts compromised by reassigned numbers.
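As a hedged illustration of the two-encoder fusion idea (not the paper's exact architecture; the layer sizes, feature counts, and GRU choice are assumptions), a TSF-style model could be sketched as follows:

```python
# Sketch of a two-encoder fusion model in the spirit of TSF: a GRU over the
# account's operation sequence (temporal pattern encoder) plus an MLP over
# aggregate statistics (statistical feature encoder), fused for a binary
# "compromised" prediction. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TSFSketch(nn.Module):
    def __init__(self, n_event_types=50, emb=16, hid=32, n_stats=12):
        super().__init__()
        self.embed = nn.Embedding(n_event_types, emb)   # temporal encoder
        self.gru = nn.GRU(emb, hid, batch_first=True)
        self.stat_mlp = nn.Sequential(                  # statistical encoder
            nn.Linear(n_stats, hid), nn.ReLU(), nn.Linear(hid, hid))
        self.head = nn.Linear(2 * hid, 1)               # fused classifier

    def forward(self, events, stats):
        _, h = self.gru(self.embed(events))             # h: (1, B, hid)
        fused = torch.cat([h.squeeze(0), self.stat_mlp(stats)], dim=-1)
        return torch.sigmoid(self.head(fused))          # P(compromised)

model = TSFSketch()
events = torch.randint(0, 50, (4, 20))   # 4 accounts, 20 operations each
stats = torch.randn(4, 12)               # 12 aggregate statistics per account
print(model(events, stats).shape)        # torch.Size([4, 1])
```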
A practical fixed-time adaptive fuzzy control strategy is investigated for uncertain nonlinear systems with time-varying asymmetric constraints and input quantization. To overcome the difficulties of designing controllers under state constraints, a unified barrier function approach is employed to construct a coordinate transformation that maps the original constrained system to an equivalent unconstrained one, thereby relaxing the time-varying asymmetric constraints on the system states and avoiding the feasibility check condition typically required in traditional barrier Lyapunov function based control approaches. Meanwhile, the “explosion of complexity” problem in the traditional backstepping approach, which arises from the repeated differentiation of virtual controllers, is solved by using the command filter method. It is verified via the fixed-time Lyapunov stability criterion that the system output can track a desired signal within a small error range in a predetermined time and that all system states remain within the constraint ranges. Finally, two simulation examples are offered to demonstrate the effectiveness of the proposed strategy.
A low-profile dual-broadband dual-circularly-polarized (dual-CP) reflectarray (RA) is proposed and demonstrated, supporting independent beamforming for right-/left-handed CP waves in both the K-band and the Ka-band. This functionality is achieved by incorporating multi-layered phase-shifting elements operating individually in the K- and Ka-bands, which are interleaved in a shared aperture, resulting in a cell thickness of only about 0.1λL. By rotating the designed K- and Ka-band elements around their own geometrical centers, the dual-CP waves in each band can be modulated separately. To reduce the overall profile, broadband planar K-/Ka-band dual-CP feeds are designed based on magnetoelectric dipoles and multi-branch hybrid couplers. The planar feeds achieve bandwidths of about 32% and 26% in the K- and Ka-bands respectively, with reflection magnitudes below −13 dB, an axial ratio smaller than 2 dB, and a gain variation of less than 1 dB. A proof-of-concept dual-band dual-CP RA integrated with the planar feeds is fabricated and characterized; it is capable of generating asymmetrically distributed dual-band dual-CP beams. The measured peak gain values of the beams are around 24.3 and 27.3 dBic, with joint 1-dB gain variation and 2-dB axial ratio bandwidths wider than 20.6% and 14.6% in the lower and higher bands, respectively. The demonstrated dual-broadband dual-CP RA, with four degrees of freedom for beamforming, is a promising candidate for space and satellite communications.
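The element-rotation technique mentioned above exploits the standard geometric (Pancharatnam–Berry) phase relation for circularly polarized waves: mechanically rotating an element by an angle ψ about its center shifts the phase of the reflected co-polarized CP wave by 2ψ, with opposite signs for the two handednesses (Γ₀ below denotes the unrotated element’s reflection coefficient):

```latex
% Geometric phase of a rotated CP reflectarray element: a mechanical
% rotation psi yields an electrical phase of 2*psi, with opposite signs
% for right- and left-handed circular polarization.
\Gamma_{\mathrm{RHCP}}(\psi) = \Gamma_0\, e^{+j2\psi}, \qquad
\Gamma_{\mathrm{LHCP}}(\psi) = \Gamma_0\, e^{-j2\psi}
```

This opposite-sign dependence is what allows the right- and left-handed CP beams in each band to be modulated separately by element rotation alone.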
Ultrafast fiber lasers are indispensable components in the field of ultrafast optics, and their continuous performance advancements are driving the progress of this exciting discipline. Micro/nanofibers (MNFs) possess unique properties, such as a large fractional evanescent field, flexible and controllable dispersion, and high nonlinearity, making them highly valuable for generating ultrashort pulses. In particular, for mode-locking and for dispersion and nonlinearity management, MNFs provide an excellent platform for investigating intriguing nonlinear dynamics and related phenomena, thereby promoting the advancement of ultrafast fiber lasers. In this paper, we present an introduction to the mode evolution and characteristics of MNFs, followed by a comprehensive review of recent advances in using MNFs for ultrafast optics applications, including evanescent field modulation and control, dispersion and nonlinearity management techniques, and the exploration of nonlinear dynamical phenomena. Finally, we discuss the potential application prospects of MNFs in the realm of ultrafast optics.
Quantitative investment (abbreviated as “quant” in this paper) is an interdisciplinary field combining financial engineering, computer science, mathematics, statistics, etc. Quant has become one of the mainstream investment methodologies over the past decades, and has experienced three generations: quant 1.0, trading by mathematical modeling to discover mis-priced assets in markets; quant 2.0, shifting the quant research pipeline from small “strategy workshops” to large “alpha factories”; quant 3.0, applying deep learning techniques to discover complex nonlinear pricing rules. Despite its advantage in prediction, deep learning relies on extremely large data volume and labor-intensive tuning of “black-box” neural network models. To address these limitations, in this paper, we introduce quant 4.0 and provide an engineering perspective for next-generation quant. Quant 4.0 has three key differentiating components. First, automated artificial intelligence (AI) changes the quant pipeline from traditional hand-crafted modeling to state-of-the-art automated modeling and employs the philosophy of “algorithm produces algorithm, model builds model, and eventually AI creates AI.” Second, explainable AI develops new techniques to better understand and interpret investment decisions made by machine learning black boxes, and explains complicated and hidden risk exposures. Third, knowledge-driven AI supplements data-driven AI such as deep learning and incorporates prior knowledge into modeling to improve investment decisions, in particular for quantitative value investing. Putting all these together, we discuss how to build a system that practices the quant 4.0 concept. We also discuss the application of large language models in quantitative finance. Finally, we propose 10 challenging research problems for quant technology, and discuss potential solutions, research directions, and future trends.
Internet of Things (IoT) devices are becoming increasingly ubiquitous, and their adoption is growing at an exponential rate. However, they are vulnerable to security breaches, and traditional security mechanisms are not enough to protect them. The massive amounts of data generated by IoT devices can be easily manipulated or stolen, posing significant privacy concerns. This paper provides a comprehensive overview of the integration of blockchain and IoT technologies and their potential to enhance the security and privacy of IoT systems. The paper examines various security issues and vulnerabilities in IoT and explores how blockchain-based solutions can be used to address them. It also discusses the potential applications of blockchain-based IoT (B-IoT) systems in various sectors, such as healthcare, transportation, and supply chain management. The paper reveals that the integration of blockchain and IoT has the potential to enhance the security, privacy, and trustworthiness of IoT systems. The multi-layered architecture of B-IoT, consisting of perception, network, data processing, and application layers, provides a comprehensive framework for the integration of blockchain and IoT technologies. The study identifies various security solutions for B-IoT, including smart contracts, decentralized control, immutable data storage, identity and access management (IAM), and consensus mechanisms. It also discusses the challenges and future research directions in the field of B-IoT.
Many areas now generate data streams that contain privacy-sensitive information. Although the sharing and release of these data are of great commercial value, releasing them directly would disclose private user information. Therefore, how to continuously generate publishable histograms (meeting privacy protection requirements) based on sliding data stream windows has become a critical issue, especially when sending data to an untrusted third party. Existing histogram publication methods are unsatisfactory in terms of time and storage costs, because they must cache all elements in the current sliding window (SW). Our work addresses this drawback by designing an efficient online histogram publication (EOHP) method for local differential privacy data streams. Specifically, in the EOHP method, the data collector first constructs a histogram of the current SW using an approximate counting method. Second, the data collector reduces the privacy budget by using an optimized budget absorption mechanism and adds appropriate noise to the approximate histogram, making it possible to publish the histogram while retaining satisfactory data utility. Extensive experimental results on two different real datasets show that the EOHP algorithm significantly reduces the time and storage costs and improves data utility compared to existing algorithms.
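The noise-addition step can be illustrated with a minimal sketch of Laplace perturbation under a per-publication budget ε; the approximate counting and optimized budget absorption mechanisms are the paper's contributions and are omitted here:

```python
# Minimal sketch of differentially private histogram publication: perturb
# the (approximate) histogram of the current sliding window with Laplace
# noise calibrated to budget eps and sensitivity of the counting query.
import numpy as np

def publish_histogram(counts, eps, sensitivity=1.0,
                      rng=np.random.default_rng(0)):
    scale = sensitivity / eps                     # Laplace scale b = Δf / ε
    noisy = counts + rng.laplace(0.0, scale, size=len(counts))
    return np.clip(noisy, 0, None)                # post-process: no negatives

window_counts = np.array([120, 45, 78, 200, 13], dtype=float)
print(publish_histogram(window_counts, eps=0.5))
```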
The performance of complementary metal oxide semiconductor (CMOS) circuits is affected by electromagnetic interference (EMI), and studying a circuit’s ability to resist EMI facilitates the design of circuits with better performance. Current-mode CMOS circuits have developed continuously in recent years because of their advantages in speed and power consumption over conventional circuits in deep submicron processes; their EMI resistance deserves further study. This paper introduces three kinds of NOT gate circuits: conventional voltage-mode CMOS, MOS current-mode logic (MCML) with voltage input and output signals, and current-mode CMOS with current input and output signals. The effects of EMI on the three NOT gate circuits are investigated through Cadence Virtuoso simulations, and a disturbance level factor is defined to compare the effects of different interference terminals, interference signal waveforms, and interference signal frequencies on the circuits in the 65 nm process. The relationship between input resistance and EMI resistance is investigated by varying the value of the cascade resistance at the input of the current-mode CMOS circuits. Simulation results show that the current-mode CMOS circuits resist EMI better at high operating frequencies, and the higher their operating frequency, the better their EMI resistance. Additionally, the effects of different temperatures and different processes on the EMI resistance of the three circuits are also studied. In the temperature range of −40 °C to 125 °C, the higher the temperature, the weaker the EMI resistance of the voltage-mode CMOS and MCML circuits, and the stronger that of the current-mode CMOS circuits. In the 28 nm process, the current-mode CMOS circuit shows relatively stronger EMI resistance than the other two kinds of circuits. The relative EMI resistance of the voltage-mode CMOS and MCML circuits in the 28 nm process is similar to that in the 65 nm process, while that of the current-mode CMOS circuits is stronger in the 28 nm process than in the 65 nm process. This study provides a basis for designing current-mode CMOS circuits that resist EMI.
We investigate the impact of network topology characteristics on flocking fragmentation for a multi-robot system operating over a multi-hop, lossy ad hoc network, including the network’s hop count features and the successful transmission probability (STP) of information. Specifically, we first propose a distributed communication–calculation–execution protocol to describe the practical interaction and control process in the ad hoc network based multi-robot system, where flocking control is realized by a discrete-time Olfati-Saber model incorporating STP-related variables. Then, we develop a fragmentation prediction model (FPM) to formulate the impact of hop count features on fragmentation for specific flocking scenarios. This model identifies the critical system and network features associated with fragmentation. Further considering general flocking scenarios affected by both hop count features and STP, we formulate the flocking fragmentation probability (FFP) using a data fitting model based on a back propagation neural network, whose input is extracted from the FPM. The FFP formulation quantifies the impact of key network topology characteristics on fragmentation phenomena. Simulation results verify the effectiveness and accuracy of the proposed prediction model and FFP formulation, and several guidelines for constructing the multi-robot ad hoc network are provided.
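For reference, the standard continuous-time Olfati-Saber protocol, of which the paper's model is a discrete-time variant incorporating STP-related variables, combines a gradient-based interaction term, a velocity-consensus term, and navigation feedback toward a reference pair (q_r, p_r), where q_i and p_i denote the position and velocity of robot i:

```latex
% Standard Olfati-Saber flocking protocol (continuous-time form):
% gradient-based interaction + velocity consensus + navigation feedback.
u_i = \sum_{j \in N_i} \phi_\alpha\!\left(\|q_j - q_i\|_\sigma\right)\mathbf{n}_{ij}
    + \sum_{j \in N_i} a_{ij}(q)\,(p_j - p_i)
    - c_1\,(q_i - q_r) - c_2\,(p_i - p_r)
```

Fragmentation arises when packet loss and multi-hop delay corrupt the neighbor states q_j and p_j on which the first two terms rely.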
This paper concerns the event-triggered distributed cross-dimensional formation control problem of heterogeneous multi-agent systems (HMASs) subject to limited network resources. The central aim is to design an effective distributed formation control scheme that achieves the desired formation objectives even under restricted communication. To this end, a multi-dimensional HMAS is first developed, in which agents are assigned to several subgroups based on their dimensions. Then, to mitigate the excessive consumption of communication resources, a cross-dimensional event-triggered communication mechanism is designed to reduce the information interaction among agents with different dimensions. Under the proposed event-based communication mechanism, the HMAS cross-dimensional formation control problem is transformed into the asymptotic stability problem of a closed-loop error system. Furthermore, several stability criteria for designing a cross-dimensional formation control protocol and communication schedule are presented for an environment with no information interaction among follower agents. Finally, a simulation case study is provided to validate the effectiveness of the proposed formation control protocol.
This article investigates the event-triggered adaptive neural network (NN) tracking control problem with deferred asymmetric time-varying (DATV) output constraints. To deal with the DATV output constraints, an asymmetric time-varying barrier Lyapunov function (ATBLF) is first built to simplify the stability analysis and the controller construction. Second, an event-triggered adaptive NN tracking controller is constructed by incorporating an error-shifting function, which ensures that the tracking error converges to an arbitrarily small neighborhood of the origin within a predetermined settling time, consequently optimizing the utilization of network resources. It is theoretically proven that all signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB), even when the initial value lies outside the constraint boundary. Finally, a single-link robotic arm (SLRA) application example is employed to verify the viability of the proposed control algorithm.
A critical step in digital dentistry is to accurately and automatically characterize the orientation and position of individual teeth, which can subsequently be used for treatment planning and simulation in orthodontic tooth alignment. This problem remains challenging because the geometric features of different teeth are complicated and vary significantly, while a reliable large-scale dataset is yet to be constructed. In this paper, we propose a novel method for automatic tooth orientation estimation by formulating it as a six-degree-of-freedom (6-DoF) tooth pose estimation task. Treating each tooth as a three-dimensional (3D) point cloud, we design a deep neural network with a feature-extractor backbone and a two-branch estimation head for tooth pose estimation. Our model, trained with a novel loss function on the newly collected large-scale dataset (10 393 patients with 280 611 intraoral tooth scans), achieves an average Euler angle error of only 4.780°–5.979° and an L1 translation error of 0.663 mm on a hold-out set of 2598 patients (77 870 teeth). Comprehensive experiments show that 98.29% of the estimations produce a mean angle error of less than 15°, which is acceptable for many clinical and industrial applications.
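A hedged sketch of the two-branch design follows; the PointNet-style backbone and all layer sizes are illustrative assumptions rather than the paper's exact network:

```python
# Sketch of a two-branch 6-DoF pose head over a point-cloud backbone: a
# per-point shared MLP with global max pooling (PointNet-like) feeds one
# branch for Euler angles and one for translation.
import torch
import torch.nn as nn

class ToothPoseSketch(nn.Module):
    def __init__(self, feat=128):
        super().__init__()
        self.backbone = nn.Sequential(        # per-point MLP
            nn.Conv1d(3, 64, 1), nn.ReLU(), nn.Conv1d(64, feat, 1), nn.ReLU())
        self.rot_head = nn.Linear(feat, 3)    # branch 1: Euler angles
        self.trans_head = nn.Linear(feat, 3)  # branch 2: translation (mm)

    def forward(self, pts):                   # pts: (B, 3, N) tooth point cloud
        f = self.backbone(pts).max(dim=2).values   # global max pooling
        return self.rot_head(f), self.trans_head(f)

pts = torch.randn(2, 3, 1024)                 # 2 teeth, 1024 points each
angles, trans = ToothPoseSketch()(pts)
print(angles.shape, trans.shape)              # torch.Size([2, 3]) twice
```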
Dynamic bandwidth allocation (DBA) is a fundamental challenge in networking. Rapid, accurate, and fair allocation of bandwidth is crucial for network service providers to fulfill service-level agreements, alleviate link congestion, and devise strategies to counter network attacks. However, existing bandwidth allocation algorithms operate mainly on the control plane of the software-defined networking paradigm, which can lead to considerable probing overhead and convergence latency. Moreover, contemporary network architectures necessitate a hierarchical bandwidth allocation system that addresses latency requirements. We introduce a fine-grained, hierarchical, and scalable DBA algorithm, i.e., the HSDBA algorithm, implemented on the programmable data plane. This algorithm reduces network overhead and latency between the data plane and the controller, and it is proficient in dynamically adding and removing network configurations. We investigate the practicality of HSDBA using protocol-oblivious forwarding switches. Experimental results show that HSDBA achieves fair bandwidth allocation and isolation guarantees within approximately 25 packets. It converges 1.5 times as fast as the most recent algorithm, approximate hierarchical allocation of bandwidth (AHAB), while maintaining a bandwidth enforcement accuracy of 98.1%.
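For intuition about the fairness objective, the following is a minimal sketch of classic max-min fair (water-filling) bandwidth allocation; HSDBA's data-plane implementation and hierarchy handling are, of course, far more involved:

```python
# Water-filling max-min fair allocation: repeatedly split the remaining
# capacity equally among unsatisfied flows; flows whose demand fits within
# the equal share are capped at their demand, and the leftover capacity is
# redistributed among the rest.
def max_min_fair(capacity, demands):
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    while active:
        share = capacity / len(active)
        done = {f for f in active if demands[f] - alloc[f] <= share}
        if not done:                      # no flow is satisfiable: equal split
            for f in active:
                alloc[f] += share
            break
        for f in done:                    # cap satisfied flows at their demand
            capacity -= demands[f] - alloc[f]
            alloc[f] = demands[f]
        active -= done
    return alloc

print(max_min_fair(10.0, {"A": 8.0, "B": 2.5, "C": 1.0}))
# -> {'A': 6.5, 'B': 2.5, 'C': 1.0}: small flows fully served, A gets the rest
```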
This paper uses the semi-tensor product (STP) of matrices and adopts algebraic methods to study the controllability, reachability, and stabilizability of extended finite state machines (EFSMs). First, we construct the bilinear dynamic system model of the EFSM, laying the foundation for further research. Second, combined with this bilinear dynamic system model, we propose theorems for the controllability, reachability, and stabilizability of the bilinear dynamic system model of the EFSM. Finally, we design an algorithm to determine the controllability and stabilizability of the EFSM. The correctness of the main results is verified through examples.
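For readers unfamiliar with the STP, the standard definition used in this line of work is:

```latex
% (Left) semi-tensor product of an (m x n) matrix A and a (p x q) matrix B,
% where \otimes is the Kronecker product and t = lcm(n, p):
A \ltimes B = \left(A \otimes I_{t/n}\right)\left(B \otimes I_{t/p}\right),
\qquad t = \operatorname{lcm}(n, p)
```

The STP reduces to the ordinary matrix product when n = p; it is this generalization to arbitrary dimensions that allows logical dynamics such as EFSMs to be written in bilinear form.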
Reinforcement learning (RL) has shown significant potential for dealing with complex decision-making problems. However, its performance relies heavily on the availability of a large amount of high-quality data. In many real-world situations, data distribution in the target domain may differ significantly from that in the source domain, leading to a significant drop in the performance of RL algorithms. Domain adaptation (DA) strategies have been proposed to address this issue by transferring knowledge from a source domain to a target domain. However, there have been no comprehensive and in-depth studies to evaluate these approaches. In this paper we present a comprehensive and systematic study of DA in RL. We first introduce the basic concepts and formulations of DA in RL and then review the existing DA methods used in RL. Our main objective is to fill the existing literature gap regarding DA in RL. To achieve this, we conduct a rigorous evaluation of state-of-the-art DA approaches. We aim to provide comprehensive insights into DA in RL and contribute to advancing knowledge in this field. The existing DA approaches are divided into seven categories based on application domains. The approaches in each category are discussed based on the important data adaptation metrics, and then their key characteristics are described. Finally, challenging issues and future research trends are highlighted to assist researchers in developing innovative improvements.
The combination of terahertz and massive multiple-input multiple-output (MIMO) is promising for meeting the increasing data rate demand of future wireless communication systems, thanks to the significant bandwidth and spatial degrees of freedom. However, unique channel features, such as the near-field beam split effect, make channel estimation particularly challenging in terahertz massive MIMO systems. On the one hand, adopting the conventional angular-domain transformation dictionary designed for low-frequency far-field channels results in degraded channel sparsity and a destroyed sparsity structure in the transformed domain. On the other hand, most existing compressive sensing based channel estimation algorithms cannot achieve high performance and low complexity simultaneously. To alleviate these issues, in this study we first adopt frequency-dependent near-field dictionaries to maintain good channel sparsity and sparsity structure in the transformed domain under the near-field beam split effect. Then, a deep unfolding based wideband terahertz massive MIMO channel estimation algorithm is proposed. In each iteration of the approximate message passing-sparse Bayesian learning algorithm, the optimal update rule is learned by a deep neural network (DNN), whose architecture is customized to effectively exploit the inherent channel patterns. Furthermore, a mixed training method based on novel designs of the DNN architecture and the loss function is developed to effectively train on data from different system configurations. Simulation results validate the superiority of the proposed algorithm in terms of performance, complexity, and robustness.
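For context, a commonly used near-field spherical-wave response for an N-element uniform linear array with antenna spacing d (a standard model in this literature, with notation assumed here rather than taken from the paper) is:

```latex
% Near-field array response at angle theta and distance r; r_n is the
% distance from the n-th element to the scatterer (spherical wavefront).
\mathbf{a}(\theta, r) = \frac{1}{\sqrt{N}}
\left[\, e^{-j\frac{2\pi}{\lambda}(r_0 - r)},\ \ldots,\
         e^{-j\frac{2\pi}{\lambda}(r_{N-1} - r)} \,\right]^{\mathsf{T}},
\qquad
r_n = \sqrt{r^2 + n^2 d^2 - 2\,r\,n\,d\,\sin\theta}
```

Because the phase depends on both the angle θ and the distance r, and (through λ) on frequency, a single far-field angular dictionary cannot sparsify wideband terahertz near-field channels, which motivates the frequency-dependent near-field dictionaries adopted above.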
The construction of an integrated solution for cyberspace defense with dynamic, flexible, and intelligent features is a new idea. Traditional static protection methods cannot respond in time to various network attacks or security demands in an adversarial network environment. To solve this problem and to form a complete integrated solution from “threat discovery” to “decision-making generation,” we propose OntoCSD, an ontology-based security model for an integrated cyberspace defense solution. OntoCSD uses Web ontology language (OWL) to represent the ontology classes and relationships of threat monitoring, decision-making, response, and defense in cyberspace, and uses semantic Web rule language (SWRL) to design the defensive reasoning rules. OntoCSD can discover potential relationships among network attacks, vulnerabilities, the security state, and defense strategies. Further, an artificial intelligence (AI) expert system based on case-based reasoning (CBR) is used to quickly generate a detailed and comprehensive decision-making scheme. Finally, OntoCSD’s consistency and its feasibility for solving the issues in the field of cyberspace defense are validated through Kendall’s coefficient of concordance (W) and four experimental cases in a typical computer network defense (CND) system that reasons over the represented facts and the ontology. OntoCSD supports automatic association and reasoning, and provides an integrated solution framework for cyberspace defense.
This study investigates the distributed fusion estimation of multi-sensor nonlinear systems subject to deception attacks. First, a deception attack with limited energy (DALE) is introduced under the framework of distributed extended Kalman consensus filtering (DEKCF). Next, a hypothesis testing based mechanism is established to detect the abnormal data generated by the DALE in the presence of the error term caused by linearizing the nonlinear system. Once the DALE is detected, a new rectification strategy can be triggered to recalibrate the abnormal data, restoring it to its normal state. Then, an attack-resilient DEKCF (AR-DEKCF) algorithm is proposed, and its fusion estimation errors are shown to satisfy mean square exponential boundedness under appropriate conditions. Finally, the effectiveness of the AR-DEKCF algorithm is confirmed through simulations involving multi-unmanned aerial vehicle (multi-UAV) tracking problems.
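A classical form of such an innovation-based test, given here as a generic illustration rather than the paper's exact detector, flags an attack when the normalized innovation squared exceeds a chi-square threshold:

```latex
% Innovation-based chi-square detector: nu_k is the measurement innovation,
% S_k its covariance, m the measurement dimension, alpha the false-alarm level.
g_k = \boldsymbol{\nu}_k^{\mathsf{T}} S_k^{-1} \boldsymbol{\nu}_k
\ \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\ \chi^2_{m,\,1-\alpha},
\qquad
\boldsymbol{\nu}_k = z_k - h\!\left(\hat{x}_{k|k-1}\right)
```

The linearization error term mentioned above effectively inflates S_k, and must be accounted for when setting the detection threshold.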
As the number of cores in a multicore system increases, so does the communication pressure on the interconnection network. The network-on-chip (NoC) architecture is expected to absorb the expanding communication demands triggered by the ever-increasing number of cores. The communication behavior of the NoC architecture exhibits significant spatial–temporal variation, posing a considerable challenge for NoC reconfiguration. In this paper, we propose a traffic-oriented reconfigurable NoC with augmented inter-port buffer sharing to adapt to varying traffic flows with high flexibility. First, a modified input port is introduced to support buffer sharing between adjacent ports. Specifically, the modified input port can be dynamically reconfigured in response to on-demand traffic. Second, it is shown that centralized output-oriented buffer management works well with the reconfigurable input ports. Finally, this reconfiguration method can be implemented as a low-overhead hardware design without imposing a great burden on the system implementation. The experimental results show that, compared to other proposals, the proposed NoC architecture greatly reduces packet latency and improves saturation throughput without incurring significant area and power overhead.
This paper addresses the problems of finite-time boundedness and guaranteed cost control in switched systems under asynchronous switching. To reduce redundant information transmission and alleviate data congestion at sensor nodes, two schemes are proposed: the event-triggered scheme (ETS) and the round-robin protocol (RRP). These schemes are designed to ensure that the system exhibits good dynamic characteristics while consuming fewer communication resources. In the field of finite-time control, a switching signal is designed using the admissible edge-dependent average dwell time (AED-ADT) method. This method involves slow AED-ADT switching and fast AED-ADT switching, which are respectively suitable for the finite-time stable and finite-time unstable situations of the controlled system within the asynchronous switching interval. By constructing a double-mode dependent Lyapunov function, the finite-time bounded criterion and the controller gains of the switched systems are obtained. Finally, the validity of the proposed results is demonstrated on a buck-boost voltage circuit model.
Human emotions are intricate psychological phenomena that reflect an individual’s current physiological and psychological state. Emotions have a pronounced influence on human behavior, cognition, communication, and decision-making. However, current emotion recognition methods often suffer from suboptimal performance and limited scalability in practical applications. To solve this problem, a novel electroencephalogram (EEG) emotion recognition network named VG-DOCoT is proposed, which is based on depthwise over-parameterized convolution (DO-Conv), a transformer, and a variational autoencoder–generative adversarial network (VAE-GAN) structure. Specifically, during preprocessing, differential entropy (DE) features are extracted from the EEG signals and organized into temporal, spatial, and frequency representations. To enhance the training data, VAE-GAN is employed for data augmentation. A novel convolution module, DO-Conv, is used to replace the traditional convolution layer to improve the network. A transformer structure is introduced into the network framework to reveal the global dependencies within EEG signals. Using the proposed model, binary classification on the DEAP dataset is carried out, achieving an accuracy of 92.52% for arousal and 92.27% for valence. Next, ternary classification is conducted on SEED, distinguishing neutral, positive, and negative emotions; an impressive average prediction accuracy of 93.77% is obtained. The proposed method significantly improves the accuracy of EEG-based emotion recognition.
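The DE feature itself has a simple closed form: for a band-passed EEG segment modeled as Gaussian, DE = ½ ln(2πeσ²). A minimal sketch follows, in which the band edges, filter order, and the DEAP sampling rate of 128 Hz are the only assumptions:

```python
# Differential entropy (DE) of band-passed EEG: under a Gaussian model,
# DE = 0.5 * ln(2 * pi * e * sigma^2), computed per frequency band.
import numpy as np
from scipy.signal import butter, filtfilt

def de_feature(x, fs, band):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xb = filtfilt(b, a, x)                       # band-pass the channel
    return 0.5 * np.log(2 * np.pi * np.e * np.var(xb))

fs = 128.0                                       # DEAP sampling rate (Hz)
x = np.random.default_rng(0).standard_normal(int(10 * fs))  # 10 s of "EEG"
for name, band in {"theta": (4, 8), "alpha": (8, 13),
                   "beta": (13, 30), "gamma": (30, 45)}.items():
    print(name, round(de_feature(x, fs, band), 3))
```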
The Joint Photographic Experts Group (JPEG) format is extensively used for images in many practical applications because of its excellent compression ratio and satisfactory image quality. Given compelling concerns about the invasion of privacy, this paper proposes an effective reversible data hiding scheme for encrypted JPEG bitstreams, to provide security and privacy for both the secret messages and the valuable carriers. First, a format-compatible and file-size-preserving encryption algorithm is applied to encipher the plaintext JPEG image into a noise-like version. Then, we present an effective reversible data hiding scheme for encrypted JPEG bitstreams using adaptive RZL rotation, where the secret messages are concealed in the sequence of RZL pairs. When the authorized user receives the marked encrypted JPEG bitstream, error-free extraction of the secret messages and lossless recovery of the original plaintext JPEG image can be accomplished separately. Extensive experiments show that, compared to some state-of-the-art schemes, the proposed scheme achieves superior embedding capacity while preserving file size and format compatibility.
Reversible data hiding in encrypted images (RDHEI) is essential for safeguarding sensitive information within the encrypted domain. In this study, we propose an intelligent pixel predictor based on a residual group block and a spatial attention module, which shows superior pixel prediction performance compared to existing predictors. Additionally, we introduce an adaptive joint coding method that leverages bit-plane characteristics and intra-block pixel correlations to maximize the embedding space, outperforming single coding approaches. The image owner employs the presented intelligent predictor to predict the original image, followed by encryption through additive secret sharing before conveying the encrypted image to the data hiders. Subsequently, the data hiders encrypt the secret data and embed them within the encrypted image before transmitting the image to the receiver. The receiver can extract the secret data and recover the original image losslessly, with the processes of data extraction and image recovery being separable. Our approach combines an intelligent predictor with additive secret sharing, achieving reversible data embedding and extraction while ensuring security and lossless recovery. Experimental results demonstrate that the predictor performs well and provides a substantial embedding capacity. For the Lena image, the number of prediction errors within the range of [−5, 5] is as high as 242 500, and our predictor achieves an embedding capacity of 4.39 bpp.
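The encryption step named above, additive secret sharing, can be sketched in a few lines; the prediction and adaptive joint coding stages are omitted:

```python
# Minimal sketch of additive secret sharing over pixel values (mod 256):
# one share is uniformly random, the other is the pixel minus that share,
# so each share alone reveals nothing while their sum recovers the image.
import numpy as np

rng = np.random.default_rng(0)
pixels = np.array([[52, 55], [61, 59]], dtype=np.uint8)   # toy image block

share1 = rng.integers(0, 256, size=pixels.shape, dtype=np.uint8)
share2 = ((pixels.astype(np.int16) - share1) % 256).astype(np.uint8)

recovered = (share1.astype(np.int16) + share2) % 256       # lossless recovery
assert np.array_equal(recovered.astype(np.uint8), pixels)
print(recovered)
```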
We introduce a new approach for optimising a cascaded spline adaptive filter (CSAF) to identify unknown nonlinear systems by using meta-heuristic optimisation algorithms (MOAs). The CSAF architecture combines Hammerstein and Wiener systems, where the nonlinear blocks are implemented with a spline network. The MOAs optimise the weights of the spline interpolation function and the linear filter using a suitably weighted cost function, leading to improved filter stability, better steady-state performance, and guaranteed convergence to globally optimal solutions. We investigate two CSAF architectures: the Hammerstein–Wiener SAF (HW-SAF) and the Wiener–Hammerstein SAF (WH-SAF) structures. Traditionally, these architectures have been designed with gradient-based approaches, which converge slowly and produce suboptimal solutions in a Gaussian noise environment. To avert these difficulties, we estimate the design parameters of the CSAF architecture using four independent MOAs: differential evolution (DE), brainstorm optimisation (BSO), the multi-verse optimiser (MVO), and the recently proposed remora optimisation algorithm (ROA). In ROA, the control parameters of the remora factor yield near-globally-optimal parameters with a higher convergence speed. ROA also ensures the most balanced exploration and exploitation phases compared with the DE-, BSO-, and MVO-based design approaches. Finally, identification results on three numerical and industry-specific benchmark systems, including coupled electric drives, a thermic wall, and a continuous stirred tank reactor, are presented to emphasise the effectiveness of the ROA-based CSAF design.
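As a minimal, hedged illustration of MOA-driven filter design, the sketch below uses differential evolution (one of the four optimisers named above) to fit a purely linear FIR block against a toy reference system; the spline nonlinearities and the paper's weighted cost function are omitted:

```python
# Differential evolution fitting FIR weights to match an unknown system's
# output, the same "optimise weights against a cost function" pattern the
# CSAF design uses. The toy system and signal lengths are assumptions.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(500)                   # excitation signal
true_w = np.array([0.8, -0.3, 0.1])            # unknown system (toy)
d = lfilter(true_w, [1.0], x)                  # desired output

def cost(w):                                   # mean-squared-error cost
    return np.mean((d - lfilter(w, [1.0], x)) ** 2)

result = differential_evolution(cost, bounds=[(-1, 1)] * 3, seed=0, tol=1e-8)
print("estimated weights:", np.round(result.x, 3))   # ~ [0.8, -0.3, 0.1]
```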
In edge control systems (ECSs), edge computing demands more local data processing power than traditional industrial programmable logic controllers (PLCs) can provide. Thus, edge intelligent controllers (EICs) have been developed, making their secure and reliable operation crucial. However, because EICs communicate sensitive information with resource-limited terminal devices (TDs), on which traditional asymmetric cryptography is challenging to implement, a low-cost, efficient authentication solution is urgently needed. In this paper, we design a lightweight authentication scheme for ECSs using low-computational-cost hash functions and exclusive OR (XOR) operations; this scheme achieves bidirectional anonymous authentication and key agreement between the EIC and the TDs to protect device privacy. Through security analysis, we demonstrate that the authentication scheme provides the necessary security features and resists major known attacks. Performance analysis and comparisons indicate that the proposed scheme is effective and feasible for deployment in ECSs.
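A minimal sketch of a hash-and-XOR exchange of the kind described above follows; the message layout, field names, and key-derivation details are illustrative assumptions, not the paper's exact protocol:

```python
# Sketch of one hash/XOR authentication step: the TD proves knowledge of a
# pre-shared secret k while its identity travels only in XOR-masked form,
# and both sides derive a fresh session key from the exchanged nonces.
import hashlib, secrets

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k = secrets.token_bytes(32)                 # pre-shared secret (EIC <-> TD)
tid = b"TD-0042".ljust(32, b"\x00")         # terminal device identity (padded)

# TD -> EIC: fresh nonce, XOR-masked identity, and a keyed digest.
n1 = secrets.token_bytes(32)
msg = (n1, xor(tid, H(k, n1)), H(k, tid, n1))

# EIC: unmask the identity, verify the digest, derive a session key.
n1_r, masked, tag = msg
tid_r = xor(masked, H(k, n1_r))
assert tag == H(k, tid_r, n1_r), "authentication failed"
n2 = secrets.token_bytes(32)                # EIC's nonce, returned to the TD
session_key = H(k, tid_r, n1_r, n2)         # both sides can derive this
print("session key:", session_key.hex()[:16], "...")
```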