The printed circuit board (PCB) is the cornerstone of electronic equipment, and its schematic is of paramount importance for system performance and reliability. Given the pervasive use of electronic devices in society, concerns regarding maintenance, safety, backdoors, and other latent issues have garnered significant attention. Automatic schematic generation (ASG), with its distinct capability of generating circuit schematics autonomously, not only plays a pivotal role in electronic design automation (EDA) but also aids in deciphering the working principles of PCB equipment so that these underlying issues can be addressed effectively. However, constrained by the increasingly sophisticated manufacturing processes of PCBs and the inherent legal and ethical controversies surrounding reverse engineering, the development of related technologies faces notable bottlenecks. To break through these technical barriers and advance the field, this paper comprehensively surveys existing ASG work, provides an in-depth description of its core algorithms, namely layout and routing, and analyzes in detail the current challenges facing the application of ASG in PCB reverse engineering. Around these challenges, feasible solutions are discussed, with the aim of promoting research on automatic PCB schematic generation and contributing new strength to EDA and PCB reverse engineering automation.
In this study, osculating caustic developable surfaces and rectifying caustic developable surfaces were obtained by taking space curves and curves on surfaces as base curves and changing the direction of the light source reflected by the mirror surface. It was proved that pseudo-evolute curves are the striction curves (edges of regression) of these surfaces. For developable surfaces based on curves on surfaces, it was shown that osculating caustic developable surfaces are equivalent to rectifying caustic developable surfaces if the curve is a geodesic. Additionally, when the base curve was taken on an arbitrary surface, the caustic surfaces were characterized as flat or normal approximation surfaces, depending on the direction of the light source.
The diversity and complexity of the user population on a campus network increase the risk of computer virus infection during terminal information interactions. Therefore, it is crucial to explore how computer viruses propagate between terminals in such a network. In this study, we establish a novel computer virus spreading model based on the characteristics of the basic network structure and a classical epidemic-spreading dynamics model, adapted to real-world university scenarios. The proposed model contains six groups: susceptible, unisolated latent, isolated latent, infection, recovery, and crash. We analyze the proposed model's basic reproduction number and disease-free equilibrium point. Using real-world university terminal computer virus propagation data, a basic computer virus infection rate, a basic computer virus removal rate, and a security protection strategy deployment rate are proposed to define the conversion probability of each group and capture each group's variation tendency. Furthermore, we analyze the spreading trend of computer viruses in the campus network in terms of the proposed model. We propose specific measures to suppress the spread of computer viruses among terminals, ensuring the safe and stable operation of campus network terminals to the greatest extent.
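As a minimal illustrative sketch, a six-compartment model of this kind can be written as a system of ordinary differential equations and integrated numerically; the transition structure, parameter names, and values below are assumptions for illustration only, not the rates identified in the study.

```python
import numpy as np
from scipy.integrate import odeint

def virus_model(y, t, beta, sigma, q, gamma, delta, mu):
    """Illustrative six-compartment terminal model: susceptible (S), unisolated
    latent (Lu), isolated latent (Li), infected (I), recovered (R), crashed (C).
    The transitions and rates here are assumptions, not those of the paper."""
    S, Lu, Li, I, R, C = y
    N = S + Lu + Li + I + R                       # crashed terminals no longer interact
    dS  = -beta * S * I / N                       # infection via terminal interaction
    dLu = beta * S * I / N - (sigma + q) * Lu     # latent hosts turn infectious or get isolated
    dLi = q * Lu - gamma * Li                     # isolated latent hosts are cleaned
    dI  = sigma * Lu - (delta + mu) * I           # infected hosts are disinfected or crash
    dR  = gamma * Li + delta * I                  # virus removal / protection deployment
    dC  = mu * I                                  # crashed terminals
    return [dS, dLu, dLi, dI, dR, dC]

t = np.linspace(0, 120, 600)                      # days
y0 = [9900, 50, 0, 50, 0, 0]                      # hypothetical initial terminal counts
sol = odeint(virus_model, y0, t, args=(0.4, 0.2, 0.1, 0.3, 0.25, 0.02))
print(sol[-1])                                    # final compartment sizes
```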
With the introduction of underwater bionic camouflage covert communication, conventional communication signal recognition methods can no longer meet the needs of current underwater military confrontations, yet research on bionic communication signal recognition is still far from comprehensive. This paper focuses on underwater communication signals that mimic dolphin whistles through phase-shift modulation and proposes a recognition method based on a convolutional neural network. A time-frequency contour (TFC) masking filtering method is designed, which uses image processing techniques to obtain the TFC mask of a whistle and extracts the whistle from the obtained mask. Spatial diversity combining is used to suppress signal fading in multipath channels. The phase derivative spectrum image is obtained by the Hilbert transform and the continuous wavelet transform, and is then used as the basis for recognition. Finally, the effectiveness of the proposed method is verified by simulations and lake experiments. In the simulations, a recognition accuracy of 90% is achieved at a signal-to-noise ratio (SNR) of 0 dB in multipath channels. In a real underwater communication environment, a recognition accuracy of 81% is achieved at a symbol width of 50 ms and an SNR of 6.36 dB.
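For illustration, the feature extraction step described above (instantaneous phase derivative via the Hilbert transform, followed by a continuous wavelet transform to form an image) might be sketched as follows; the wavelet, scale range, and function names are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import hilbert
import pywt

def phase_derivative_scalogram(x, fs, scales=np.arange(1, 128)):
    """Sketch only: turn a received whistle-like signal into a phase-derivative
    time-frequency image suitable as CNN input. Parameter choices are assumptions."""
    analytic = hilbert(x)                          # analytic signal
    phase = np.unwrap(np.angle(analytic))          # instantaneous phase
    dphase = np.diff(phase) * fs / (2 * np.pi)     # instantaneous frequency (Hz)
    coeffs, _ = pywt.cwt(dphase, scales, 'morl', sampling_period=1 / fs)
    return np.abs(coeffs)                          # scalogram image
```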
The increasing adoption of smart devices and cloud services, coupled with limitations in local computing and storage resources, prompts numerous users to transmit private data to cloud servers for processing. However, transmitting sensitive data in plaintext form raises concerns about users' privacy and security. To address these concerns, this study proposes an efficient privacy-preserving secure neural network inference scheme based on homomorphic encryption and secure multi-party computation, which ensures the privacy of both the user and the cloud server while enabling fast and accurate ciphertext inference. First, we divide the inference process into three stages, namely a merging stage for adjusting the network structure, a preprocessing stage for performing homomorphic computations, and an online stage for floating-point operations on secret shares of private data. Second, we propose an approach for merging network parameters, which reduces the multiplication depth and decreases both ciphertext-plaintext multiplication and addition operations. Finally, we propose a fast convolution algorithm to enhance computational efficiency. Compared with other state-of-the-art methods, our scheme reduces the linear operation time in the online stage by at least 11%, significantly reducing inference time and communication overhead.
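One familiar instance of parameter merging (given here only as an analogy under assumed layer shapes, not necessarily the transformation used by the authors) is folding a batch-normalization layer into the preceding convolution, which removes one multiplication level before encryption:

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding convolution.
    W: (out_ch, in_ch, kh, kw) weights, b: (out_ch,) bias.
    Returns (W', b') such that BN(conv(x, W) + b) == conv(x, W') + b'."""
    scale = gamma / np.sqrt(var + eps)             # per-output-channel scaling
    W_merged = W * scale[:, None, None, None]      # scale every filter
    b_merged = (b - mean) * scale + beta           # absorb the BN shift into the bias
    return W_merged, b_merged

# Hypothetical shapes for illustration
W, b = np.random.randn(16, 3, 3, 3), np.zeros(16)
gamma, beta, mean, var = np.ones(16), np.zeros(16), np.zeros(16), np.ones(16)
W2, b2 = fold_bn_into_conv(W, b, gamma, beta, mean, var)
```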
This paper focuses on the design of event-triggered controllers for the synchronization of delayed Takagi-Sugeno (T-S) fuzzy neural networks (NNs) under deception attacks. The traditional event-triggered mechanism (ETM) determines the next trigger based only on the current sample, resulting in network congestion. Furthermore, such methods suffer from the issues of deception attacks and unmeasurable system states. To enhance system stability, we adaptively detect the occurrence of events over a period of time. In addition, deception attacks are recharacterized to describe more general scenarios. Specifically, the following enhancements are implemented: First, we use a Bernoulli process to model the occurrence of deception attacks, which can describe a variety of attack scenarios as a type of general Markov process. Second, we introduce a sum-based dynamic discrete event-triggered mechanism (SDDETM), which uses a combination of past sampled measurements and internal dynamic variables to determine subsequent triggering events. Finally, we incorporate a dynamic output feedback controller (DOFC) to ensure system stability. The concurrent design of the DOFC and SDDETM parameters is achieved through the cone complementarity linearization (CCL) algorithm. We further perform two simulation examples to validate the effectiveness of the algorithm.
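For orientation, a generic discrete-time dynamic event-triggering rule that accumulates past measurement errors can take the following form; the matrices, scalars, and exact structure are assumed notation for illustration and are not the SDDETM condition derived in the paper.

```latex
% Generic sum-based dynamic discrete event-triggering rule (illustrative only;
% \Omega, \sigma, \theta, \lambda, and the precise SDDETM form are assumptions).
\begin{align*}
t_{k+1} &= \min\Big\{ t > t_k :
  \sum_{i=t_k}^{t} e^{\top}(i)\,\Omega\, e(i)
  > \sigma\, y^{\top}(t_k)\,\Omega\, y(t_k) + \tfrac{1}{\theta}\,\eta(t) \Big\}, \\
\eta(t+1) &= \lambda\,\eta(t) + \sigma\, y^{\top}(t)\,\Omega\, y(t)
  - e^{\top}(t)\,\Omega\, e(t)
\end{align*}
```

Here $e(t)=y(t)-y(t_k)$ is the measurement error accumulated since the last trigger and $\eta(t)\ge 0$ is the internal dynamic variable; triggering on the accumulated error rather than on the current sample alone is what reduces unnecessary transmissions.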
In this paper, we numerically analyze the factors that determine localization precision and resolution in single-emitter localization-based imaging systems. While previous studies have considered a limited set of parameters, our numerical approach incorporates additional parameters of practical reference value, yielding a more comprehensive analysis. We differentiate between the effects of additive and multiplicative noise on localization precision using numerical modeling, take the influence of the sampling frequency into account, and compute the optimal sampling frequency for varying resolution requirements. Leveraging a suite of derived equations, we systematically simulate and quantify how variations in these parameters influence system performance. Furthermore, we provide guidelines for optimizing signal-to-noise ratio (SNR) requirements and pixel size selection based on point spread function (PSF) width in single-emitter localization-based imaging systems. This numerically driven study offers critical insights for the analysis of more complex imaging systems.
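For reference, a widely used approximation for the lateral localization precision of a single emitter (Thompson et al., 2002) ties these quantities together; the expression below is that standard result, not an equation taken from this paper.

```latex
% Standard single-emitter localization precision approximation (Thompson et al., 2002);
% quoted for reference, not reproduced from this paper.
\begin{equation*}
  \langle (\Delta x)^{2} \rangle \approx \frac{s^{2} + a^{2}/12}{N}
  + \frac{8\pi s^{4} b^{2}}{a^{2} N^{2}}
\end{equation*}
```

Here $s$ is the PSF standard deviation, $a$ the pixel size, $N$ the number of detected photons, and $b$ the background noise per pixel; the first term captures photon (shot) noise together with pixelation, and the second captures additive background noise, which is why SNR requirements and pixel size trade off against PSF width.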
Deep learning (DL) accelerators are critical for handling the growing computational demands of modern neural networks. Systolic array (SA)-based accelerators consist of a 2D mesh of processing elements (PEs) working cooperatively to accelerate matrix multiplication. The power efficiency of such accelerators is of primary importance, especially in the edge AI regime. This work presents the SAPER-AI accelerator, an SA accelerator whose power intent is specified in a simplified manner via a unified power format (UPF) representation, requiring negligible microarchitectural optimization effort. The proposed accelerator switches off rows and columns of PEs in a coarse-grained manner, yielding an SA microarchitecture that complies with the varying computational requirements of modern DL workloads. Our analysis demonstrates power efficiency improvements of 10% and 25% for the best-case 32×32 and 64×64 SA designs, respectively. Additionally, the power delay product (PDP) exhibits a progressive improvement of around 6% for larger SA sizes. Moreover, a performance comparison between the MobileNet and ResNet50 models indicates generally better SA performance for the ResNet50 workload. This is because the more regular convolutions in ResNet50 are better suited to SAs, with the performance gap widening as the SA size increases.
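As a toy illustration of the coarse-grained idea (the control logic, names, and granularity here are assumptions, not the SAPER-AI implementation), only the PE rows and columns needed for the current output tile would stay powered on:

```python
def active_pe_mask(sa_rows, sa_cols, tile_rows, tile_cols):
    """Sketch of coarse-grained row/column gating: return a boolean mask marking
    which PEs stay on for a tile_rows x tile_cols workload (assumed policy)."""
    rows_on = min(tile_rows, sa_rows)
    cols_on = min(tile_cols, sa_cols)
    return [[(r < rows_on) and (c < cols_on) for c in range(sa_cols)]
            for r in range(sa_rows)]

# A 64x64 array running a 48x20 tile keeps 48 rows and 20 columns active.
mask = active_pe_mask(64, 64, 48, 20)
print(sum(sum(row) for row in mask), "of", 64 * 64, "PEs powered on")
```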
Traffic flow prediction is crucial for intelligent transportation and aids route planning and navigation. However, existing studies often focus on improving prediction accuracy while neglecting external influences and practical issues such as resource constraints and data sparsity on edge devices. We propose an online transfer learning (OTL) framework with a multi-layer perceptron (MLP)-assisted graph convolutional network (GCN), termed OTL-GM, which consists of two parts: transferring source-domain features to edge devices and using online learning to bridge domain gaps. Experiments on four datasets demonstrate the effectiveness of OTL; compared with models not using OTL, the convergence time of the OTL models is reduced by 24.77% to 95.32%.
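A minimal sketch of the online-transfer idea (a source-pretrained predictor reused on the edge and updated as target data stream in) is given below; the model, loss, and optimizer are placeholders and do not reflect the OTL-GM architecture.

```python
import torch
import torch.nn as nn

def online_adapt(model: nn.Module, stream, lr=1e-3):
    """Continue training a source-pretrained predictor on the target-domain stream
    (placeholder setup; not the OTL-GM architecture itself)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for x, y in stream:                    # mini-batches arrive one at a time
        loss = loss_fn(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()                         # bridges the source-target gap online
    return model
```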
Shape-changing interfaces use physical changes in shape as input or output to convey information and interact with users. Plants are natural shape-changing interfaces, adept at adjusting their shape or modality to adapt to the environment. In this paper, plant-derived natural shape-changing phenomena are systematically analyzed. Several corresponding plant-inspired design strategies for shape-changing interfaces are then summarized in light of recent advances, including material selection and synthesis, fabrication methods, and actuation mechanisms. Practical applications across diverse domains, such as agriculture, healthcare, architecture, and robotics, demonstrate the advantages and potential of plant-inspired shape-changing interfaces. Furthermore, opportunities and challenges are discussed, including design thinking in interdisciplinary tasks, dynamic behavior and control principles, novel materials and processes, matching of application scenarios with functionality, and requirements for large-scale application. This paper is expected to inspire in-depth research on plant-inspired shape-changing interfaces.
This paper discusses the problem of low-elevation target height estimation for multiple-input multiple-output (MIMO) radar in multipath environments. Beamspace processing compresses the data and is well suited to reducing the computational burden of elevation estimation. To obtain the height of the target accurately, we propose a height estimation method based on a beamspace joint alternating iterative (BJAI) algorithm for MIMO radar. The method first converts the reduced-dimensional MIMO radar element-space data into beamspace data and whitens them to improve reliability. Then, a simplified model is used to obtain an initial estimate of the elevation angle, and the reflection coefficient and the target elevation angle are estimated alternately. Finally, the target height is calculated from the obtained elevation information. Simulation results verify that the proposed algorithm has high estimation accuracy and strong robustness.
In this article, the robust control problem of discrete-time Markov jump systems (MJSs) with actuator saturation is investigated via passivity theory. Under the assumption of mode synchronization between the system and the controller, sufficient conditions are established to guarantee that the system is mean-square stable and stochastically passive in the domain of attraction, using the saturation-dependent Lyapunov function approach and the linear matrix inequality (LMI) technique. The coupling between the system variables is removed, which greatly facilitates the design of the synchronization controller. Moreover, the domain of attraction of the considered MJSs is estimated by solving an optimization problem (OP). By degenerating the mode-dependent controller into its mode-independent counterpart, we derive sufficient conditions that ensure system robustness under the mode-independent control strategy, and then systematically summarize these conditions. Finally, the effectiveness of the proposed integrated design methodology is validated through numerical simulations.
Average power analysis plays a crucial role in the design of large-scale digital integrated circuits (ICs). The integration of data-driven machine learning (ML) methods into the electronic design automation (EDA) field has increased the demand for extensive datasets. To address this need, we propose a novel pseudo-circuit generation algorithm rooted in graph topology. The algorithm efficiently produces a large number of power analysis examples by converting randomly generated directed acyclic graphs (DAGs) into gate-level Verilog pseudo-combinational circuit netlists. The subsequent introduction of register units transforms the pseudo-combinational netlists into pseudo-sequential circuit netlists. Hyperparameters control the circuit topology, while appropriate sequential constraints are applied during synthesis to yield a pseudo-circuit dataset. We evaluate our approach using mainstream power analysis software, conducting pre-layout average power tests on the generated circuits, comparing their performance against benchmark datasets, and verifying the results through circuit topology complexity analysis and static timing analysis (STA). The results confirm the effectiveness of the dataset and demonstrate the operational efficiency and robustness of the algorithm, underscoring its research value.
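The core DAG-to-netlist idea can be sketched as follows; the gate set, naming scheme, and single-output module are simplifying assumptions and not the authors' algorithm.

```python
import random

GATES = ["and", "or", "xor", "nand"]  # illustrative 2-input gate set

def random_dag_to_verilog(n_inputs=4, n_gates=8, seed=0):
    """Emit a toy combinational Verilog netlist from a randomly built DAG.
    Each gate draws its fan-in from previously created nets, guaranteeing acyclicity."""
    random.seed(seed)
    nets = [f"in{i}" for i in range(n_inputs)]            # primary inputs
    body = []
    for g in range(n_gates):
        a, b = random.sample(nets, 2)                     # fan-in from earlier nets only
        body.append(f"  {random.choice(GATES)} g{g} (w{g}, {a}, {b});")
        nets.append(f"w{g}")
    inputs = ", ".join(f"in{i}" for i in range(n_inputs))
    header = [f"module pseudo_comb({inputs}, w{n_gates - 1});",
              f"  input {inputs};",
              f"  output w{n_gates - 1};",
              "  wire " + ", ".join(f"w{g}" for g in range(n_gates - 1)) + ";"]
    return "\n".join(header + body + ["endmodule"])

print(random_dag_to_verilog())
```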
Deepfakes pose significant threats to various fields, including politics, journalism, and entertainment. Although many defense methods against deepfakes have been proposed based on either passive detection or proactive defense, few achieve both. To address this issue, we propose a full-defense framework (FDF) based on cross-domain feature fusion and separable watermarks (SepMark) to achieve copyright protection and deepfake detection, combining the ideas of passive detection and proactive defense. The proactive defense module consists of one encoder and two separable decoders: the encoder embeds one watermark into the protected face, and the two decoders separately extract two watermarks with different robustness. The robust watermark can reliably trace the trusted marked face, while the semi-robust watermark is sensitive to malicious distortions, so it disappears after a deepfake manipulation or a watermark removal attack. The passive detection module fuses spatial- and frequency-domain features to further differentiate between deepfake content and watermark removal attacks in the absence of watermarks. The proposed cross-domain feature fusion substitutes the "secondary" channels of the spatial-domain features with the "primary" channels of the frequency-domain features, and the "primary" channels of the spatial-domain features are then used to replace the "secondary" channels of the frequency-domain features. Extensive experiments demonstrate that our approach not only provides proactive defense by using the extracted watermarks, i.e., source tracing and copyright protection, but also achieves passive detection when no watermarks are present, further differentiating between deepfake content and watermark removal attacks and thereby offering a full-defense approach.
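A sketch of the channel substitution step is given below; ranking channels by mean absolute activation and the choice of k are assumptions introduced here for illustration, not the criterion used in the paper.

```python
import torch

def cross_domain_fuse(spatial_feat, freq_feat, k):
    """Illustrative cross-domain channel substitution on B x C x H x W features.
    'Primary'/'secondary' channels are approximated by a mean-absolute-activation
    ranking, which is an assumption for this sketch."""
    def rank(feat):                                   # channel indices, most important first
        return torch.argsort(feat.abs().mean(dim=(0, 2, 3)), descending=True)
    sp_rank, fr_rank = rank(spatial_feat), rank(freq_feat)
    fused_sp, fused_fr = spatial_feat.clone(), freq_feat.clone()
    fused_sp[:, sp_rank[-k:]] = freq_feat[:, fr_rank[:k]]     # spatial secondary <- frequency primary
    fused_fr[:, fr_rank[-k:]] = spatial_feat[:, sp_rank[:k]]  # frequency secondary <- spatial primary
    return fused_sp, fused_fr
```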
The dung beetle optimizer (DBO) is a metaheuristic algorithm with fast convergence and powerful search capabilities, and it has shown excellent performance in solving various optimization problems. However, it tends to fall into local optima and suffers from poor convergence accuracy when dealing with large-scale complex optimization problems. Therefore, we propose an adaptive DBO (ADBO) based on an elastic annealing mechanism to address these issues. First, the convergence factor is adjusted in a nonlinearly decreasing manner to balance the requirements of global exploration and local exploitation, thus improving the convergence speed and search quality. Second, a greedy difference optimization strategy is introduced to increase population diversity, improve the global search capability, and avoid premature convergence. Finally, the elastic annealing mechanism is used to perturb randomly selected individuals, helping the algorithm escape local optima and thereby improving solution quality and algorithm stability. The experimental results on the CEC 2017 and CEC 2022 benchmark function sets and the MCNC benchmark circuits verify the effectiveness, superiority, and universality of ADBO.
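For intuition, one possible nonlinearly decreasing convergence factor is sketched below; the schedule, bounds, and exponent are assumptions for illustration rather than the exact form used in ADBO.

```python
def convergence_factor(t, T, w_max=0.9, w_min=0.2, p=2.0):
    """Nonlinearly decreasing factor (assumed form): stays large in early iterations
    to favor global exploration, then shrinks quickly for local exploitation."""
    return w_min + (w_max - w_min) * (1 - (t / T) ** p)

T = 500
print([round(convergence_factor(t, T), 3) for t in (0, 250, 450, 500)])
# [0.9, 0.725, 0.333, 0.2] -- slow decay early, fast decay late
```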
Quadruped robots can exhibit a range of gaits, each with its own traversability and energy efficiency characteristics. By actively coordinating between gaits in different scenarios, energy-efficient and adaptive locomotion can be achieved. This study investigates the performance of learned energy-efficient policies for quadrupedal gaits under different commands. We propose a training-synthesizing framework that integrates learned gait-conditioned locomotion policies into an efficient multiskill locomotion policy. The resulting control policy achieves low-cost, smooth gait switching and controllable gaits. Our results for the learned multiskill policy demonstrate seamless gait transitions while maintaining energy optimality across all commands.
The advancement of the fifth generation (5G) mobile communication and the Internet of Things (IoT) has facilitated the development of intelligent applications, but has also rendered these networks increasingly complex and vulnerable to various targeted attacks. Numerous anomaly detection (AD) models, particularly those using deep learning technologies, have been proposed to monitor and identify anomalous network events. However, the implementation of these models poses challenges for network operators owing to a lack of expert knowledge of these black-box systems. In this study, we present a comprehensive review of current AD models and methods in the field of communication networks. We categorize these models into four methodological groups based on their underlying principles and structures, with particular emphasis on the role of recent promising large language models (LLMs) in the field of AD. Additionally, we provide a detailed discussion of the models in the following four application areas: network traffic monitoring, networking system log analysis, cloud and edge service provisioning, and IoT security. Based on these application requirements, we examine the current challenges and offer insights into future research directions, including robustness, explainability, and the integration of LLMs for AD.