Jun 2022, Volume 16 Issue 3
    

  • Excellent Young Computer Scientists Forum
  • RESEARCH ARTICLE
    Xiaoheng JIANG, Hao LIU, Li ZHANG, Geyang LI, Mingliang XU, Pei LV, Bing ZHOU

    In recent years, crowd counting has drawn increasing attention due to its widespread applications in computer vision. Most existing methods rely on datasets with scarce labeled images to train networks and are therefore prone to over-fitting. Further, these datasets usually provide only manual annotations of head center positions, which carry limited information. In this paper, we propose to exploit virtual synthetic crowd scenes to improve the performance of the counting network in the real world. Since people masks can be obtained easily in a synthetic dataset, we first learn to distinguish people from the background via a segmentation network trained on the synthetic data. Then we transfer the learned segmentation priors from synthetic data to real-world data. Finally, we train a density estimation network on real-world data by utilizing the obtained people masks. Our experiments on two crowd counting datasets demonstrate the effectiveness of the proposed method.
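
    As a rough illustration of the two-stage pipeline described above (segmentation pre-training on synthetic data, then density estimation on real data guided by the predicted people masks), the following PyTorch-style sketch is hypothetical: the network classes, data loaders, and the mask-guidance loss term are placeholders, not the authors' implementation.

        import torch
        import torch.nn.functional as F

        def train_segmentation(seg_net, synthetic_loader, optimizer, epochs=10):
            # Stage 1: learn people-vs-background masks from synthetic crowd scenes,
            # where pixel-accurate masks come for free with the rendered data.
            for _ in range(epochs):
                for images, masks in synthetic_loader:
                    logits = seg_net(images)                       # (B, 1, H, W)
                    loss = F.binary_cross_entropy_with_logits(logits, masks)
                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()

        def train_density(density_net, seg_net, real_loader, optimizer, epochs=10, lam=0.1):
            # Stage 2: train density estimation on real images, using the transferred
            # segmentation prior; the auxiliary term below (suppressing density on
            # background pixels) is a plausible reading, not the authors' exact loss.
            seg_net.eval()
            for _ in range(epochs):
                for images, gt_density in real_loader:             # gt_density from head-center dots
                    with torch.no_grad():
                        people_mask = torch.sigmoid(seg_net(images))
                    pred_density = density_net(images)
                    count_loss = F.mse_loss(pred_density, gt_density)
                    mask_loss = (pred_density * (1 - people_mask)).mean()
                    loss = count_loss + lam * mask_loss
                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()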

  • Architecture
  • RESEARCH ARTICLE
    Dunbo ZHANG, Chaoyang JIA, Li SHEN

    GPUs are widely used in modern high-performance computing systems. To reduce the burden on GPU programmers, operating systems and GPU hardware provide extensive support for shared virtual memory, which enables the GPU and CPU to share the same virtual address space. Unfortunately, the current SIMT execution model of GPUs poses great challenges for virtual-to-physical address translation on the GPU side, mainly because a huge number of virtual addresses are generated simultaneously and these addresses exhibit poor locality. The resulting excessive TLB accesses increase the TLB miss ratio. As an attractive solution, the Page Walk Cache (PWC) has received wide attention for its capability of reducing the memory accesses caused by TLB misses.

    However, the current PWC mechanism suffers from heavy redundancy, which significantly limits its efficiency. In this paper, we first investigate the causes of this issue by evaluating the performance of PWC with typical GPU benchmarks. We find that the repeated L4 and L3 indices of virtual addresses increase the redundancy in PWC, while the low locality of L2 indices leads to a low hit ratio in PWC. Based on these observations, we propose a new PWC structure, namely Compressed Page Walk Cache (CPWC), to resolve the redundancy burden of the current PWC. CPWC can be organized in either direct-mapped or set-associative mode. Experimental results show that CPWC holds 3 times more page table entries than TPC, improves the L2 index hit ratio by 38.3% over PWC, and reduces the memory accesses to page tables by 26.9%. The average number of memory accesses caused by each TLB miss is reduced to 1.13. Overall, the average IPC improves by 25.3%.
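
    To see why upper-level indices repeat, it helps to look at how a virtual address decomposes into page-table indices. The sketch below assumes the conventional x86-64-style 4-level layout (48-bit addresses, 9 bits per level, 4 KB pages); the GPU MMU studied in the paper may differ in details.

        def split_indices(vaddr):
            l4 = (vaddr >> 39) & 0x1FF   # top-level index: identical over huge address ranges
            l3 = (vaddr >> 30) & 0x1FF   # repeats across 1 GB regions
            l2 = (vaddr >> 21) & 0x1FF   # repeats across 2 MB regions
            l1 = (vaddr >> 12) & 0x1FF
            offset = vaddr & 0xFFF
            return l4, l3, l2, l1, offset

        # Thousands of SIMT threads touching nearby pages share the same L4/L3 (and often
        # L2) indices, so caching each partial walk separately stores the same upper-level
        # entries many times -- the redundancy that a compressed PWC aims to remove.
        if __name__ == "__main__":
            base = 0x7F1200000000
            for t in range(4):
                print(split_indices(base + t * 4096))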

  • RESEARCH ARTICLE
    Xin YOU, Hailong YANG, Zhongzhi LUAN, Depei QIAN

    Cryo-electron microscopy (cryo-EM) is one of the most powerful technologies available today for structural biology. RELION (Regularized Likelihood Optimization) implements a Bayesian algorithm for cryo-EM structure determination and is one of the most widely used software packages in this field. Many researchers have devoted effort to improving the performance of RELION to keep up with the ever-increasing volume of datasets. In this paper, we focus on performance analysis of the most time-consuming computation steps in RELION and identify their performance bottlenecks for targeted optimizations. We propose several performance optimization strategies to improve the overall performance of RELION, including optimization of the expectation step, parallelization of the maximization step, accelerated computation of symmetries, and memory affinity optimization. The experimental results show that our proposed optimizations achieve significant speedups of RELION across representative datasets. In addition, we perform roofline model analysis to understand the effectiveness of our optimizations.
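
    For reference, the roofline model mentioned above bounds the attainable performance of a kernel by the machine's peak compute rate and by memory bandwidth times the kernel's arithmetic intensity; the symbols below are generic, not the paper's measured values:

        P_{\text{attainable}}(I) \;=\; \min\bigl(P_{\text{peak}},\; B_{\text{mem}} \cdot I\bigr),
        \qquad I \;=\; \frac{\text{floating-point operations}}{\text{bytes transferred to/from memory}} .

    A kernel whose intensity lies below the ridge point P_peak / B_mem is memory-bound, which is the regime that memory-affinity optimizations target; above the ridge point it is compute-bound.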

  • RESEARCH ARTICLE
    Changpeng ZHU, Bo HAN, Yinliang ZHAO

    Container-based virtualization techniques are becoming an alternative to traditional virtual machines due to their lower overhead and better scaling. As one of the most widely used open-source container orchestration systems, Kubernetes provides a built-in mechanism, the horizontal pod autoscaler (HPA), for dynamic resource provisioning. Because by default it scales pods based only on CPU utilization, a single performance metric, HPA may create more pods than are actually needed. Through extensive measurements of a containerized n-tier application benchmark, RUBBoS, we find that excessive pods consume more CPU and memory and can even deteriorate application response times due to interference. Furthermore, a Kubernetes service does not balance incoming requests between old pods and new pods created by HPA, due to stateful HTTP. In this paper, we propose a bi-metric approach to scaling pods that takes into account both CPU utilization and the utilization of the thread pool, an important soft resource in Httpd and Tomcat. Our approach collects the CPU and memory utilization of pods. Meanwhile, it uses ELBA, a milli-bottleneck detector, to calculate the queue lengths of Httpd and Tomcat pods and then estimate the utilization of their thread pools. Based on the utilization of both CPU and thread pools, our approach scales up fewer replicas of Httpd and Tomcat pods, contributing to a reduction of hardware resource utilization. At the same time, our approach leverages the preStop hook along with liveness and readiness probes to relieve the load imbalance between old Tomcat pods and new ones. On the containerized RUBBoS, our experimental results show that the proposed approach can not only reduce the usage of CPU and memory by as much as 14% and 24%, respectively, compared with HPA, but also relieve the load imbalance and reduce the average response time of requests by as much as 80%. Our results also demonstrate that it is better to scale pods by multiple metrics rather than a single one.
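
    A minimal sketch of one plausible bi-metric scale-out rule, building on Kubernetes' documented HPA formula desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric); the combination rule, metric names, and target values below are assumptions for illustration, not the paper's exact policy.

        import math

        def desired_replicas(current, cpu_util, cpu_target, pool_util, pool_target):
            # One plausible bi-metric rule: scale out only when both CPU and thread-pool
            # utilization exceed their targets, using the smaller demand ratio. This
            # avoids adding pods on a CPU spike alone, unlike the CPU-only HPA rule.
            cpu_ratio = cpu_util / cpu_target
            pool_ratio = pool_util / pool_target          # soft-resource (thread pool) pressure
            if cpu_ratio > 1.0 and pool_ratio > 1.0:
                return math.ceil(current * min(cpu_ratio, pool_ratio))
            return current

        # CPU is hot (110% of target) but the thread pool still has headroom -> keep 4 pods,
        # whereas the CPU-only rule ceil(4 * 1.1) = 5 would add a pod.
        print(desired_replicas(4, 0.55, 0.50, 0.60, 0.80))   # 4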

  • Software
  • RESEARCH ARTICLE
    Pengfei GAO, Yongjie XU, Fu SONG, Taolue CHEN

    JavaScript has become one of the most widely used languages for Web development. Its dynamic and event-driven features make it challenging to ensure the correctness of Web applications written in JavaScript. A variety of dynamic analysis techniques have been proposed, which are, however, limited in either coverage or scalability. In this paper, we propose a simple yet effective model-based automated testing approach that achieves high code coverage within the time budget by testing with longer event sequences. We implement our approach as an open-source tool, LJS, and perform extensive experiments on 21 publicly available benchmarks. On average, LJS achieves 86.5% line coverage in 10 minutes. Compared with JSDEP, a state-of-the-art breadth-first-search-based automated testing tool enriched with partial order reduction, the coverage of LJS is 11%–19% higher on real-world large Web applications. Our empirical findings support that properly chosen longer test sequences can achieve higher code coverage in JavaScript Web application testing.

  • Artificial Intelligence
  • RESEARCH ARTICLE
    Junsong FAN, Yuxi WANG, He GUAN, Chunfeng SONG, Zhaoxiang ZHANG

    Domain adaptation (DA) for semantic segmentation aims to reduce the annotation burden of the dense pixel-level prediction task. It focuses on tackling the domain gap problem and manages to transfer knowledge learned from abundant source data to new target scenes. Although recent works have achieved rapid progress in this field, they still underperform fully supervised models by a large margin due to the absence of any available hints in the target domain. Considering that few-shot labels are cheap to obtain in practical applications, we attempt to leverage them to mitigate the performance gap between DA and fully supervised methods. The key to this problem is to leverage the few-shot labels to learn robust domain-invariant predictions effectively. To this end, we first design a data perturbation strategy to enhance the robustness of the representations. Furthermore, a transferable prototype module is proposed to bridge the domain gap based on the source data and the few-shot targets. By means of these proposed methods, our approach can perform on par with fully supervised models to some extent. We conduct extensive experiments to demonstrate the effectiveness of the proposed methods and report state-of-the-art performance on two popular DA tasks, i.e., GTA5 to Cityscapes and SYNTHIA to Cityscapes.
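
    The abstract does not spell out the transferable prototype module, so the sketch below only shows the common prototype construction used in segmentation (masked average features per class, with a cosine-similarity classifier); tensor shapes and the classifier are assumptions, not the paper's exact design.

        import torch
        import torch.nn.functional as F

        def class_prototypes(features, labels, num_classes):
            # features: (B, C, H, W); labels: (B, H, W) with class ids from source or
            # few-shot target annotations. Returns (num_classes, C) prototypes.
            B, C, H, W = features.shape
            feats = features.permute(0, 2, 3, 1).reshape(-1, C)
            labs = labels.reshape(-1)
            protos = torch.zeros(num_classes, C, device=features.device)
            for k in range(num_classes):
                mask = labs == k
                if mask.any():
                    protos[k] = feats[mask].mean(dim=0)
            return protos

        def prototype_logits(features, protos):
            # Pixel-wise cosine similarity to each prototype, usable as segmentation logits.
            f = F.normalize(features, dim=1)              # (B, C, H, W)
            p = F.normalize(protos, dim=1)                # (K, C)
            return torch.einsum('bchw,kc->bkhw', f, p)    # (B, K, H, W)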

  • LETTER
    Zhiqiang YU, Yantuan XIAN, Zhengtao YU, Yuxin HUANG, Junjun GUO
  • LETTER
    Yu HU, Derong SHEN, Tiezheng NIE, Yue KOU, Ge YU
  • RESEARCH ARTICLE
    Zhenghao TAN, Songcan CHEN

    Deep learning serves as a powerful paradigm in many real-world applications; however, its mechanism remains much of a mystery. To gain insights into nonlinear hierarchical deep networks, we theoretically describe the coupled nonlinear learning dynamics of two-layer neural networks with quadratic activations, extending existing results for the linear case. The quadratic activation, although rarely used in practice, shares convexity with the widely used ReLU activation, thus producing similar dynamics. In this work, we focus on a canonical regression problem under the standard normal distribution, use a coupled dynamical system to mimic gradient descent in the continuous-time limit, and then use the high-order moment tensors of the normal distribution to simplify the resulting ordinary differential equations. The simplified system yields unexpected fixed points. The existence of these non-global-optimal stable points implies the existence of saddle points in the loss surface of quadratic networks. Our analysis shows that there are conserved quantities during the training of quadratic networks; such quantities might result in a failed learning process if the network is initialized improperly. Finally, we compare the numerical learning curves with the theoretical ones, revealing the two alternately appearing stages of the learning process.
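
    To fix ideas, a generic two-layer quadratic network and its continuous-time gradient flow can be written as follows; this is a standard setup consistent with the description above, not necessarily the paper's exact parameterization:

        f(x; W, a) \;=\; \sum_{i=1}^{m} a_i \,(w_i^{\top} x)^2, \qquad
        L(W, a) \;=\; \tfrac{1}{2}\, \mathbb{E}_{x \sim \mathcal{N}(0, I_n)}\!\left[\bigl(f(x; W, a) - y(x)\bigr)^{2}\right],

        \dot{w}_i \;=\; -\,\nabla_{w_i} L(W, a), \qquad \dot{a}_i \;=\; -\,\nabla_{a_i} L(W, a).

    Because x is standard normal, the expectations involve only its known low-order moments, so by Isserlis' theorem they reduce to polynomials in the coefficients a_i and the inner products w_i^{\top} w_j; this is the kind of simplification that turns the gradient flow into a closed system of ordinary differential equations.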

  • RESEARCH ARTICLE
    Song SUN, Bo ZHAO, Muhammad MATEEN, Xin CHEN, Junhao WEN

    Recent studies have shown remarkable success in the face image generation task. However, existing approaches offer limited diversity, quality, and controllability in their generated results. To address these issues, we propose a novel end-to-end learning framework that generates diverse, realistic, and controllable face images guided by face masks. The face mask provides a good geometric constraint for a face by specifying the size and location of different facial components, such as the eyes, nose, and mouth. The framework consists of four components: a style encoder, a style decoder, a generator, and a discriminator. The style encoder generates a style code that represents the style of the resulting face; the generator translates the input face mask into a real face based on the style code; the style decoder learns to reconstruct the style code from the generated face image; and the discriminator classifies an input face image as real or fake. With the style code, the proposed model can generate different face images matching the input face mask, and by manipulating the face mask, we can finely control the generated face image. We empirically demonstrate the effectiveness of our approach on the mask-guided face image synthesis task.

  • LETTER
    Hao WANG, Liyan DONG, Minghui SUN
  • Theoretical Computer Science
  • LETTER
    Xiaoling MO, Daoyun XU, Kai YAN, Zaijun ZHANG
  • RESEARCH ARTICLE
    Elisabetta DE MARIA, Abdorrahim BAHRAMI, Thibaud L’YVONNET, Amy FELTY, Daniel GAFFÉ, Annie RESSOUCHE, Franck GRAMMONT

    Having a formal model of neural networks can greatly help in understanding and verifying their properties, behavior, and response to external factors such as disease and medicine. In this paper, we adopt a formal model to represent neurons, some neuronal graphs, and their composition. Some specific neuronal graphs are known for having biologically relevant structures and behaviors and we call them archetypes. These archetypes are supposed to be the basis of typical instances of neuronal information processing. In this paper we study six fundamental archetypes (simple series, series with multiple outputs, parallel composition, negative loop, inhibition of a behavior, and contralateral inhibition), and we consider two ways to couple two archetypes: (i) connecting the output(s) of the first archetype to the input(s) of the second archetype and (ii) nesting the first archetype within the second one. We report and compare two key approaches to the formal modeling and verification of the proposed neuronal archetypes and some selected couplings. The first approach exploits the synchronous programming language Lustre to encode archetypes and their couplings, and to express properties concerning their dynamic behavior. These properties are verified thanks to the use of model checkers. The second approach relies on a theorem prover, the Coq Proof Assistant, to prove dynamic properties of neurons and archetypes.
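
    For intuition about what such archetypes compose, the sketch below simulates a generic discrete-time leaky integrate-and-fire neuron and chains two of them as in a "simple series" archetype; the paper encodes such neurons in Lustre and Coq, and its exact neuron model and parameters are not reproduced here.

        def lif_neuron(weights, threshold, leak, spike_trains):
            # weights: synaptic weights; spike_trains: 0/1 input sequences (one per synapse,
            # all of equal length). Returns the output 0/1 spike sequence.
            potential = 0.0
            output = []
            for t in range(len(spike_trains[0])):
                potential = leak * potential + sum(w * s[t] for w, s in zip(weights, spike_trains))
                if potential >= threshold:
                    output.append(1)
                    potential = 0.0          # reset after firing
                else:
                    output.append(0)
            return output

        # A "simple series" archetype chains neurons: the output spikes of one neuron
        # become the sole input of the next.
        out1 = lif_neuron([1.0], threshold=1.0, leak=0.5, spike_trains=[[1, 0, 1, 1, 0, 1]])
        out2 = lif_neuron([1.0], threshold=1.0, leak=0.5, spike_trains=[out1])
        print(out1, out2)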

  • RESEARCH ARTICLE
    Yurong JI, Jinmeng LIU, Yujie BAI, Shufei WU

    Let $G$ be a connected simple graph with vertex set $V(G)$ and edge set $E(G)$. A binary vertex labeling $f: V(G) \to \mathbb{Z}_2$ is said to be friendly if the numbers of vertices with different labels differ by at most one. Each friendly vertex labeling $f$ induces an edge labeling $f^{*}: E(G) \to \mathbb{Z}_2$ defined by $f^{*}(xy) = f(x) + f(y)$ for each $xy \in E(G)$. Let $e_f(i) = |\{e \in E(G) : f^{*}(e) = i\}|$. The full friendly index set of $G$, denoted by $\mathrm{FFI}(G)$, is the set $\{e_f(1) - e_f(0) : f \text{ is friendly}\}$. In this paper, we determine the full friendly index set of a family of cycle union graphs which are edge subdivisions of $P_2 \times P_n$.
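
    As a small worked example of these definitions (the cycle $C_4$ is chosen purely for illustration; it is not one of the graphs studied in the paper): for $C_4$ with vertices $v_1, v_2, v_3, v_4$, a friendly labeling assigns the label 0 to exactly two vertices. If the two 0-labeled vertices are adjacent, say $f(v_1)=f(v_2)=0$ and $f(v_3)=f(v_4)=1$, then $f^{*}(v_1v_2)=f^{*}(v_3v_4)=0$ and $f^{*}(v_2v_3)=f^{*}(v_4v_1)=1$, so $e_f(1)-e_f(0)=2-2=0$. If the labels alternate, $f=(0,1,0,1)$, every edge receives label 1 and $e_f(1)-e_f(0)=4-0=4$. Hence $\mathrm{FFI}(C_4)=\{0,4\}$.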

  • Networks and Communication
  • LETTER
    Lei NIE, Bo LIU, Peng LI, Heng HE, Libing WU
  • Information Systems
  • LETTER
    Xingshen SONG, Jinsheng DENG, Fengcai QIAO, Kun JIANG
  • RESEARCH ARTICLE
    Xiao PAN, Lei WU, Fenjie LONG, Ang MA

    With the increasing popularity of mobile devices and the flourishing of social networks, a large amount of trajectory data has been accumulated. Trajectory data contain a wealth of information, including spatial positions, time series, and other external descriptive attributes (e.g., travel mode, activities, etc.). Trajectory recommendation is especially important for helping users quickly find routes that meet their travel needs. Most existing trajectory recommendation works return the same route to different users given an origin and a destination. However, users' behavior preferences can be learned from their historical multi-attribute trajectories. In this paper, we propose two novel personalized trajectory recommendation methods, i.e., user behavior probability learning based on matrix decomposition and user behavior probability learning based on kernel density estimation. We transform the route recommendation problem into a shortest path problem by employing a Bayesian probability model. Given the user input (i.e., an origin and a destination), the trajectory query is automatically performed on a behavior graph weighted by the learned behavior probabilities. Finally, a series of experiments on two real datasets validate the effectiveness of our proposed methods.
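
    The reduction to a shortest path problem can be sketched generically: if each edge of the behavior graph carries a learned transition probability, the most probable route maximizes a product of probabilities, which equals the shortest path under weights -log p. The probability model, graph construction, and example data below are illustrative assumptions, not the paper's.

        import heapq
        import math

        def most_probable_route(graph, origin, destination):
            # graph: dict mapping node -> list of (neighbor, probability) pairs.
            dist = {origin: 0.0}
            prev = {}
            heap = [(0.0, origin)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == destination:
                    break
                if d > dist.get(u, math.inf):
                    continue
                for v, p in graph.get(u, []):
                    nd = d + (-math.log(p))          # -log turns products into sums
                    if nd < dist.get(v, math.inf):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [destination], destination
            while node != origin:                     # reconstruct the route
                node = prev[node]
                path.append(node)
            return list(reversed(path))

        g = {'A': [('B', 0.9), ('C', 0.5)], 'B': [('D', 0.8)], 'C': [('D', 0.9)]}
        print(most_probable_route(g, 'A', 'D'))       # ['A', 'B', 'D']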

  • RESEARCH ARTICLE
    Yunhao SUN, Guanyu LI, Jingjing DU, Bo NING, Heng CHEN

    The problem of subgraph matching is a fundamental issue in graph search and is NP-complete. Recently, subgraph matching has become a popular research topic in the field of knowledge graph analysis, with a wide range of applications including question answering and semantic search. In this paper, we study the problem of subgraph matching on knowledge graphs. Specifically, given a query graph q and a data graph G, the problem of subgraph matching is to find all possible subgraph isomorphic mappings of q on G. A knowledge graph is formed as a directed labeled multi-graph that may have multiple edges between a pair of vertices, and it has denser semantic and structural features than a general graph. To accelerate subgraph matching on knowledge graphs, we propose a novel subgraph matching algorithm based on a subgraph index for knowledge graphs, called FGqT-Match. The algorithm consists of two key designs. One is a subgraph index of matching-driven flow graph (FGqT), which reduces redundant calculations in advance. The other is a multi-label weight matrix, which evaluates a near-optimal matching tree to minimize the intermediate candidates. With the aid of these two key designs, all subgraph isomorphic mappings are obtained by traversing FGqT alone. Extensive empirical studies on real and synthetic graphs demonstrate that our techniques outperform state-of-the-art algorithms.

  • LETTER
    Shuai WANG, Chunyi CHEN, Guijie ZHANG
  • Image and Graphics
  • RESEARCH ARTICLE
    Shiwei CHENG, Jialing WANG, Xiaoquan SHEN, Yijian CHEN, Anind DEY

    Code review is intended to find bugs in early development phases, improving code quality for later integration and testing. However, due to a lack of experience with algorithm design or software development, individual novice programmers face challenges when reviewing code. In this paper, we utilize collaborative eye tracking to record gaze data from multiple reviewers and share gaze visualizations among them during the code review process. The visualizations, such as borders highlighting the currently reviewed code lines and transition lines connecting related reviewed code lines, reveal the reviewers' visual attention to program functions, which can facilitate understanding and bug tracing. This can help novice reviewers confirm potential bugs or avoid repeatedly reviewing the same code, and potentially even help improve their reviewing skills. We built a prototype system and conducted a user study with paired reviewers. The results showed that the shared real-time visualization allowed the reviewers to find bugs more efficiently.

  • RESEARCH ARTICLE
    Yujin CHAI, Yanlin WENG, Lvdi WANG, Kun ZHOU

    In this paper, we present an efficient algorithm that generates lip-synchronized facial animation from a given vocal audio clip. By combining a spectral-dimensional bidirectional long short-term memory network with a temporal attention mechanism, we design a lightweight speech encoder that learns useful and robust vocal features from the input audio without resorting to pre-trained speech recognition modules or large training data. To learn subject-independent facial motion, we use deformation gradients as the internal representation, which allows nuanced local motions to be synthesized better than with vertex offsets. Compared with state-of-the-art automatic-speech-recognition-based methods, our model is much smaller but achieves similar robustness and quality most of the time, and noticeably better results in certain challenging cases.

  • Information Security
  • RESEARCH ARTICLE
    Jingya FENG, Lang LI

    In this paper, we propose a new lightweight block cipher called SCENERY. SCENERY is designed to be efficient on both hardware and software platforms. It is a 64-bit block cipher supporting 80-bit keys, and its data processing consists of 28 rounds. The round function of SCENERY consists of eight 4 × 4 S-boxes in parallel and a 32 × 32 binary matrix, and SCENERY can be implemented with some basic logic instructions. The hardware implementation of SCENERY requires only 1438 GE based on 0.18 μm CMOS technology, and the software implementation of encrypting or decrypting a block takes approximately 1516 clock cycles on 8-bit microcontrollers and 364 clock cycles on 64-bit processors. Compared with other encryption algorithms, the performance of SCENERY is well balanced for both hardware and software. According to our security analyses, SCENERY achieves a sufficient security margin against known attacks, such as differential cryptanalysis, linear cryptanalysis, impossible differential cryptanalysis, and related-key attacks.
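
    To make the round structure concrete, here is a toy substitution-permutation step of the same shape: eight 4-bit S-boxes applied in parallel to a 32-bit word, followed by multiplication with a 32 × 32 binary matrix over GF(2). The S-box and matrix below are arbitrary placeholders (not SCENERY's actual tables), and the key addition and the overall 64-bit block structure are omitted.

        # Placeholder 4-bit S-box (a permutation of 0..15 chosen only for illustration).
        SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

        # Placeholder 32x32 binary matrix, row i given as a 32-bit mask (a rotation-XOR
        # pattern chosen only so the example runs).
        MATRIX = [((1 << i) | (1 << ((i + 7) % 32)) | (1 << ((i + 19) % 32))) & 0xFFFFFFFF
                  for i in range(32)]

        def sbox_layer(word):
            out = 0
            for nibble in range(8):                       # eight 4-bit S-boxes in parallel
                val = (word >> (4 * nibble)) & 0xF
                out |= SBOX[val] << (4 * nibble)
            return out

        def matrix_layer(word):
            out = 0
            for i, row in enumerate(MATRIX):
                bit = bin(word & row).count("1") & 1      # GF(2) dot product of row and state
                out |= bit << i
            return out

        def round_transform(word32):
            return matrix_layer(sbox_layer(word32))

        print(hex(round_transform(0x0123ABCD)))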

  • RESEARCH ARTICLE
    Qiao XUE, Youwen ZHU, Jian WANG

    The fast development of the Internet and mobile devices has resulted in the crowdsensing business model, where individuals (users) are willing to contribute their data to help an institution (the data collector) analyze and release useful information. However, revealing personal data brings huge privacy threats to users, which impedes the wide application of the crowdsensing model. To settle this problem, the notion of local differential privacy (LDP) was proposed. Subsequently, to respond to users' varied privacy preferences, researchers proposed a new model, personalized local differential privacy (PLDP), which allows users to specify their own privacy parameters. In this paper, we focus on the basic task of calculating the mean value of a single numeric attribute under PLDP. Based on previous schemes for mean estimation under LDP, we employ the PLDP model to design novel schemes (LAP, DCP, PWP) that provide personalized privacy for each user. We then theoretically analyze the worst-case variance of the three proposed schemes and conduct experiments on synthetic and real datasets to evaluate their performance. The theoretical and experimental results show the optimality of PWP in the low privacy regime and a slight advantage of DCP in the high privacy regime.
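
    A minimal sketch of Laplace-based mean estimation with per-user privacy budgets, in the spirit of a personalized-LDP baseline; the paper's LAP, DCP, and PWP schemes may differ in their details. Each value is assumed to lie in [-1, 1], so the sensitivity is 2.

        import random

        def perturb(value, epsilon):
            # User side: add Laplace(2/epsilon) noise, which satisfies epsilon-LDP for
            # values in [-1, 1]. A Laplace sample is the difference of two i.i.d. exponentials.
            scale = 2.0 / epsilon
            noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
            return value + noise

        def estimate_mean(reports):
            # Collector side: the plain average of unbiased noisy reports is an unbiased
            # (though not necessarily minimum-variance) estimate of the true mean.
            return sum(reports) / len(reports)

        random.seed(0)
        values = [random.uniform(-1, 1) for _ in range(10000)]
        budgets = [random.choice([0.5, 1.0, 2.0]) for _ in range(10000)]   # personalized epsilons
        reports = [perturb(v, eps) for v, eps in zip(values, budgets)]
        print(sum(values) / len(values), estimate_mean(reports))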

  • RESEARCH ARTICLE
    Haixia ZHAO, Yongzhuang WEI

    Highly nonlinear resilient functions play a crucial role in nonlinear combiners, which are typically hardware-oriented stream ciphers. During the past three decades, the main idea for constructing highly nonlinear resilient functions has been to concatenate a large number of affine subfunctions. However, resilient functions built this way, when used as core components of ciphers, often suffer from guess-and-determine attacks or algebraic attacks, since such n-variable nonlinear Boolean functions easily give rise to partial linear relations once at most n/2 of their variables are fixed. How to design highly nonlinear resilient functions (S-boxes) without concatenating a large number of n/2-variable affine subfunctions is therefore an important task. In this article, a new construction of highly nonlinear resilient functions is proposed. These functions consist of two classes of subfunctions. More specifically, the first class (the nonlinear part) contains bent functions on 2k variables together with affine subfunctions on n/2 − k variables obtained from [n/2 − k, m, d] disjoint linear codes. The second class (the linear part) consists of linear subfunctions on n/2 variables obtained from [n/2, m, d] disjoint linear codes. We show that these resilient functions have high nonlinearity and high algebraic degree. In particular, unlike previous well-known resilient S-boxes, the new S-boxes cannot be directly decomposed into affine subfunctions on n/2 variables by fixing at most n/2 variables. This means that S-boxes (vectorial Boolean functions) using these resilient functions as component functions have more favourable cryptographic properties against guess-and-determine attacks or algebraic attacks.
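
    For reference, the standard definitions behind these claims (recalled here in generic form, not specific to the paper's construction): the Walsh transform of an n-variable Boolean function f is

        W_f(a) \;=\; \sum_{x \in \mathbb{F}_2^{n}} (-1)^{f(x) \oplus a \cdot x},

    and f is m-resilient if and only if $W_f(a) = 0$ for every $a$ of Hamming weight at most $m$ (the Xiao–Massey characterization), while its nonlinearity is

        NL(f) \;=\; 2^{\,n-1} - \tfrac{1}{2} \max_{a \in \mathbb{F}_2^{n}} \lvert W_f(a) \rvert .

    High nonlinearity thus means every Walsh coefficient is small in absolute value, while high resiliency means the low-weight coefficients vanish entirely; the construction above aims to achieve both at once without the usual n/2-variable affine decomposition.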