In the design of dependable software for embedded and real-time operating systems, time analysis is a crucial but extremely difficult issue, and the challenge is exacerbated by the randomness and nondeterminism of interrupt handling behaviors. Research into a theory that integrates interrupt behaviors with time analysis is therefore both important and challenging. In this paper, we present a programming language for describing programs with interrupts that comprises two essential parts: a main program and interrupt handling programs. We also explore a timed operational semantics and a denotational semantics to specify the meaning of our language. Furthermore, we provide a strategy for deriving a denotational semantics from the timed operational semantics, and we demonstrate the soundness of our operational semantics by showing the consistency between the derived denotational semantics and the original one.
Software testing is one of the most crucial aspects of assuring that developed software meets prescribed quality standards, and the testing process consumes at least 50% of total software development cost. Designing optimal and effective test data is an important and challenging activity due to the nonlinear structure of software; moreover, the type and scope of the test cases determine the quality of the test data. To address this issue, software testing tools should employ intelligence-based soft computing techniques such as particle swarm optimization (PSO) and genetic algorithms (GA) to generate smart and efficient test data automatically. This paper presents a hybrid PSO- and GA-based heuristic for the automatic generation of test suites. We describe the design and implementation of the proposed strategy and evaluate our model through experiments with ten container classes from the Java standard library, analyzing the algorithm statistically with branch coverage as the test adequacy criterion. Performance is measured as percentage coverage per unit time and the percentage of faults detected by the generated test data. We compare our work with heuristics based on GA, on PSO, on existing GA/PSO hybrids, and on a memetic algorithm. The results show that our approach generates test cases efficiently.
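As a rough illustration of how such a PSO/GA hybrid can be organized, the sketch below evolves real-valued test-input vectors under a caller-supplied branch-coverage fitness function; all names and parameter values are illustrative assumptions, not the authors' implementation:

```python
import random

def hybrid_pso_ga(fitness, dim, pop_size=20, iters=100,
                  w=0.7, c1=1.5, c2=1.5, mut_rate=0.1):
    """Evolve test-input vectors toward higher branch coverage.

    fitness: assumed to map a candidate input vector to a coverage
    score in [0, 1], e.g. measured by an instrumented test harness.
    """
    pos = [[random.uniform(-100, 100) for _ in range(dim)] for _ in range(pop_size)]
    vel = [[0.0] * dim for _ in range(pop_size)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    gbest = max(range(pop_size), key=lambda i: pbest_fit[i])

    for _ in range(iters):
        # PSO step: move every particle toward its personal and global bests.
        for i in range(pop_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (pbest[gbest][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
        # GA step: recombine the two fittest particles, mutate the child,
        # and replace the worst particle to keep the population diverse.
        ranked = sorted(range(pop_size), key=lambda i: fitness(pos[i]))
        a, b, worst = pos[ranked[-1]], pos[ranked[-2]], ranked[0]
        cut = random.randrange(1, dim) if dim > 1 else 1
        child = a[:cut] + b[cut:]
        pos[worst] = [g + random.gauss(0, 1) if random.random() < mut_rate else g
                      for g in child]
        # Refresh personal and global bests.
        for i in range(pop_size):
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest_fit[i], pbest[i] = f, pos[i][:]
        gbest = max(range(pop_size), key=lambda i: pbest_fit[i])
    return pbest[gbest], pbest_fit[gbest]
```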
Graph transformation is an important modeling and analysis technique that is widely applied in software engineering. Attributes can naturally conceptualize the properties of the modeled artifacts; however, ordinary propositional temporal logics cannot specify such attribute correspondences, which hampers their further application in verification. In contrast to theoretical investigations of quantified second-order logics, we propose a practical, lightweight approach for verifying this kind of temporal specification with associated attributes. In particular, we apply our approach to extend the general-purpose graph transformation modeling tool Groove. Moreover, symmetry reduction techniques are exploited to reduce the number of states. Experiments with performance evaluations complement our discussion and demonstrate the feasibility and efficiency of our approach.
Silicon-based digital cameras can record both visible and near-infrared (NIR) information, where the full-color visible (RGB) image must be restored by color filter array (CFA) interpolation. In this paper, we propose a unified framework for CFA interpolation and visible/NIR image combination. To obtain a high-quality color image, traditional color interpolation from the raw CFA data is improved at each pixel by constraining the gradient difference against the corresponding monochromatic NIR image. Experiments indicate that this hybrid scheme effectively acquires joint color and NIR information in real time, and that it generates a better color image than treating interpolation and fusion separately.
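One simple way to read the gradient constraint is to let NIR gradients steer the interpolation direction for missing color samples. The toy sketch below does this for the green channel of an assumed RGGB Bayer layout; it illustrates the general idea only, not the paper's exact formulation:

```python
import numpy as np

def nir_guided_green(cfa, nir):
    """Interpolate the green channel of a Bayer CFA, choosing the
    interpolation direction at each missing pixel from NIR gradients.

    cfa, nir: 2-D arrays of the same shape (raw mosaic and NIR image).
    Assumes an RGGB layout, so green samples sit where (row + col) is odd.
    """
    H, W = cfa.shape
    green = cfa.astype(float).copy()
    is_green = (np.add.outer(np.arange(H), np.arange(W)) % 2) == 1
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if is_green[y, x]:
                continue  # already a measured green sample
            # NIR gradients stand in for the unknown green gradients.
            gh = abs(float(nir[y, x - 1]) - float(nir[y, x + 1]))
            gv = abs(float(nir[y - 1, x]) - float(nir[y + 1, x]))
            if gh < gv:   # smoother horizontally: average left/right greens
                green[y, x] = 0.5 * (green[y, x - 1] + green[y, x + 1])
            else:         # smoother vertically: average up/down greens
                green[y, x] = 0.5 * (green[y - 1, x] + green[y + 1, x])
    return green
```

In an RGGB mosaic, the horizontal and vertical neighbors of every non-green site are green samples, so the averages above always draw on measured values.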
Patch-level features are essential for achieving good performance in computer vision tasks. Besides well-known pre-defined patch-level descriptors such as the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HOG), the kernel descriptor (KD) method [1] offers a new way to “grow-up” features from a match-kernel defined over image patch pairs using kernel principal component analysis (KPCA) and yields impressive results.
In this paper, we present the efficient kernel descriptor (EKD) and the efficient hierarchical kernel descriptor (EHKD), which are built upon incomplete Cholesky decomposition. EKD automatically selects a small number of pivot features for generating patch-level features, achieving better computational efficiency. EHKD recursively applies EKD to form image-level features layer by layer. Perhaps due to parsimony, we find, surprisingly, that EKD and EHKD achieve competitive results on several public datasets compared with other state-of-the-art methods, at an improved efficiency over KD.
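For reference, here is a minimal sketch of pivoted incomplete Cholesky decomposition of a kernel matrix, the building block EKD rests on; the greedy pivot selection shown is the standard textbook variant, assumed for illustration rather than taken from the paper:

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6, max_pivots=None):
    """Greedy pivoted incomplete Cholesky of a kernel matrix K (n x n).

    Returns G (n x m) with K ~= G @ G.T and the m selected pivot indices;
    the pivots play the role of the small feature subset EKD selects.
    """
    n = K.shape[0]
    m_max = max_pivots or n
    G = np.zeros((n, m_max))
    diag = np.diag(K).astype(float).copy()   # residual diagonal
    pivots = []
    for m in range(m_max):
        i = int(np.argmax(diag))             # largest residual: next pivot
        if diag[i] <= tol:
            break                            # approximation is good enough
        pivots.append(i)
        G[:, m] = (K[:, i] - G[:, :m] @ G[i, :m]) / np.sqrt(diag[i])
        diag -= G[:, m] ** 2
        diag[diag < 0] = 0.0                 # guard against rounding error
    return G[:, :len(pivots)], pivots
```

Because the loop stops once the residual diagonal falls below `tol`, only a small number of pivot columns are computed, which is where the efficiency gain over full KPCA comes from.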
Research on biometrics for high-security applications has not attracted as much attention as civilian or forensic applications. Limited research and deficient analysis so far have led to a lack of general solutions and leave this as a challenging issue. This work provides a systematic analysis and identification of the problems that must be solved to meet the performance requirements of high-security applications, a "double low" problem: both the false acceptance rate and the false rejection rate must be kept low. A hybrid ensemble framework is proposed to solve this problem: setting an adequately high threshold for each matcher guarantees a zero false acceptance rate (FAR), and the hybrid ensemble framework then makes the false rejection rate (FRR) as low as possible. Three experiments are performed to verify the effectiveness and generalization of the framework. First, two fingerprint verification algorithms are fused; in this test only 10.55% of fingerprints are falsely rejected at zero false acceptance rate, significantly fewer than with other state-of-the-art methods. Second, in face verification, the framework also yields a large reduction in incorrect classifications. Finally, assessing the framework on a combination of face and gait verification over a heterogeneous database shows that it can achieve 0% false rejection and 0% false acceptance simultaneously.
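A minimal sketch of one natural reading of this scheme, in which a claim is accepted if any matcher exceeds its own zero-FAR threshold; the calibration on observed impostor scores and all names are illustrative assumptions:

```python
def zero_far_threshold(impostor_scores, margin=1e-6):
    """Smallest threshold that rejects every impostor score seen so far.

    Zero FAR is guaranteed only with respect to the impostor set used
    for calibration, which is why the threshold must be set adequately high.
    """
    return max(impostor_scores) + margin

def hybrid_ensemble_decision(scores, thresholds):
    """Accept a claim if ANY matcher exceeds its own zero-FAR threshold.

    Since each matcher alone yields zero FAR on its impostor set, the
    union of their accept regions keeps FAR at zero while shrinking the
    set of genuine users that are falsely rejected (lower FRR).
    """
    return any(s > t for s, t in zip(scores, thresholds))
```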
Pedestrian detection is a critical problem in computer vision. Although most existing algorithms detect pedestrians well in controlled environments, it is often difficult to achieve accurate detection from video sequences alone, especially in pedestrian-intensive scenes where pedestrians may occlude one another and thus be detected only partially. To surmount these difficulties, this paper presents a pedestrian detection algorithm based on video sequences and laser point clouds. First, the laser point cloud is interpreted and classified to separate pedestrian data from vehicle data. Then the video image data and laser point cloud data are fused by calibration. The region of interest after fusion is determined using the feature information contained in the video image and the three-dimensional information of the laser point cloud, removing false pedestrian detections and thus achieving pedestrian detection in intensive scenes. Experimental verification and analysis on video sequences demonstrate that fusing the two data sources improves pedestrian detection performance and yields better detection results.
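The calibration step amounts to projecting laser points into the image plane using the extrinsic and intrinsic camera parameters. Below is a standard pinhole-projection sketch, assuming calibration matrices are already known; it shows the geometric core of the fusion, not the paper's specific calibration procedure:

```python
import numpy as np

def project_points(points, R, t, K):
    """Project 3-D laser points into the image plane.

    points: (N, 3) array in the laser frame.
    R, t:   extrinsic rotation (3x3) and translation (3,), laser -> camera.
    K:      camera intrinsic matrix (3x3).
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    cam = points @ R.T + t            # laser frame -> camera frame
    in_front = cam[:, 2] > 0          # only points ahead of the image plane
    proj = cam @ K.T                  # apply intrinsics
    pix = proj[:, :2] / proj[:, 2:3]  # perspective divide
    return pix, in_front
```

Once projected, each image-space detection region can be checked against the 3-D extent of the point cluster falling inside it, which is what allows false pedestrian detections to be pruned.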
Video-on-demand (VoD) services have become popular on the Internet in recent years. In VoD, it is challenging to support the VCR functionality, especially jumps, while maintaining a smooth streaming quality. Previous studies propose to solve this problem by predicting the jump target locations and prefetching the contents. However, through our analysis of traces from a real-world VoD service, we find that it is fundamentally difficult to improve a viewer's VCR experience by simply predicting future jumps while ignoring the intentions behind them.
Instead of the prediction-based approach, in this paper we seek to support the VCR functionality by bookmarking the videos. There are two key techniques in our proposed methodology. First, we infer and differentiate viewers' intentions behind VCR jumps by decomposing the inter-seek times using an expectation-maximization (EM) algorithm, and combine the decomposed inter-seek times with the VCR jumps to compute a numerical interest score for each video segment. Second, based on the interest scores, we propose an automated video bookmarking algorithm. The algorithm employs the time-series change detection techniques CUSUM and MB-GT, and bookmarks videos by detecting abrupt changes in their interest score sequences. We evaluate our proposed techniques using real-world VoD traces from dozens of videos. Experimental results suggest that with our methods, viewers' interests within a video can be precisely extracted, and bookmarks can be positioned accurately on a video's highlight events. Our proposed video bookmarking methodology requires no knowledge of video type, contents, or semantics, and can be applied to various types of videos.
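As an illustration of the change-detection step, here is a generic two-sided CUSUM detector over an interest-score sequence; the drift and threshold parameters and the simple level tracking are assumptions for the sketch, not the paper's exact CUSUM/MB-GT formulation:

```python
def cusum_changes(scores, drift=0.5, threshold=4.0, alpha=0.1):
    """Flag indices where the interest-score sequence changes abruptly.

    Classic two-sided CUSUM: accumulate positive and negative deviations
    from a tracked level and report a change when either accumulator
    crosses the threshold. Flagged indices are candidate bookmarks.
    """
    level = scores[0]
    g_pos = g_neg = 0.0
    changes = []
    for i, x in enumerate(scores):
        g_pos = max(0.0, g_pos + (x - level) - drift)
        g_neg = max(0.0, g_neg + (level - x) - drift)
        if g_pos > threshold or g_neg > threshold:
            changes.append(i)                    # candidate bookmark position
            g_pos = g_neg = 0.0
            level = x                            # restart around the new level
        else:
            level = (1 - alpha) * level + alpha * x   # slowly track the level
    return changes
```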
Improving the performance of sparse matrix-vector multiplication (SpMV) is an important task, and a difficult one because of SpMV's irregular memory access pattern. General-purpose GPUs (GPGPUs) provide high computing ability and substantial bandwidth, which SpMV cannot fully exploit due to this irregularity. In this paper, we propose two novel methods to optimize the memory bandwidth of SpMV on GPGPUs. First, a new storage format is proposed to exploit the memory bandwidth of the GPU architecture more efficiently, packing as many non-zeros as possible into a layout suited to the GPU's memory system. Second, we propose a cache blocking method to improve the performance of SpMV on the GPU architecture: the sparse matrix is partitioned into sub-blocks stored in CSR format, so that the part of the vector corresponding to each sub-block can be kept in cache and reused.
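A CPU-side SciPy sketch of the column-blocking idea follows; the paper's implementation targets GPUs, and `block_cols` is an illustrative parameter:

```python
import numpy as np
from scipy.sparse import csr_matrix

def blocked_spmv(A, x, block_cols=1024):
    """Compute y = A @ x with A (SciPy CSR) split into column blocks.

    Each column block is kept in CSR form, mirroring the paper's
    sub-block partitioning, so only a small slice of x is touched per
    pass and can stay resident in cache.
    """
    n_rows, n_cols = A.shape
    y = np.zeros(n_rows)
    for start in range(0, n_cols, block_cols):
        end = min(start + block_cols, n_cols)
        block = A[:, start:end].tocsr()   # sub-block stored in CSR
        y += block @ x[start:end]         # reuses the cached x slice
    return y
```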
Photon mapping is widely used for global illumination rendering because of its high computational efficiency. However, that efficiency is still limited, mainly by the intensive sampling required in final gathering, a process that is critical for removing the low-frequency artifacts of density estimation. In this paper, we propose a method to predict the final gathering estimate from direct density estimation, thereby achieving high-quality global illumination by photon mapping with high efficiency. We first sample the irradiance of a subset of shading points by both final gathering and direct radiance estimation. We then use these samples as a training set to predict, through regression, the final gathered irradiance at the other shading points. Consequently, we achieve about a three-times overall speedup over straightforward final gathering in global illumination computation, with the same rendering quality.
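The abstract does not name the regressor, so the sketch below uses ridge regression as a stand-in: it fits a mapping from direct-estimate features to final-gathered irradiance on the sampled subset and applies it to the remaining shading points. All names are illustrative:

```python
import numpy as np

def fit_irradiance_regressor(direct_feats, gathered, lam=1e-3):
    """Ridge regression from direct-estimate features (n x d) to the
    final-gathered irradiance (n,), trained on the sampled subset."""
    X = np.hstack([direct_feats, np.ones((len(direct_feats), 1))])  # bias column
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ gathered)
    return w

def predict_irradiance(direct_feats, w):
    """Predict final-gathered irradiance at the remaining shading points."""
    X = np.hstack([direct_feats, np.ones((len(direct_feats), 1))])
    return X @ w
```

The expensive final gathering is thus paid only for the training subset; every other shading point costs one cheap direct estimate plus a dot product.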
Context modelling involves a) characterizing a situation with related information, and b) handling and storing that information in a computer-understandable form. It is the keystone that enables a system to perceive its situation and adapt its functionality appropriately. However, a context model focusing on the characteristics of work-based learning has not been well studied. To address this issue, we first analyze several existing context models to identify the essentials of context modelling, and on that basis propose a hierarchical ontology context model to characterize work-based learning. We then present the application of the proposed model in a work-based learning scenario to provide adapted learning support to professionals. This work therefore has significance in both theory and practice.
This paper proposes a sequential model of bargaining that specifies the reasoning processes of agents behind bargaining procedures. We encode agents' background knowledge, demands, and bargaining constraints in logic programs and represent bargaining outcomes as answer sets. We assume that in each bargaining situation, each agent has a set of goals to achieve, which are normally unachievable without an agreement among all the agents involved in the bargaining. Through an alternating-offers procedure, an agreement among bargaining agents may be reached by abductive reasoning. We show that the procedure converges to a Nash equilibrium if each agent makes rational offers/counteroffers in each round. In addition, the sequential model has a number of desirable properties, such as mutual commitment, individual rationality, satisfactoriness, and honesty.
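A toy sketch of the alternating-offers protocol's shape follows; it replaces the paper's answer-set and abductive machinery with plain acceptability sets and preference functions, all of which are illustrative assumptions:

```python
def alternating_offers(agents, max_rounds=100):
    """Toy alternating-offers loop.

    agents: list of (name, acceptable_deals, preference) tuples, where
    acceptable_deals is a set and preference ranks deals (higher is better).
    Each agent in turn proposes its best deal among those not yet rejected;
    bargaining ends when an offer is acceptable to every agent.
    """
    rejected = set()
    for rnd in range(max_rounds):
        _name, acceptable, pref = agents[rnd % len(agents)]
        candidates = [d for d in acceptable if d not in rejected]
        if not candidates:
            return None                      # proposer has nothing left to offer
        offer = max(candidates, key=pref)    # rational: best remaining deal
        if all(offer in acc for _, acc, _ in agents):
            return offer                     # agreement reached
        rejected.add(offer)                  # counteroffer comes next round
    return None
```

Because each rational proposer concedes one deal per round, the set of live candidates shrinks monotonically, which is the intuition behind the convergence result.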
In the robot soccer competition platform, current confrontation decision-making systems suffer from difficulties in optimization and adaptability. We therefore propose a new self-adaptive decision-making (SADM) strategy. SADM compensates for the restrictions of the robots' physical movement control by updating the task assignment and role assignment modules using situation assessment techniques. It employs a self-adaptive role assignment model that helps the soccer robots adapt to competition situations in real time, much as humans do, and it builds an accurate motion model for each robot to improve the competitive ability of the individual robot. Experimental results show that SADM adapts quickly and effectively to new competition situations and performs excellently in actual competition.
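One plausible shape for a situation-driven role assignment step is a greedy cost-based matching, re-run every control cycle so that roles track the current situation assessment; the cost function and all names below are illustrative assumptions, not SADM's actual model:

```python
def assign_roles(robots, roles, cost):
    """Greedy self-adaptive role assignment, re-evaluated every cycle.

    robots, roles: lists of identifiers (e.g. strings).
    cost(robot, role): situation-dependent cost, e.g. the robot's distance
    to the ball for an 'attacker' role. Lower cost is better.
    """
    pairs = sorted((cost(r, k), r, k) for r in robots for k in roles)
    taken_robots, taken_roles, assignment = set(), set(), {}
    for _c, r, k in pairs:                       # cheapest pairings first
        if r not in taken_robots and k not in taken_roles:
            assignment[r] = k
            taken_robots.add(r)
            taken_roles.add(k)
    return assignment

# Example: roles follow whoever is currently closest to each duty.
positions = {"r1": (0.0, 1.0), "r2": (3.0, 2.0), "r3": (5.0, 0.0)}
targets = {"attacker": (4.0, 1.0), "defender": (1.0, 1.0), "keeper": (0.0, 0.0)}
dist = lambda r, k: sum((a - b) ** 2 for a, b in zip(positions[r], targets[k]))
print(assign_roles(list(positions), list(targets), dist))
```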