In recent years, the types of devices used to access information systems have increased notably, with different operating systems, screen sizes, interaction mechanisms, and software features. This device fragmentation is an important issue to tackle when developing native mobile service front-end applications. To address this issue, we propose the generation of native user interfaces (UIs) by means of model transformations, following the model-based user interface (MBUI) paradigm. The resulting MBUI framework, called LIZARD, generates applications for multiple target platforms. LIZARD allows the definition of applications at a high level of abstraction and applies model transformations to generate the target native UI, taking into account the specific features of each target platform. The generated applications follow the UI design guidelines and the architectural and design patterns specified by the corresponding operating system manufacturer. The objective is not to generate generic applications following a lowest-common-denominator approach, but to follow the particular guidelines specified for each target device. We present an example application modeled in LIZARD, generating different UIs for Windows Phone and two types of Android devices (smartphones and tablets).
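The sketch below illustrates the model-transformation idea behind this approach: a single abstract UI description is transformed into platform-specific layout markup for two targets. The metamodel and the generated markup are illustrative assumptions, not LIZARD's actual formats.

```python
# Minimal sketch of the MBUI idea: an abstract UI model is transformed into
# platform-specific UI descriptions. The metamodel and target markup below are
# illustrative assumptions, not LIZARD's actual formats.

abstract_model = {
    "screen": "TaskList",
    "widgets": [
        {"type": "list", "binds_to": "tasks"},
        {"type": "button", "label": "Add task", "action": "create_task"},
    ],
}

def to_android_layout(model):
    """Transform the abstract model into an Android-style XML layout string."""
    rows = []
    for w in model["widgets"]:
        if w["type"] == "list":
            rows.append('  <ListView android:id="@+id/%s"/>' % w["binds_to"])
        elif w["type"] == "button":
            rows.append('  <Button android:text="%s"/>' % w["label"])
    return "<LinearLayout>\n%s\n</LinearLayout>" % "\n".join(rows)

def to_windows_phone_layout(model):
    """Transform the same model into a XAML-style layout string."""
    rows = []
    for w in model["widgets"]:
        if w["type"] == "list":
            rows.append('  <ListBox ItemsSource="{Binding %s}"/>' % w["binds_to"])
        elif w["type"] == "button":
            rows.append('  <Button Content="%s"/>' % w["label"])
    return "<StackPanel>\n%s\n</StackPanel>" % "\n".join(rows)

if __name__ == "__main__":
    print(to_android_layout(abstract_model))
    print(to_windows_phone_layout(abstract_model))
```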
Multi-core homogeneous processors have been widely used to deal with computation-intensive embedded applications. However, with the continuous downscaling of CMOS technology, within-die variations in the manufacturing process lead to a significant spread in the operating speeds of cores within homogeneous multi-core processors. Task scheduling approaches that do not consider the heterogeneity caused by within-die variations can lead to overly pessimistic results in terms of performance. To achieve optimal performance according to the actual maximum clock frequencies at which cores can run, we present a heterogeneity-aware schedule refining (HASR) scheme that fully exploits the heterogeneity of homogeneous multi-core processors in embedded domains. We analyze and show how the actual maximum frequencies of cores are used to guide the scheduling. In the scheme, representative chip operating points are selected and the corresponding optimal schedules are generated as candidate schedules. During the booting of each chip, according to the actual maximum clock frequencies of the cores, one of the candidate schedules is bound to the chip to maximize performance. A set of applications was designed to evaluate the proposed scheme. Experimental results show that the proposed scheme improves performance by an average of 22.2% compared with the baseline schedule based on worst-case timing analysis. Compared with the conventional task scheduling approach based on the actual maximum clock frequencies, the proposed scheme also improves performance by up to 12%.
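The sketch below illustrates the boot-time binding step: each candidate schedule is associated with a representative operating point (per-core maximum frequencies), and the chip selects a feasible candidate that best matches its measured frequencies. The nearest-feasible-point selection rule is an illustrative assumption, not the paper's exact HASR criterion.

```python
# Minimal sketch of boot-time schedule binding: candidate schedules, generated
# offline for representative chip operating points, are matched against the
# per-core maximum frequencies measured on the actual chip. The data and the
# selection rule are illustrative assumptions.

CANDIDATES = {
    # operating point (GHz per core) -> precomputed task-to-core schedule
    (2.0, 2.0, 1.8, 1.8): {"t1": 0, "t2": 1, "t3": 2, "t4": 3},
    (2.2, 2.0, 1.6, 1.6): {"t1": 0, "t2": 0, "t3": 1, "t4": 2},
    (1.8, 1.8, 1.8, 1.8): {"t1": 0, "t2": 1, "t3": 2, "t4": 3},
}

def bind_schedule(measured_freqs):
    """Pick the candidate whose operating point does not exceed the measured
    per-core frequencies and lies closest to them."""
    feasible = [
        (point, sched) for point, sched in CANDIDATES.items()
        if all(p <= m for p, m in zip(point, measured_freqs))
    ]
    if not feasible:
        raise RuntimeError("no feasible candidate; fall back to worst-case schedule")
    point, sched = min(
        feasible,
        key=lambda ps: sum((m - p) ** 2 for p, m in zip(ps[0], measured_freqs)),
    )
    return point, sched

if __name__ == "__main__":
    # frequencies measured on one particular chip at boot (illustrative)
    print(bind_schedule((2.1, 2.0, 1.9, 1.9)))
```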
A power monitoring and protection system based on an embedded processor was designed for the junction boxes (JBs) of an experimental seafloor observatory network in China. The system exhibits high reliability, fast response, and high real-time performance. A two-step power management method, which uses metal-oxide-semiconductor field-effect transistors (MOSFETs) and a mechanical contactor in series, was adopted to provide a reliable power switch, limit surge currents, and facilitate automatic protection. Grounding fault diagnosis and environmental monitoring were conducted by designing a grounding fault detection circuit and by using selected sensors, respectively. The data collected from the JBs must be time-stamped for analysis and for correlation with other events and data. A highly precise system time, which is necessary for synchronizing the times within and across nodes, was generated through the IEEE 1588 (precision clock synchronization protocol for networked measurement and control systems) time synchronization method. In this method, time packets are exchanged between the grandmaster clock at the shore station and the slave clock module of the system. All sections were verified individually in the laboratory prior to a sea trial. Finally, a subsystem for power monitoring and protection was integrated into the complete node system, installed in a frame, and deployed in the South China Sea. Results of the laboratory and sea trial experiments demonstrated that the developed system was effective, stable, reliable, and suitable for continuous deep-sea operation.
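The sketch below shows the standard IEEE 1588 offset and mean-path-delay computation from the four timestamps of a Sync/Delay_Req exchange, which underlies the time synchronization described above; the timestamp values are illustrative.

```python
# Minimal sketch of the IEEE 1588 (PTP) exchange: the slave records four
# timestamps from the Sync/Delay_Req message exchange with the grandmaster and
# computes its clock offset and the mean path delay. Values are illustrative.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent by master; t2: Sync received by slave;
    t3: Delay_Req sent by slave; t4: Delay_Req received by master.
    Assumes a symmetric network path, as in standard PTP."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0          # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

if __name__ == "__main__":
    # seconds; here the slave is ahead of the master by 1.5 ms over a 0.5 ms path
    off, delay = ptp_offset_and_delay(100.000000, 100.002000, 100.010000, 100.009000)
    print("offset:", off, "mean path delay:", delay)
```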
With the development of face recognition using sparse representation based classification (SRC), many relevant methods have been proposed and investigated. However, when the dictionary is large and the representation is sparse, only a small proportion of the elements contributes to the l1-minimization. Based on this observation, several approaches have been developed to carry out an efficient element selection procedure before SRC. In this paper, we employ a metric learning approach that helps find the active elements correctly by taking into account the interclass/intraclass relationship and the manifold structure of face images. After the metric has been learned, a neighborhood graph is constructed in the projected space. A fast marching algorithm is used to rapidly select the subset from the graph, and SRC is applied for classification. Experimental results show that our method achieves promising performance and a significant efficiency enhancement.
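The sketch below illustrates the overall pipeline of element selection followed by SRC. For brevity, the learned metric and fast marching selection are replaced by a plain nearest-neighbour pre-selection and the l1 step is approximated with Lasso; it is a simplified stand-in, not the paper's algorithm.

```python
# Simplified sketch of "select a subset of dictionary atoms, then classify by
# sparse-representation residuals (SRC)". The selection and l1 solver below are
# stand-ins for the learned-metric + fast-marching procedure in the paper.

import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, n_selected=50, alpha=0.01):
    """D: (d, n) dictionary of training samples (columns l2-normalised),
    labels: (n,) class of each column, y: (d,) query sample."""
    # 1) element selection: keep the n_selected columns closest to the query
    dist = np.linalg.norm(D - y[:, None], axis=0)
    idx = np.argsort(dist)[:n_selected]
    D_sel, lab_sel = D[:, idx], labels[idx]
    # 2) sparse coding over the selected atoms (l1-regularised least squares)
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D_sel, y).coef_
    # 3) classify by the class with the smallest reconstruction residual
    residuals = {
        c: np.linalg.norm(y - D_sel[:, lab_sel == c] @ x[lab_sel == c])
        for c in np.unique(lab_sel)
    }
    return min(residuals, key=residuals.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 200))
    D /= np.linalg.norm(D, axis=0)
    labels = np.repeat(np.arange(10), 20)
    y = D[:, 5] + 0.05 * rng.standard_normal(64)   # noisy copy of a class-0 atom
    print(src_classify(D, labels, y))
```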
One recent area of interest in computer science is data stream management and processing. By ‘data stream’ we refer to continuous and rapidly generated packages of data. The specific features of data streams, namely immense volume, high production rate, limited data processing time, and concept drift, differentiate them from standard types of data. A key issue for data streams is the classification of input data. A novel ensemble classifier is proposed in this paper. The classifier uses base classifiers with two weighting functions under different data input conditions. In addition, a new method is used to detect drift, which increases the precision of the algorithm. Another characteristic of the proposed method is the removal of varying numbers of base classifiers based on their quality. Applying a weighting mechanism to the base classifiers at the decision-making stage is another advantage of the algorithm. This facilitates adaptability when drifts take place, which leads to classifiers with higher efficiency. Furthermore, the proposed method is tested on a set of standard datasets, and the results confirm higher accuracy compared with available ensemble classifiers and single classifiers. In addition, in some cases the proposed classifier is faster and needs less storage space.
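The sketch below illustrates the general idea of a weighted stream ensemble with drift-triggered pruning: member weights are updated on each data chunk, predictions use weighted voting, and low-quality members are removed when a drift signal fires. The weighting rule and drift test are simplified placeholders, not the paper's exact mechanisms.

```python
# Simplified sketch of a weighted stream ensemble with drift-triggered pruning.
# The weighting rule and drift test are placeholders, not the paper's formulas.

import numpy as np
from sklearn.naive_bayes import GaussianNB

class WeightedStreamEnsemble:
    def __init__(self, max_members=5, quality_floor=0.55):
        self.members = []          # list of (classifier, weight)
        self.max_members = max_members
        self.quality_floor = quality_floor

    def predict(self, X):
        """Weighted majority vote over the current members."""
        votes = {}
        for clf, w in self.members:
            for i, c in enumerate(clf.predict(X)):
                votes.setdefault(i, {}).setdefault(c, 0.0)
                votes[i][c] += w
        return np.array([max(v, key=v.get) for _, v in sorted(votes.items())])

    def partial_fit(self, X, y):
        # update member weights from accuracy on the new chunk
        for i, (clf, _) in enumerate(self.members):
            self.members[i] = (clf, float(np.mean(clf.predict(X) == y)))
        # crude drift signal: the whole ensemble performs poorly on the chunk
        drift = self.members and np.mean(self.predict(X) == y) < self.quality_floor
        if drift:
            self.members = [(c, w) for c, w in self.members if w >= self.quality_floor]
        # train a new member on the chunk, add it, and keep the best members
        self.members.append((GaussianNB().fit(X, y), 1.0))
        self.members = sorted(self.members, key=lambda m: -m[1])[: self.max_members]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ens = WeightedStreamEnsemble()
    for chunk in range(10):
        X = rng.standard_normal((200, 5)) + (chunk > 5)   # abrupt drift after chunk 5
        y = (X[:, 0] > (chunk > 5)).astype(int)
        ens.partial_fit(X, y)
    print("members kept:", len(ens.members))
```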
Recently, dictionary learning (DL) based methods have been introduced to compressed sensing magnetic resonance imaging (CS-MRI), and they outperform pre-defined analytic sparse priors. However, a single-scale dictionary trained directly from image patches is incapable of representing image features from a multi-scale, multi-directional perspective, which limits the reconstruction performance. In this paper, combining the superior multi-scale properties of the uniform discrete curvelet transform (UDCT) with the data-matching adaptability of trained dictionaries, we propose a flexible sparsity framework that allows sparser representation and prominent capture of hierarchical essential features for magnetic resonance (MR) images. Multi-scale decomposition is implemented using UDCT because of its low redundancy ratio, hierarchical data structure, and ease of implementation. Each sub-dictionary of the different sub-bands is trained independently to form the multi-scale dictionaries. Corresponding to this new sparsity model, we modify the constrained split augmented Lagrangian shrinkage algorithm (C-SALSA) into a patch-based C-SALSA (PB C-SALSA) to solve the constrained optimization problem of regularized image reconstruction. Experimental results demonstrate that the trained sub-dictionaries at different scales, enforcing sparsity at multiple scales, can be efficiently used for MRI reconstruction and obtain satisfactory results at further reduced undersampling rates. Multi-scale UDCT dictionaries potentially outperform both single-scale trained dictionaries and multi-scale analytic transforms. The proposed sparsity model achieves a sparser representation of the reconstructed data, which results in fast convergence of the PB C-SALSA reconstruction. Simulation results demonstrate that the proposed method outperforms conventional CS-MRI methods in maintaining intrinsic properties, eliminating aliasing, reducing unexpected artifacts, and removing noise. It achieves reconstruction performance comparable to that of state-of-the-art methods even at substantially high undersampling factors.
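The sketch below illustrates the per-sub-band dictionary training step: one dictionary is learned independently from the patches of each sub-band. Placeholder arrays stand in for the UDCT coefficients, and the PB C-SALSA reconstruction is omitted.

```python
# Minimal sketch of multi-scale dictionary training: one dictionary is trained
# independently on patches drawn from each sub-band of a multi-scale
# decomposition. The UDCT sub-bands are represented by placeholder arrays, and
# the reconstruction stage is omitted; this only illustrates the training step.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def train_subband_dictionaries(subbands, patch_size=(8, 8), n_atoms=64):
    """subbands: dict mapping sub-band name -> 2-D coefficient array.
    Returns one learned dictionary (n_atoms x patch_dim) per sub-band."""
    dictionaries = {}
    for name, band in subbands.items():
        patches = extract_patches_2d(band, patch_size, max_patches=2000, random_state=0)
        patches = patches.reshape(len(patches), -1)
        patches -= patches.mean(axis=1, keepdims=True)      # remove per-patch DC
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                         batch_size=64, random_state=0)
        dictionaries[name] = dl.fit(patches).components_
    return dictionaries

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # placeholder "sub-bands" standing in for UDCT coefficients of an MR image
    bands = {"scale1_dir1": rng.standard_normal((64, 64)),
             "scale2_dir1": rng.standard_normal((128, 128))}
    dicts = train_subband_dictionaries(bands)
    print({name: d.shape for name, d in dicts.items()})
```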
With the continual increase in the switching speed and ratings of power semiconductors, the switching voltage spike becomes a serious problem. This paper describes a new technique of driving-pulse edge modulation for insulated gate bipolar transistors (IGBTs). By modulating the density and width of the pulse trains, without modifying the hardware circuit, the slope of the gate driving voltage is controlled to change the switching speed. This technique is applied in a driving circuit based on complex programmable logic devices (CPLDs), so that the switching voltage spike of IGBTs can be restrained through software, which is easier and more flexible to adjust. Experimental results demonstrate the effectiveness and practicability of the proposed method.
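The sketch below is a behavioural simulation of the pulse-edge-modulation idea: during a turn-on edge the gate is driven by a pulse train whose duty cycle ramps up, so the low-pass-filtered gate voltage rises with a limited slope compared with a hard step. Parameter values are illustrative and unrelated to the paper's CPLD implementation.

```python
# Behavioural sketch of pulse-edge modulation: a pulse train with ramping duty
# cycle drives an RC-like gate node, limiting the slope of the gate voltage
# compared with a hard step. Values are illustrative, not the CPLD settings.

import numpy as np

def modulated_edge(n_steps=400, carrier_period=20):
    """Return a 0/1 drive waveform whose duty cycle ramps from 0 to 1."""
    drive = np.zeros(n_steps)
    for k in range(n_steps):
        duty = min(1.0, 2.0 * k / n_steps)            # ramp the pulse width up
        drive[k] = 1.0 if (k % carrier_period) < duty * carrier_period else 0.0
    return drive

def rc_filter(x, alpha=0.05):
    """First-order low-pass model of the gate node (discrete RC)."""
    y = np.zeros_like(x)
    for k in range(1, len(x)):
        y[k] = y[k - 1] + alpha * (x[k] - y[k - 1])
    return y

def max_slope_over_window(v, window):
    """Largest rise of v over any window of the given length, per sample."""
    return float(np.max(v[window:] - v[:-window]) / window)

if __name__ == "__main__":
    carrier = 20
    v_mod = rc_filter(modulated_edge(carrier_period=carrier))
    v_step = rc_filter(np.ones(400))                  # hard, unmodulated edge
    print("max dV/dt, modulated edge:", max_slope_over_window(v_mod, carrier))
    print("max dV/dt, hard step:     ", max_slope_over_window(v_step, carrier))
```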