Jul 2024, Volume 22 Issue 1

  • Willetts Matthew, S. Atkins Anthony

    Big data analytics has been widely adopted by large companies to achieve measurable benefits, including increased profitability, customer demand forecasting, cheaper product development, and improved stock control. Small and medium-sized enterprises (SMEs) are the backbone of the global economy, comprising 90% of businesses worldwide. However, only 10% of SMEs have adopted big data analytics, despite the competitive advantage it could provide. Previous research has analysed the barriers to adoption, and a strategic framework has been developed to help SMEs adopt big data analytics. The framework was converted into a scoring tool, which has been applied to multiple case studies of SMEs in the UK. This paper documents the process of evaluating the framework based on structured feedback from a focus group of experienced practitioners. The results of the evaluation are presented and discussed, and the paper concludes with recommendations to improve the scoring tool based on the proposed framework. The research demonstrates that this positioning tool can help SMEs achieve competitive advantage by increasing the application of business intelligence and big data analytics.

  • Sorour Ali, S. Atkins Anthony

    Big data poses a significant challenge when building a business intelligence (BI) system, which motivates addressing this issue in higher education institutions (HEIs). Monitoring quality in HEIs involves handling huge amounts of data from different sources. This paper reviews big data and analyses cases from the literature regarding quality assurance (QA) in HEIs. It also outlines a framework that addresses the big data challenge in HEIs by handling QA monitoring with BI dashboards, and a prototype dashboard is presented. The dashboard was developed as a tool for monitoring QA in HEIs through visual representations of big data. The prototype enables stakeholders to monitor compliance with QA standards while addressing the big data challenge associated with the substantial volume of data managed by HEIs' QA systems. The paper also outlines how the developed system integrates big data from social media into the monitoring dashboard.

  • Liang Yan, Chen Song, Dong Xin, Liu Tu

    The fingerprinting-based approach using the wireless local area network (WLAN) is widely used for indoor localization. However, constructing the fingerprint database is time-consuming, and updating it in real time is difficult, especially when the position of an access point (AP) or a wall changes. Location-based services (LBSs) therefore call for an indoor localization approach with low implementation cost, excellent real-time performance, and high accuracy that fully accounts for complex indoor environments. In this paper, we propose a fine-grained grid computing (FGGC) model to achieve decimeter-level localization accuracy. Reference points (RPs) are generated in the grid by the FGGC model. Then, the received signal strength (RSS) values at each RP are calculated from attenuation factors such as the frequency band, the three-dimensional propagation distance, and walls in complex environments. As a result, the fingerprint database can be established automatically without manual measurement, and the FGGC model builds it more efficiently and at lower cost than previous methods. The proposed approach, which estimates the position step by step from a coarse grid location to a fine-grained location, achieves high real-time performance and localization accuracy simultaneously. The mean error of the proposed model is 0.36 m, far lower than that of previous approaches. Thus, the proposed model is a feasible way to improve the efficiency and accuracy of Wi-Fi indoor localization, and it maintains high accuracy and fast running speed even with a large grid size. The results indicate that the method is also suitable for precision marketing, indoor navigation, and emergency rescue.
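
    As an illustration of the idea behind automatic fingerprint construction, the sketch below predicts RSS values at grid reference points with a log-distance path-loss model plus a per-wall penalty. This is not the authors' FGGC implementation; all parameter values (transmit power, reference path loss, path-loss exponent, wall attenuation, receiver height) and the walls_between helper are illustrative assumptions.

    ```python
    import numpy as np

    # Illustrative constants (assumptions, not values from the paper).
    P_TX = 20.0        # transmit power in dBm
    PL_0 = 40.0        # path loss at the 1 m reference distance, in dB
    N_EXP = 3.0        # path-loss exponent for an indoor environment
    WALL_LOSS = 5.0    # attenuation per intervening wall, in dB

    def predicted_rss(ap_pos, rp_pos, n_walls):
        """Predict RSS (dBm) at a reference point from one access point."""
        d = max(np.linalg.norm(np.asarray(ap_pos) - np.asarray(rp_pos)), 1.0)
        path_loss = PL_0 + 10.0 * N_EXP * np.log10(d) + WALL_LOSS * n_walls
        return P_TX - path_loss

    def build_fingerprint_grid(ap_positions, xs, ys, walls_between):
        """Fill a grid of reference points with predicted RSS vectors.

        walls_between(ap, rp) should count walls crossed by the 3-D line
        of sight; here it is supplied by the caller (an assumption).
        """
        grid = {}
        for x in xs:
            for y in ys:
                rp = (x, y, 1.5)  # assume a fixed 1.5 m receiver height
                grid[(x, y)] = [predicted_rss(ap, rp, walls_between(ap, rp))
                                for ap in ap_positions]
        return grid

    # Example: two APs, a 0.5 m grid, and no walls for simplicity.
    aps = [(0.0, 0.0, 2.5), (10.0, 8.0, 2.5)]
    fp = build_fingerprint_grid(aps, np.arange(0, 10.5, 0.5),
                                np.arange(0, 8.5, 0.5),
                                walls_between=lambda ap, rp: 0)
    print(fp[(5.0, 4.0)])  # predicted RSS vector at one reference point
    ```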

  • Bachir Namat, Ali Memon Qurban

    Drone or unmanned aerial vehicle (UAV) technology has undergone significant change. It allows UAVs to carry out a wide range of increasingly sophisticated tasks, since drones can cover large areas with their cameras. Meanwhile, the growing number of computer vision applications using deep learning offers unique insight into such applications. The primary detection target in UAV-based applications is humans, yet aerial recordings are not included in the massive datasets used to train object detectors, which makes it necessary to collect training data from such platforms. You only look once (YOLO) version 4, RetinaNet, faster region-based convolutional neural network (R-CNN), and cascade R-CNN are well-known detectors that have previously been studied on a variety of datasets replicating rescue scenes. Here, we use the search and rescue (SAR) dataset to train the you only look once version 5 (YOLOv5) algorithm and validate its speed, accuracy, and low false detection rate. Compared with YOLOv4 and R-CNN, YOLOv5 obtains the highest mean average precision of 96.9%. Experimental findings using the SAR and human rescue imaging database on land (HERIDAL) datasets are presented for comparison. The results show that the YOLOv5-based approach is the most successful human detection model for SAR missions.
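
    For readers who want to reproduce the general workflow, the sketch below shows how a YOLOv5 model can be fine-tuned and run with the ultralytics/yolov5 tooling. The dataset file sar.yaml and its contents are hypothetical placeholders, not the authors' configuration; the inference image is the library's public demo image.

    ```python
    # Training workflow (shell), following the ultralytics/yolov5 README:
    #   git clone https://github.com/ultralytics/yolov5
    #   cd yolov5 && pip install -r requirements.txt
    #   python train.py --img 640 --batch 16 --epochs 100 \
    #       --data sar.yaml --weights yolov5s.pt
    #
    # sar.yaml (hypothetical layout for an aerial person-detection set):
    #   train: ../SAR/images/train
    #   val: ../SAR/images/val
    #   nc: 1
    #   names: ["person"]

    import torch

    # Load a pretrained YOLOv5s model via torch.hub and run inference on
    # the library's public demo image.
    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    results = model("https://ultralytics.com/images/zidane.jpg")
    results.print()  # prints detected classes, confidences, and counts
    ```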

  • Zhong Jia-Jun, Ma Yong, Niu Xin-Zheng, Fournier-Viger Philippe, Wang Bing, Wei Zu-kuan

    Long-term urban traffic flow prediction is an important task in intelligent transportation, as it can help optimize traffic management and improve travel efficiency. A crucial issue for prediction accuracy is how to model the spatiotemporal dependency in urban traffic data. In recent years, many studies have adopted spatiotemporal neural networks to extract key information from traffic data. However, when mining spatial dependency, most models ignore the semantic spatial similarity between long-distance areas, and they ignore the impact of already-predicted time steps on the next unpredicted step when making long-term predictions. Moreover, these models lack a comprehensive data-embedding process to represent complex spatiotemporal dependency. This paper proposes a multi-scale persistent spatiotemporal transformer (MSPSTT) model for accurate long-term traffic flow prediction in cities. To address these issues, MSPSTT adopts an encoder-decoder structure and incorporates temporal, periodic, and spatial features to fully embed urban traffic data. The model consists of a spatiotemporal encoder and a spatiotemporal decoder, which rely on temporal, geospatial, and semantic-space multi-head attention modules to dynamically extract temporal, geospatial, and semantic characteristics. The spatiotemporal decoder combines the context information provided by the encoder, integrates the predicted time-step information, and is iteratively updated to learn the correlation between time steps over a broader time range, improving the model's accuracy for long-term prediction. Experiments on four public transportation datasets demonstrate that MSPSTT outperforms existing models by up to 9.5% on three common metrics.
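
    The sketch below illustrates the general idea of factorized spatiotemporal multi-head attention (one attention over the time axis, one over the node axis) applied to a traffic tensor. It is a minimal PyTorch example, not the MSPSTT architecture; all dimensions and hyperparameters are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class SpatioTemporalBlock(nn.Module):
        """Minimal factorized attention over a (batch, time, nodes, d) tensor."""

        def __init__(self, d_model=64, n_heads=4):
            super().__init__()
            self.t_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.s_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, x):                      # x: (B, T, N, D)
            b, t, n, d = x.shape
            # Temporal attention: fold the node axis into the batch axis.
            xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
            xt = self.norm1(xt + self.t_attn(xt, xt, xt)[0])
            x = xt.reshape(b, n, t, d).permute(0, 2, 1, 3)
            # Spatial attention: fold the time axis into the batch axis.
            xs = x.reshape(b * t, n, d)
            xs = self.norm2(xs + self.s_attn(xs, xs, xs)[0])
            return xs.reshape(b, t, n, d)

    # Example: 12 time steps, 207 sensors (the METR-LA sensor count).
    x = torch.randn(2, 12, 207, 64)
    print(SpatioTemporalBlock()(x).shape)  # torch.Size([2, 12, 207, 64])
    ```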

  • Su Yang, Wu Yu-Mao, Hu Jun

    This paper builds a binary tree for the target based on bounding volume hierarchy (BVH) technology, thereby strictly accelerating the shadow judgment process and reducing the computational complexity from the original O(N³) to O(N² log N). Numerical results show that the proposed method is more efficient than the traditional method. Multiple examples verify that the proposed method achieves convergence of the current. Moreover, the method avoids errors in judging the lit-shadow relationship based on the normal vector, which benefits the current iteration and convergence. Compared with the brute-force method, it improves simulation efficiency by two orders of magnitude. The proposed method is thus well suited to scattering problems in electrically large cavities and complex scenarios.
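
    The sketch below illustrates the underlying BVH idea: a binary tree of axis-aligned bounding boxes lets a shadow query skip whole subtrees whose box the ray misses, so each query costs roughly O(log N) instead of O(N). It is a generic illustration, not the paper's method; the leaf threshold and the exact facet-intersection test (hit_facet) are assumptions left to the caller.

    ```python
    import numpy as np

    class BVHNode:
        """Generic BVH over triangular facets (illustrative sketch)."""

        def __init__(self, facets, leaf_size=2):
            # facets: list of (3, 3) arrays, one triangle's vertices each.
            pts = np.concatenate(facets)
            self.lo, self.hi = pts.min(axis=0), pts.max(axis=0)
            if len(facets) <= leaf_size:               # assumed leaf threshold
                self.facets, self.children = facets, None
                return
            axis = int(np.argmax(self.hi - self.lo))   # split the widest axis
            facets = sorted(facets, key=lambda f: float(f[:, axis].mean()))
            mid = len(facets) // 2
            self.facets = None
            self.children = (BVHNode(facets[:mid], leaf_size),
                             BVHNode(facets[mid:], leaf_size))

        def ray_hits_box(self, o, d):
            # Slab test; degenerate axis-parallel directions are not
            # special-cased here, for brevity.
            with np.errstate(divide="ignore", invalid="ignore"):
                t1, t2 = (self.lo - o) / d, (self.hi - o) / d
            tmin = float(np.minimum(t1, t2).max())
            tmax = float(np.maximum(t1, t2).min())
            return tmax >= max(tmin, 0.0)

        def occluded(self, o, d, hit_facet):
            # hit_facet(facet, o, d) is the exact ray-facet test (assumed
            # supplied by the caller); the tree only prunes candidates.
            if not self.ray_hits_box(o, d):
                return False
            if self.children is None:
                return any(hit_facet(f, o, d) for f in self.facets)
            return any(c.occluded(o, d, hit_facet) for c in self.children)
    ```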

  • Wang Peng, Guo Ji, Li Lin-Feng

    The support vector machine (SVM) is a classical machine learning method. Traditional SVMs usually use the hinge loss together with the least absolute shrinkage and selection operator (LASSO) penalty. However, the hinge loss is not differentiable, and the LASSO penalty lacks the Oracle property. In this paper, the huberized loss is combined with non-convex penalties to obtain a model that enjoys both computational simplicity and the Oracle property, yielding higher accuracy than traditional SVMs. Experiments demonstrate that the two non-convex huberized-SVM methods, smoothly clipped absolute deviation huberized-SVM (SCAD-HSVM) and minimax concave penalty huberized-SVM (MCP-HSVM), outperform the traditional SVM in prediction accuracy and classifier performance. They are also superior in variable selection, especially when the variables are highly linearly correlated. When applied to predicting the financial distress of listed companies, they accurately filter out the variables that affect and predict financial distress. Among all the indicators, per-share indicators have the greatest influence, while solvency indicators have the weakest. Listed companies can assess their financial situation with the indicators screened by our algorithm and obtain early warning of possible financial distress with higher precision.
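
    For concreteness, the sketch below implements the two ingredients named above in their standard textbook forms: the huberized hinge loss (a differentiable surrogate for the hinge) and the minimax concave penalty (MCP). The smoothing parameter delta and the MCP parameters lam and gamma are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    def huberized_hinge(margin, delta=2.0):
        """Differentiable surrogate for the hinge loss max(0, 1 - margin):
        zero for margin >= 1, quadratic on (1 - delta, 1), linear below."""
        m = np.asarray(margin, dtype=float)
        out = np.zeros_like(m)
        mid = (m > 1 - delta) & (m < 1)          # quadratic smoothing region
        out[mid] = (1 - m[mid]) ** 2 / (2 * delta)
        out[m <= 1 - delta] = 1 - m[m <= 1 - delta] - delta / 2
        return out

    def mcp_penalty(beta, lam=0.1, gamma=3.0):
        """Minimax concave penalty: LASSO-like near 0, flat for large |beta|."""
        b = np.abs(np.asarray(beta, dtype=float))
        pen = np.where(b <= gamma * lam,
                       lam * b - b ** 2 / (2 * gamma),
                       gamma * lam ** 2 / 2)
        return pen.sum()

    def objective(w, X, y):
        """Penalized empirical risk for a linear classifier, y in {-1, +1}."""
        return huberized_hinge(y * (X @ w)).mean() + mcp_penalty(w)

    # Toy usage on synthetic data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    w = np.array([1.0, -0.5, 0.0, 0.0, 0.2])
    y = np.sign(X @ w + 0.1 * rng.normal(size=100))
    print(objective(w, X, y))
    ```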