Mathematical programming solvers are software tools that apply mathematical programming algorithms to solve real-world optimization problems. This survey explores the evolution of optimization technologies, from traditional methods such as the simplex algorithm and branch-and-bound to modern advancements facilitated by parallel computing, GPU acceleration, and AI algorithms. We also highlight the recent emergence of mathematical programming solvers developed by research institutes and companies headquartered in China, which have achieved remarkable success in benchmarks against established solvers. This article provides a comprehensive overview of the theoretical foundations, historical progress, and emerging trends in mathematical programming solvers, offering valuable insights for both researchers and practitioners in the field.
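To make the abstract concrete, a minimal sketch of what such a solver does: the toy linear program below (invented for illustration) is solved with SciPy's `linprog`, whose default `highs` method wraps the open-source HiGHS solver mentioned in the literature on modern LP/MIP codes.

```python
# Toy LP: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize.
res = linprog(
    c=[-3, -2],
    A_ub=[[1, 1], [1, 3]],
    b_ub=[4, 6],
    bounds=[(0, None), (0, None)],
    method="highs",           # HiGHS: a modern open-source LP/MIP solver
)
print(res.x, -res.fun)        # optimal at x=4, y=0, objective 12
```

Branch-and-bound extends this building block to mixed-integer programs by solving a tree of such LP relaxations.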
Serial multistage manufacturing systems (SMMS), comprising multiple consecutive stages, are widely adopted in modern industry. At each stage, quality-related components (QRCs) refer to machine parts that directly impact product quality, while key product characteristics (KPCs) reflect both product performance and customer requirements. The quality of KPCs serves as an indicator of both product quality and machine condition, whereas the condition of QRCs reflects component health and provides early warnings of potential quality issues. Simultaneously monitoring both KPCs and QRCs across all stages is vital for ensuring system reliability and maintaining consistent product quality. To the best of our knowledge, existing studies on SMMS have primarily focused on the economic design of condition-based maintenance (CBM) strategies for monitoring either QRCs or KPCs individually, while their joint monitoring has received limited attention. To address this gap, this study proposes a cost-effective joint CBM strategy for SMMS over a finite horizon. A multivariate generalized likelihood ratio (MGLR) chart is employed to monitor the variations in KPCs, and the hazard rate of QRCs is evaluated using a proportional hazards model (PHM). Alarm scenarios and cost formulations are developed over a finite horizon, and a simulation framework is established to calculate the expected total cost (ETC). Subsequently, a simulation-based genetic algorithm is employed to minimize the ETC. Finally, the proposed strategy is validated on the four-stage scroll machining process in a compressor manufacturing system, demonstrating its effectiveness.
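A minimal sketch of the PHM idea referenced above: the hazard rate of a QRC combines a baseline hazard (here Weibull-shaped) with a covariate term driven by the observed component condition. All parameter values and the covariate are invented for illustration, not taken from the paper.

```python
import numpy as np

# Weibull baseline hazard h0(t) = (beta/eta) * (t/eta)**(beta-1), scaled by
# a covariate term exp(gamma . z), where z is the observed QRC condition.
def phm_hazard(t, z, beta=2.0, eta=100.0, gamma=np.array([0.05])):
    baseline = (beta / eta) * (t / eta) ** (beta - 1)
    return baseline * np.exp(gamma @ np.atleast_1d(z))

# The hazard rises both with component age t and with degradation level z,
# which is what triggers condition-based maintenance alarms.
h_young = phm_hazard(t=10.0, z=1.0)
h_old = phm_hazard(t=50.0, z=8.0)
print(h_young, h_old)
```

In a full joint strategy, thresholds on this hazard (for QRCs) and on the MGLR statistic (for KPCs) would be the decision variables the genetic algorithm tunes to minimize the simulated ETC.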
Quality control (QC) serves as a cornerstone of modern manufacturing, exerting a decisive influence on production efficiency, product reliability and customer satisfaction. However, traditional QC systems, which largely rely on rule-based frameworks and narrowly defined statistical methods, face increasing limitations in handling the scale, diversity and complexity of contemporary industrial data. These limitations provide strong motivation to explore the potential of large models (LMs) for advancing QC. Distinguished by their powerful capabilities in knowledge integration, contextual understanding and adaptive reasoning, LMs offer transformative opportunities to modernize QC. This review begins by analyzing why LMs are particularly well positioned to enhance QC, focusing on three crucial dimensions: input alignment, which enables seamless integration of heterogeneous data sources; task adaptability, which supports associative learning across multiple QC tasks and allows knowledge transfer; and augmented intelligence, which supports human experts in complex decision-making. Recent advances in industrial applications are summarized, with particular attention to methodological innovations, deployment practices and integration pathways into manufacturing workflows. To systematically structure the current landscape, the key challenges are categorized into three interrelated dimensions, i.e., data, model and evaluation, which correspond to the core requirements for model training, practical implementation and sustainable adaptability in real-world scenarios. Building on this foundation, the review further outlines future research directions, highlighting secure data collaboration, system-level integration and continual learning under dynamic environments as critical priorities for the next stage of development. Collectively, these insights underscore the promise of LMs in reshaping QC into an intelligent, resilient and future-ready paradigm.
In prognostics and health management (PHM), degradation modeling plays a central role in reliability analysis and lifetime prediction. The inverse Gaussian (IG) process has recently attracted increasing attention for its ability to describe monotonic and cumulative degradation with heavy-tailed behavior, analytical tractability, and clear physical interpretability. Meanwhile, the rapid development of artificial intelligence (AI) has created new opportunities to combine statistical modeling with learning-based approaches in reliability analysis. This paper presents a comprehensive review of IG-process-based degradation modeling, covering its theoretical foundations, model extensions, parameter estimation, and diagnostic methods. Applications in accelerated degradation test design, burn-in testing, remaining useful life prediction, and maintenance optimization are systematically summarized. Recent progress on AI-integrated IG frameworks is also reviewed and critically assessed. In addition, key challenges and research opportunities are discussed to guide future developments in intelligent PHM.
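A minimal simulation of the IG process named above, assuming the standard stationary parameterization with independent increments dY ~ IG(mu*dt, lambda*dt^2) (so paths are strictly increasing); the parameter values and failure threshold are illustrative.

```python
import numpy as np

# numpy's wald(mean, scale) draws inverse Gaussian variates with the given
# mean and shape parameter, which is exactly the IG increment distribution.
rng = np.random.default_rng(0)
mu, lam, dt, n_steps, n_paths = 1.0, 4.0, 0.1, 200, 500

inc = rng.wald(mu * dt, lam * dt**2, size=(n_paths, n_steps))
paths = np.cumsum(inc, axis=1)          # monotone degradation trajectories

# Lifetime = first-passage time of a failure threshold D, approximated here
# by counting time steps spent below D; E[T] is roughly D / mu.
D = 10.0
life = (paths < D).sum(axis=1) * dt
print(paths.mean(axis=0)[-1], life.mean())   # ~ mu*t at t=20, and ~ D/mu
```

The monotone increments are what distinguish the IG (and gamma) process from Wiener-process degradation models, which allow temporary "self-healing" decreases.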
Research in Human–Robot Interaction (HRI) has increasingly demonstrated how Augmented Reality (AR) enables better interactions between humans and robots. However, the design of AR interfaces for HRI remains insufficiently understood. Through a systematic literature review of 53 related papers, this research provides an overview of the emerging applications and trends for AR and identifies three types of AR interfaces: 1) remote modular interface, 2) proximal modular interface, and 3) proximal integral interface. The review identifies four pairs of subsystems that are frequently modularised or integrated, proposes three conceptual frameworks for HRI interfaces, and indicates potential future directions for construction-oriented and human-centric interaction design studies. Moreover, this research contributes to the theoretical exploration of interaction design. Future applications can adapt to various tasks by using the three proposed conceptual frameworks for interfaces, as well as by combining the four proposed subsystem pairs to suit specific task requirements in the construction sector.
In the global context of sustainable development, stakeholder concerns about the environmental impacts of infrastructure projects have become increasingly prominent and can significantly influence the progress of projects. However, integrating changing environmental opinions into project decision-making remains a challenge due to the complexity, highly dynamic nature, and volume of the data involved. Large Language Models (LLMs) have emerged as transformative tools for efficiently and rapidly analyzing this type of data, offering new opportunities for enhancing decision-making processes. This research proposes a framework utilizing LLMs for three major approaches in stakeholder opinion analysis: sentiment analysis, stance analysis, and topic modeling. The framework has been applied to the case of the Scarborough Gas Project in Western Australia. Smaller models, including Neural Networks (NNs), Support Vector Machines (SVMs), Random Forest, Logistic Regression, and BERT, were trained on GPT-3.5 embeddings and compared for performance in sentiment and stance analysis, with SVM achieving the highest accuracy rates of 83.90% and 87.55%, respectively. Integrating LLMs into topic modeling also significantly enhanced the interpretation of stakeholder environmental opinions by transforming keyword lists generated by traditional LDA methods into coherent narratives, reducing reliance on human interpretation, refining themes, and enabling a more comprehensive understanding of environmental, political, and legal issues. To our knowledge, this study presents the first unified framework that integrates LLM embeddings with external classifiers to address all three analytical tasks simultaneously. Central to the framework is the theoretically grounded Sentiment–Stance–Topic Matrix and Decision-Making Map, which systematically translate unstructured stakeholder input into prioritized engagement actions.
By categorizing sentiment, stance, and topic configurations into targeted strategies, the framework offers structured, data-driven guidance for project decision-makers. This approach bridges gaps in traditional stakeholder analysis and provides a transferable decision-support tool, enabling more inclusive, responsive project governance aligned with global sustainable development goals.
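A minimal sketch of the "LLM embeddings + external classifier" pattern described above. A real pipeline would embed each stakeholder comment with an LLM embedding model; here a TF-IDF vectorizer stands in for that step, and the four texts and stance labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stance-analysis data (invented); a real study would use labeled
# stakeholder submissions and LLM embedding vectors instead of TF-IDF.
texts = [
    "The project will destroy local marine habitats",
    "Gas exports bring jobs and regional investment",
    "Emissions from the plant are unacceptable",
    "Strong economic benefits for the state",
]
stances = ["oppose", "support", "oppose", "support"]

# Linear SVM on top of the text features, mirroring the paper's
# best-performing external classifier.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, stances)
print(clf.predict(["This harms the environment and our habitats"]))
```

The same classifier head can be retrained per task (sentiment vs. stance) while reusing one shared embedding step, which is the economy the unified framework exploits.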
As international construction projects continue to expand, construction enterprises are accumulating vast amounts of contract-related text data, making the effective management and extraction of knowledge from these dense texts essential to mitigate knowledge loss and ensure efficient contract management. The advent of large language models (LLMs) presents a promising avenue for enhancing contract knowledge management through intelligent systems. However, challenges such as hallucination, inflexibility, and lack of interpretability often diminish practitioners’ confidence in applying these models to real-world scenarios. This study seeks to develop a knowledge-based question-and-answer (Q&A) system for international construction contracts by integrating a knowledge graph (KG) with an LLM. Built upon a domain-specific KG derived from the 2022 edition of the Fédération Internationale des Ingénieurs-Conseils (FIDIC) Yellow Book and the NEC4 Conditions of Contract, the system leverages the LLM to conduct synergistic reasoning with the KG, enabling it to answer complex queries using both tacit knowledge and external sources. Experimental results demonstrate that the proposed approach markedly enhances the model’s performance in contract-knowledge Q&A tasks, achieving an average success rate exceeding 87% in terms of both accuracy and interpretability. This model provides a specialized Q&A system for international construction enterprises, facilitating flexible knowledge acquisition and task-oriented analysis in contract management, while also introducing a novel framework for integrating AI technologies into the management of international construction contracts.
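A toy illustration of how KG retrieval can ground an LLM's answers: the triples below stand in for the FIDIC/NEC4-derived graph, and their content is invented for illustration, not taken from either contract.

```python
# Toy knowledge graph as (subject, relation, object) triples; an answer
# generated by the LLM would be constrained to cite facts retrieved here,
# which is what curbs hallucination in KG-grounded Q&A.
KG = {
    ("Contractor", "shall_submit", "Programme"),
    ("Programme", "deadline_days", "28"),
    ("Engineer", "shall_review", "Programme"),
}

def query(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in KG if s == subject and r == relation]

print(query("Contractor", "shall_submit"))
```

Chaining such lookups (Contractor → Programme → deadline) is the simplest form of the multi-hop reasoning a KG-plus-LLM system performs over real contract clauses.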
Most existing predictive models remain demand-centric and fail to systematically incorporate supply-side risks such as import dependence, price volatility, and market concentration. Thus, this study proposes a SHAP-driven Weighted Rule Attention Mechanism (SWRAM) that explicitly embeds energy security risk factors, including import dependence, market concentration, and price volatility, into natural gas consumption forecasting. The model integrates explainable machine learning with a rule-constrained attention mechanism to enable both transparent feature attribution and robust predictive performance. Compared with the benchmark models, it demonstrates lower forecasting errors and more stable generalization, thereby validating the effectiveness of embedding SHAP-based rule-constrained weights within the attention mechanism. Using monthly data for China from 2012 to 2024, the results show that supply capacity and infrastructure remain the dominant drivers of natural gas consumption, while risk-related factors have gained importance since 2020, reflecting the impact of global supply-chain instability. Interaction analysis reveals a strong nonlinear coupling between domestic production and import dependence, indicating that insufficient domestic output amplifies exposure to external risks. Price volatility exerts an increasingly significant effect during global energy shocks, especially between 2021 and 2023. These findings suggest that natural gas consumption is shaped jointly by short-term demand cycles and long-term structural dependencies. The SWRAM framework provides an interpretable and policy-oriented tool for improving forecasting reliability and supporting data-driven energy security governance.
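An illustrative sketch, not the paper's exact formulation: one plausible way to constrain attention with SHAP values is to rescale the raw attention logits by normalized feature attributions before the softmax, so features flagged as risk-relevant (e.g. import dependence) gain weight. The numbers below are invented.

```python
import numpy as np

def shap_weighted_attention(scores, shap_importance):
    # Normalize SHAP magnitudes into rule weights summing to 1.
    shap_w = np.abs(shap_importance) / np.abs(shap_importance).sum()
    adjusted = scores * shap_w             # rule-constrained re-weighting
    e = np.exp(adjusted - adjusted.max())  # numerically stable softmax
    return e / e.sum()

scores = np.array([1.2, 0.4, 0.9, 0.1])   # raw attention logits (invented)
shap = np.array([0.30, 0.05, 0.50, 0.15]) # per-feature SHAP attributions
print(shap_weighted_attention(scores, shap))
```

Because the SHAP weights are computed from a fitted explainable model rather than learned freely, the attention distribution stays auditable, which is the transparency property the abstract emphasizes.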
Urban transportation systems exhibit structural spatial inequity, characterized by inadequate service in non-central areas and persistent first/last-mile challenges, disproportionately impacting vulnerable populations. While existing solutions like Community-Based Transportation (CBT), micromobility, ride-hailing, microtransit, and Autonomous Shuttle Buses (ASB) offer partial remedies, they often suffer from limitations such as scale constraints, cost barriers, technological immaturity, or profit-driven biases. To overcome these systemic shortcomings, this paper proposes and elaborates on the Community-Based Ultra-Flex Autonomous Mobility System (CBUAMS), a novel, integrated socio-technical framework explicitly designed to advance urban transportation spatial equity. Crucially, CBUAMS is envisioned not as an isolated system but as a complementary, synergistic component designed for seamless integration with existing urban transportation modes to enhance overall network efficiency and accessibility. CBUAMS is founded on three core pillars: community-based operations, ultra-flex service, and an adapted autonomous mobility system. It achieves equity goals through synergistic, multi-dimensional strategies: 1) Spatial restructuring via a “Three-Ring Model” (Core, Coordination, External Rings) that redefines the community as the basic unit for mobility planning and prioritizes local circulation; 2) Equity-oriented technological innovation, featuring a proposed Community-Based Autonomous Driving Classification (C-ADC) tailored for community contexts and costs, and low-cost, inclusive Vehicle-to-Everything (V2X) deployment strategies to ensure broad accessibility; and 3) Polycentric community governance through a “government-enterprise-community” tripartite model that fosters collaboration, responsiveness, and sustainability.
This research details the conceptual underpinnings, operational mechanisms, key technological components, and inherent engineering management challenges of CBUAMS. By offering a holistic, integrated approach that confronts systemic inequities, CBUAMS presents a promising new paradigm and practical blueprint with significant potential to redefine urban accessibility, enhance transportation equity, and contribute to a more sustainable and just urban future.
The rapid growth of shared e-scooters has presented new challenges for urban management, especially in cities newly introducing the service, where planning parking stations to prevent disorganized parking is time-consuming and costly. This paper proposes a cross-city transfer learning framework designed to rapidly predict rational layouts for fixed e-scooter parking stations in data-sparse new cities. The method utilizes operational data from 25 European cities and multi-source urban open-space data, constructing a transfer prediction model by discretizing cities into hexagonal grids and embedding spatial feature vectors. The results indicate that the effectiveness of group-based transfer learning is significantly influenced by geographic location, population size, and economic level, with the most effective transfers occurring between economically similar cities (an average F1-score of 0.801 for the super-high-income group). Additionally, our multi-dimensional city similarity matching strategy—based on socio-economic, point-of-interest (POI) distribution, and spatial structure features—demonstrates better stability and generalization, particularly in achieving the Top-3 similarity match. This research provides city planners and operators with data-driven insights to design shared e-scooter parking infrastructure efficiently.
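A minimal sketch of the hexagonal-grid discretization step, assuming projected (x, y) coordinates and a pointy-top axial grid; the paper's actual gridding scheme and cell size are not specified here, so this is a generic stand-in.

```python
import math

def hex_cell(x, y, size):
    """Map a projected point to axial hex coordinates (q, r) for hexes
    of the given size, so features can be aggregated per cell."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    # Round fractional axial coords via cube rounding (q + r + s = 0),
    # fixing the coordinate with the largest rounding error.
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return rq, rr

print(hex_cell(0.0, 0.0, size=100.0))   # (0, 0)
```

Each (q, r) cell then carries a spatial feature vector (POI counts, demographics, etc.), and the transfer model predicts per-cell station suitability in the target city.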
The Agile Earth Observation Satellite Scheduling Problem (AEOSSP) is a complex NP-hard challenge that involves selecting, sequencing, and timing observation tasks to maximize imaging profits while adhering to various constraints. In our study, we developed a mixed-integer programming model for AEOSSP, incorporating key constraints related to visible time windows and time dependencies. To tackle this, we propose an Evolutionary Adaptive Large Neighborhood Search Algorithm (evALNS) enhanced by Large Language Models (LLMs). Our work pioneers the application of LLMs to ALNS, automatically developing and evolving its critical destroy heuristics. However, a naive application of LLMs is insufficient for such a complex domain. We therefore introduce a novel Dual-Population Co-Evolutionary Computing Framework (DPEC) to bridge the LLM’s knowledge gap by synergizing LLM-generated heuristics with expert-designed ones. This co-evolution, guided by a Functional Natural Language Embedding (FNLE) strategy and customized prompts, significantly enhances the adaptability and efficiency of ALNS. Extensive numerical experiments demonstrate the superiority of the evALNS evolved under our framework, achieving an average profit improvement of 8.48% compared to the original ALNS with expert-designed destroy operators.
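A minimal ALNS skeleton on a toy task-selection (knapsack-style) problem, showing the destroy/repair loop and the adaptive operator weights that the abstract's LLM-evolved destroy heuristics would plug into; the real AEOSSP model with visible time windows is far richer, and all data below are invented.

```python
import random

random.seed(1)
profit = [8, 5, 9, 3, 6, 7]    # imaging profit per task (invented)
weight = [4, 3, 5, 2, 3, 4]    # resource cost per task (invented)
CAP = 10                       # shared resource budget

def value(sol):
    return sum(profit[i] for i in sol)

def repair(sol):
    # Greedy insertion by profit density until the budget is exhausted.
    load = sum(weight[i] for i in sol)
    for i in sorted(set(range(len(profit))) - sol,
                    key=lambda i: profit[i] / weight[i], reverse=True):
        if load + weight[i] <= CAP:
            sol.add(i); load += weight[i]
    return sol

destroys = [
    lambda s: {i for i in s if random.random() > 0.5},       # random removal
    lambda s: set(sorted(s, key=lambda i: profit[i])[1:]),   # worst removal
]
w = [1.0, 1.0]                 # adaptive operator weights
best = repair(set())
for _ in range(200):
    k = random.choices(range(2), weights=w)[0]   # roulette operator choice
    cand = repair(destroys[k](set(best)))
    if value(cand) > value(best):
        best, w[k] = cand, w[k] + 0.5            # reward successful operator
print(sorted(best), value(best))
```

In evALNS, the two hand-written destroy lambdas would be replaced by a co-evolving pool of expert-designed and LLM-generated destroy heuristics, with the same weight-adaptation loop arbitrating among them.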
Vehicle-to-grid (V2G) technology enables electric vehicles (EVs) to discharge stored energy back into the grid, improving grid stability and renewable energy integration. Despite optimistic market forecasts projecting growth to $62 billion by 2033, V2G adoption remains predominantly at the pilot stage. This commentary reviews the current status of V2G technology, examining technical maturity, economic feasibility, stakeholder perspectives, and regulatory environments across regions. Demonstration projects in Denmark, the UK, Japan, and China have confirmed technical viability and economic promise, yet widespread commercial deployment faces significant challenges. Major barriers include immature business models, battery degradation concerns from frequent charge cycles, lack of standardized communication protocols, and insufficient consumer participation due to limited incentives and low public awareness. The study summarizes successful global pilots, identifies critical obstacles to large-scale implementation, and highlights strategic recommendations. Broad adoption requires coordinated efforts in standardizing technologies, developing clear incentives and supportive policies, and enhancing consumer engagement through targeted demonstrations and education.
The rapid evolution of robotic and intelligent technologies is propelling the construction industry toward human–robot collaboration. Consequently, robots have transcended their role as mere instruments of labor to acquire the attributes of laborers, forming a human–robot hybrid workforce that jointly undertakes productive activities. The emergence of this new labor paradigm is poised to trigger unprecedented transformations in project division of labor, organizational structure, technological coordination, management models, and governance mechanisms. However, existing research lacks a systematic understanding of this transformation and its potential cascading effects. Therefore, this paper adopts a sociotechnical systems framework to analyze human–robot collaboration, examining the technological evolution of construction robots from tools to partners and the corresponding shifts in collaboration patterns. Furthermore, drawing on the Leavitt model, human–robot collaboration is conceptualized as a coupled configuration of “people–technology–task–structure.” This perspective enables an integrated analysis of how the technical and social attributes of human–robot collaboration reshape both the technical logic and managerial paradigms of engineering management. Finally, this study identifies ten key research topics reflecting the emerging characteristics of human–robot collaboration in the construction industry, aiming to illuminate future frontiers of this transformation in engineering management.