Fiber allocation in optical cable production is critical for optimizing production efficiency, product quality, and inventory management. However, factors like fiber length and storage time complicate this process, making heuristic optimization algorithms inadequate. To tackle these challenges, this paper proposes a new framework: the dueling-double-deep Q-network with twin state-value and action-advantage functions (D3QNTF). First, twin action-advantage and state-value functions are used to prevent overestimation of action values. Second, a method for random initialization of feasible solutions improves sample quality early in the optimization. Finally, a strict penalty for errors is added to the reward mechanism, making the agent more sensitive to illegal actions and better at avoiding them, which reduces decision errors. Experimental results show that the proposed method outperforms a range of benchmark algorithms, including greedy algorithms, genetic algorithms, deep Q-networks, double deep Q-networks, and standard dueling-double-deep Q-networks. The findings highlight the potential of the D3QNTF framework for fiber allocation in optical cable production.
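The abstract does not spell out how the twin streams are combined, so the following is only a minimal PyTorch sketch of a dueling Q-head with two value/advantage stream pairs, taking the elementwise minimum of the two Q estimates (as in clipped double Q-learning) to curb overestimation; the layer sizes and the min-combination rule are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class TwinDuelingQNet(nn.Module):
    """Dueling Q-head with twin value/advantage stream pairs (illustrative)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Two independent value heads and two independent advantage heads.
        self.values = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(2))
        self.advs = nn.ModuleList(nn.Linear(hidden, n_actions) for _ in range(2))

    def forward(self, s):
        h = self.trunk(s)
        qs = []
        for v_head, a_head in zip(self.values, self.advs):
            v, a = v_head(h), a_head(h)
            qs.append(v + a - a.mean(dim=1, keepdim=True))  # dueling aggregation
        # Elementwise minimum of the two Q estimates discourages overestimation.
        return torch.min(qs[0], qs[1])

q = TwinDuelingQNet(state_dim=16, n_actions=5)(torch.randn(4, 16))  # (4, 5) Q-values
```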
Digital manufacturing enterprises require high operational agility due to the intricate and dynamically changing nature of their tasks. Accurate and timely prediction of task bottlenecks is therefore crucial to enhancing overall efficiency, yet task complexity and dynamic business environments make such prediction challenging. This study introduces a novel approach that constructs a task network from extensive data accumulated within a digital enterprise to identify and depict the complex interrelations among tasks. Building on this, we develop a deep-learning-based Bottleneck Spatio-Temporal Graph Convolutional Network (BTGCN) model that considers the spatial features of the task network and the temporal data of task execution, integrating the strengths of the graph convolutional network (GCN) and the gated recurrent unit (GRU). The GCN effectively learns and represents the complex topology of the task network to capture spatial dependencies, while the GRU adapts to the dynamic changes in task data, accurately capturing temporal dependencies. Informed by the theory of constraints, the study applies the proposed BTGCN model to the prediction of task throughput bottlenecks in digital enterprises. Experimental results demonstrate that, despite certain limitations, the model accurately extracts spatio-temporal correlations from system data and offers advantages in bottleneck prediction over other benchmark models.
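As a rough illustration of how a GCN and a GRU can be combined for spatio-temporal prediction, the sketch below applies one graph convolution per time step and feeds the result to a GRU; the single-layer depth, dimensions, and readout are assumptions, and `a_hat` stands for a normalized adjacency of the task network.

```python
import torch
import torch.nn as nn

class GCNGRU(nn.Module):
    """One graph convolution per time step feeding a GRU (illustrative)."""
    def __init__(self, in_dim, gcn_dim, gru_dim, a_hat):
        super().__init__()
        self.a_hat = a_hat                        # normalized adjacency, shape (N, N)
        self.gcn = nn.Linear(in_dim, gcn_dim)     # graph convolution weight
        self.gru = nn.GRU(gcn_dim, gru_dim, batch_first=True)
        self.out = nn.Linear(gru_dim, 1)          # per-node bottleneck score

    def forward(self, x):                         # x: (N, T, in_dim) node series
        feats = torch.relu(self.gcn(x))           # feature transform + nonlinearity
        sp = torch.einsum('ij,jtd->itd', self.a_hat, feats)  # spatial aggregation
        h, _ = self.gru(sp)                       # temporal modeling per node
        return self.out(h[:, -1])                 # predict from the last time step

n = 6
a = (torch.rand(n, n) > 0.5).float()
a = torch.max(a, a.t()) + torch.eye(n)            # symmetric adjacency, self-loops
d = a.sum(1)
a_hat = a / torch.sqrt(torch.outer(d, d))         # symmetric normalization
scores = GCNGRU(4, 16, 32, a_hat)(torch.randn(n, 10, 4))   # shape (6, 1)
```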
This study proposes a comprehensive framework for the joint optimization of maintenance actions and safety stock policies for multi-specification small-batch (MSSB) production. The production system considered consists of multiple machines arranged in a series-parallel configuration. Given the multi-stage nature of MSSB production, a piecewise Gamma process is developed to model machine degradation under varying product specifications. A quality-based maintenance model is proposed to guide the scheduling of maintenance actions based on the observed product defect rate. The maintenance policy is optimized at two levels: at the machine level, the optimal quality of the produced products is determined, and at the system level, a threshold quality value is established to facilitate the opportunistic maintenance of machines. The relationship between the buffer stock and machine capacity is explicitly modeled to ensure production efficiency. A simulation-based multi-objective algorithm is employed to identify the optimal decision variable levels for the proposed maintenance policy. The numerical results demonstrate that the proposed method effectively balances the conflicting objectives of minimizing the expected operational costs and maximizing production efficiency.
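To make the degradation model concrete, here is a minimal numpy sketch of a piecewise Gamma process in which the shape/scale parameters switch with the product specification currently in production; all parameter values and the per-period time step are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

def piecewise_gamma_path(spec_schedule, params, dt=1.0):
    """Simulate degradation under a piecewise Gamma process (illustrative).

    spec_schedule: product specification index per period, e.g. [0, 0, 1, 2, ...]
    params: {spec: (alpha, beta)} shape/scale per specification (assumed values)
    """
    x, path = 0.0, []
    for spec in spec_schedule:
        alpha, beta = params[spec]
        x += rng.gamma(alpha * dt, beta)  # Gamma increments: monotone degradation
        path.append(x)
    return np.array(path)

# Example: three specifications with increasingly harsh degradation rates
path = piecewise_gamma_path([0] * 10 + [1] * 10 + [2] * 10,
                            {0: (0.5, 0.2), 1: (1.0, 0.3), 2: (1.5, 0.4)})
```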
Maintenance models based on delay-time have been extensively used in industry. However, some models still impose strong assumptions; e.g., most models pay no attention to determining who is responsible for performing maintenance actions. Even when models do consider this, decisions regarding the timing of these actions and the responsible party are typically optimized separately. This paper tackles the problem of jointly optimizing the timing of maintenance actions and the assignment of the inspection teams responsible for each task. Thus, we propose a hybrid policy that combines inspections and age-based replacement. Due to the complexity of the problem, we propose an Adaptive Simulated Annealing algorithm, which improves solutions by up to 4.4% compared with a general “black-box” algorithm. Our numerical results indicate that neglecting the assignment of inspection teams can lead to worse solutions. Finally, we developed a user-friendly online app for assessing the cost-rate of the policy.
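Since the adaptive variant is not detailed in the abstract, the following is only a generic simulated annealing skeleton over a joint solution (inspection interval, team assignment); the toy cost function, the neighborhood, and the geometric cooling schedule are all assumptions standing in for the paper's adaptive scheme.

```python
import math, random

random.seed(1)

def simulated_annealing(cost, init, neighbor, t0=1.0, cooling=0.95, iters=2000):
    """Generic SA over a joint (inspection interval, team assignment) solution."""
    x, fx, t = init, cost(init), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x)
        fy = cost(y)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling; adaptive variants tune this online
    return best, fbest

# Toy usage: x = (inspection interval, team index), with team-dependent cost rates
team_rates = [1.0, 0.8, 1.3]
sol, val = simulated_annealing(
    cost=lambda x: team_rates[x[1]] * (5.0 / x[0] + 0.1 * x[0]),
    init=(10.0, 0),
    neighbor=lambda x: (max(1.0, x[0] + random.uniform(-1.0, 1.0)),
                        random.randrange(3)))
```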
This paper focuses on the economic resilience aspects of the financialization of infrastructure projects, with an emphasis on enhancing market dynamics and risk regulation. We examined 39 infrastructure REITs listed in China between June 2021 and June 2024. Utilizing an economic resilience evaluation model to assess the resistance and recovery capacities of these infrastructure REITs, we incorporate seven investor heterogeneity measures. A geographic detector model is employed to analyze divergence, identify key determinant factors, pinpoint risk zones, and investigate the interactions among these measures within the context of PPP-REITs and DI-REITs. The empirical results show that the investor ratio and expected investment tenure are critical to the construction of economic resilience indices for infrastructure REITs. Moreover, interactions among factors significantly influence the divergence in economic resilience. Our findings reveal a “barrel effect” of investor heterogeneity in infrastructure project financial products, indicating a consistency between economic resilience and investor heterogeneity. By integrating the investor heterogeneity index into the resilience evaluation framework of infrastructure REITs, this study offers valuable insights into the risk-resistance capacity of infrastructure financial products and the enhancement of economic resilience in these projects.
Unethical behaviors among contractors are prevalent in engineering management activities within the construction industry, significantly affecting project performance, public safety, and the industry’s reputation. Despite the urgent need to enhance the ethical performance of contractor managers, current research lacks a theoretical framework to systematically categorize and characterize these unethical behaviors. This study fills this gap by conducting semi-structured interviews with 20 experienced construction project managers in China, followed by a qualitative content analysis. The findings indicate that contractor managers’ unethical behaviors can be organized into a framework comprising three levels, five dimensions, and 18 themes. The most common behaviors identified include “construction disturbance,” “qualification rental,” and “deception in settlement.” Additionally, the study explores the causes of these unethical behaviors, revealing power and responsibility imbalances within the supply chain and the lack of moral competencies among contractor managers in the construction industry. This study offers a theoretical taxonomy framework for contractor managers to identify and assess ethical performance in practice and provides a scientific basis for authorities to establish ethical guidelines and enhance ethical management practices in the construction industry.
To more accurately estimate and control the magnitude of the shield tail clearance, a hybrid deep learning model integrating an online physics-informed deep neural network (online PDNN) and the non-dominated sorting genetic algorithm-II (NSGA-II) is developed. The online PDNN is a deep learning framework constrained by the underlying physical mechanism of shield tail clearance measurements, and it is used to forecast the shield tail clearance in tunnel boring machines (TBMs). The NSGA-II is employed to conduct the multi-objective optimization (MOO) process for shield tail clearance. The proposed method is validated on a tunnel case in China. Experimental results reveal that: (1) in comparison with several state-of-the-art algorithms, the online PDNN model demonstrates superior capability in predicting the shield tail clearance at the top, upper-left, and upper-right positions, with R2 scores of 0.93, 0.90, and 0.90, respectively; (2) the MOO achieves a comprehensive optimal solution, with the overall improvement percentage of shield tail clearance reaching 30.87% and a hypervolume of 32 under the 20% constraint condition, surpassing the average performance of other MOO frameworks by 23% and 5.48%, respectively. The novelty of this research lies in coupling the constructed physical constraints and the online update mechanism into a causal-analysis-oriented data-driven model, which not only enhances the model’s performance and interpretability but also realizes control of the shield tail clearance through the integration of NSGA-II.
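The paper's actual physical constraints are not given in the abstract, so the sketch below only illustrates the general physics-informed pattern: a standard regression loss plus a penalty on the residual of a known relation g(y) = 0 among the three clearance measurements. The network shape, the weight `lam`, and in particular the `physics_residual` relation are hypothetical placeholders, not the paper's constraint.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 3))  # 3 clearances

def physics_residual(y):
    """Hypothetical relation g(y) = 0 among top/upper-left/upper-right clearances;
    a stand-in for the paper's actual measurement-geometry constraint."""
    return y[:, 1] + y[:, 2] - 2.0 * y[:, 0]

def pdnn_loss(x, y_true, lam=0.1):
    y_pred = net(x)
    data_term = torch.mean((y_pred - y_true) ** 2)         # fit the measurements
    phys_term = torch.mean(physics_residual(y_pred) ** 2)  # penalize violations
    return data_term + lam * phys_term

loss = pdnn_loss(torch.randn(32, 12), torch.randn(32, 3))
loss.backward()   # online updates would repeat this step as new data arrive
```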
Robots present an innovative solution to the construction industry’s challenges, including safety concerns, skilled worker shortages, and productivity issues. Successfully collaborating with robots requires new competencies to ensure safety, smooth interaction, and accelerated adoption of robotic technologies. However, limited research exists on the specific competencies needed for human–robot collaboration in construction. Moreover, the perspectives of construction industry professionals on these competencies remain underexplored. This study examines the perceptions of construction industry professionals regarding the knowledge, skills, and abilities necessary for the effective implementation of human–robot collaboration in construction. A two-round Delphi survey was conducted with expert panel members from the construction industry to assess their views on the competencies for human–robot collaboration. The results reveal that the most critical competencies include knowledge areas such as human–robot interface, construction robot applications, human–robot collaboration safety and standards, task planning, and robot control systems; skills such as task planning, safety management, technical expertise, human–robot interface, and communication; and abilities such as safety awareness, continuous learning, problem-solving, critical thinking, and spatial awareness. This study contributes to knowledge by identifying the most significant competencies for human–robot collaboration in construction and highlighting their relative importance. These competencies could inform the design of educational and training programs and facilitate the integration of robotic technologies in construction. The findings also provide a foundation for future research to further explore and enhance these competencies, ultimately supporting safer, more efficient, and more productive construction practices.
Modular construction (MC) is a sound strategy to alleviate global issues such as the housing crisis, labor shortages, and stagnant productivity. Project managers aspire to achieve Just-in-Time (JIT) delivery in their MC logistics. However, these efforts often fall short without a dedicated Estimated Time of Arrival (ETA) model. This study aims to bridge this gap by developing an MC-oriented ETA model. It does so by first identifying critical factors influencing ETA accuracy in general logistics and then developing an ETA model prototype, which is calibrated using simulations and data collected from Internet of Things (IoT) devices deployed in real-life MC projects in Hong Kong, China. Validated through a cross-border MC project in China’s Greater Bay Area, the MC-oriented ETA model achieved 90.6% prediction accuracy (±10 min), reduced transportation delays by 37.5%, and cut daily planning time from 46.75 to 18.75 min. The ETA model is expected to support predictive planning of MC logistics delivery in the future. Ultimately, it can lead to the development of smart logistics planning and control systems that expedite MC housing delivery and alleviate urban congestion in high-density cities, offering valuable insights for policymakers, construction stakeholders, and supply chain managers.
Artificial Intelligence (AI) is playing an increasingly pivotal role in New Product Development (NPD) project management. We propose a comprehensive framework to explore the impact of human–AI collaboration on organizational knowledge diffusion. First, we develop a knowledge diffusion model based on continuous human–AI interactions, and we use the Agent-Based Modeling (ABM) method to simulate the diffusion process within the collaborative team and assess diffusion rates and efficiency based on knowledge levels. Second, we examine the interdependencies among members under different roles of AI, integrating AI cognitive capabilities, human–AI cognitive trust, and task interdependencies, and build a tie strength measurement model from the Social Network Analysis (SNA) perspective. Third, an entropy-based model is introduced to measure AI’s cognitive capability, accounting for project complexity and AI-generated solution uncertainty. We also establish a dynamic cognitive trust model that incorporates both the dynamic nature of trust in human–AI interactions and AI’s cognitive capability. Task interdependencies are assessed through a multi-dimensional activity network, and visualized by the Dependency Structure Matrix (DSM) method. Finally, an industrial example is provided to demonstrate the proposed model. Results show that organizational knowledge diffusion performs best when AI acts both as a collaborator and a tool. Moreover, this paper provides new insights, including how trust and task interdependencies significantly impact knowledge diffusion in human–AI collaborative organizations.
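As a toy illustration of the agent-based diffusion component, the numpy sketch below lets each member absorb knowledge from better-informed teammates in proportion to tie strength; the random tie-strength matrix, the learning rate, and the update rule are assumptions standing in for the paper's SNA-based tie strength and trust models.

```python
import numpy as np

rng = np.random.default_rng(42)
n, steps, eta = 20, 50, 0.1
k = rng.uniform(0, 1, n)              # initial knowledge levels
w = rng.uniform(0, 1, (n, n))         # tie strengths (trust x interdependency, assumed)
np.fill_diagonal(w, 0)

history = [k.copy()]
for _ in range(steps):
    # Each agent absorbs knowledge from better-informed peers, scaled by tie strength
    gap = np.maximum(k[None, :] - k[:, None], 0.0)   # positive knowledge gaps only
    k = k + eta * (w * gap).mean(axis=1)
    history.append(k.copy())

diffusion_rate = np.diff([h.mean() for h in history])  # mean knowledge growth per step
```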
This paper examines the intricate issue of Optimal Power Flow (OPF) optimization concerning the incorporation of renewable energy sources (RESs) into power networks. We present the Boosting Circulatory System Based Optimization (BCSBO) method, a novel modification of the original Circulatory System Based Optimization (CSBO) algorithm. The BCSBO algorithm introduces new movement techniques that markedly improve its exploration and exploitation capabilities, making it an effective instrument for addressing intricate optimization challenges. The proposed technique is thoroughly assessed using five different objective functions on the IEEE 30-bus and IEEE 118-bus test systems. The performance of the BCSBO algorithm is compared against several well-known optimization approaches, including CSBO, Moth-Flame Optimization (MFO), Particle Swarm Optimization (PSO), Thermal Exchange Optimization (TEO), and Elephant Herding Optimization (EHO). In the first case, which minimizes the fuel cost of the thermal power generators, the BCSBO obtains a total cost of 781.8610, lower than that of the other algorithms. In the second case, which minimizes the total generating cost while imposing a fixed carbon tax on thermal units, the BCSBO yields a total cost of 810.7654. In the third case, which minimizes the total cost while considering prohibited operating zones of thermal units with RESs, the BCSBO obtains a total cost of 781.9315. In case 4, with network losses included, the total cost obtained by the BCSBO is 880.4864. In case 5, which considers voltage deviation, the total cost is 961.4354. For the IEEE 118-bus test system, the BCSBO obtains a total cost of 103,415.9315. All of these values are lower than those obtained by the other methods addressed in this paper. The findings highlight the BCSBO algorithm’s potential as a crucial tool for enhancing power systems with renewable energies.
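The BCSBO's specific movement operators are not described in the abstract; as a stand-in, the sketch below shows how one of the named benchmarks, a plain PSO, would minimize a toy quadratic fuel-cost function. The cost coefficients, generator limits, and PSO hyperparameters are all illustrative assumptions, and this is not the BCSBO itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuel_cost(p):
    """Toy quadratic cost for 3 generators (coefficients assumed)."""
    a = np.array([0.01, 0.02, 0.015]); b = np.array([2.0, 1.8, 2.2])
    return (a * p**2 + b * p).sum(axis=-1) + 10.0

lo, hi = 10.0, 100.0                    # generator output limits (assumed)
n, dim, iters = 30, 3, 200
x = rng.uniform(lo, hi, (n, dim)); v = np.zeros_like(x)
pbest, pval = x.copy(), fuel_cost(x)    # personal bests
gbest = pbest[pval.argmin()]            # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)          # enforce generator limits
    f = fuel_cost(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    gbest = pbest[pval.argmin()]
```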
Accurate crude oil price forecasting is critical in energy economics and energy engineering, as it informs economic policy-making and investment decisions. The emergence of big data brings both new opportunities and challenges for crude oil price forecasting. This paper systematically reviews recent advances in crude oil price forecasting in the context of big data, with a focus on the evolution of data types, predictors, and modeling techniques. In particular, it analyzes key forecasting approaches, including conventional and data-driven forecasting models, while emphasizing the growing role of emerging data sources. Promising directions for future research include the integration of multi-source data, the reconstruction of high-frequency supply and demand indicators, the development of hybrid modeling approaches, the enhancement of model interpretability, and the evaluation of the economic value of forecasting outcomes.
Accurate traffic state estimations (TSEs) within road networks are crucial for enhancing intelligent transportation systems and developing effective traffic management strategies. Traditional TSE methods often assume homogeneous traffic, where all vehicles are considered identical, which does not accurately reflect the complexities of real traffic conditions that often exhibit heterogeneous characteristics. In this study, we address the limitations of conventional models by introducing a novel TSE model designed for precise estimations of heterogeneous traffic flows. We develop a comprehensive traffic feature index system tailored for heterogeneous traffic that includes four elements: basic traffic parameters, heterogeneous vehicle speeds, heterogeneous vehicle flows, and mixed flow rates. This system aids in capturing the unique traffic characteristics of different vehicle types. Our proposed high-dimensional fuzzy TSE model, termed HiF-TSE, integrates three main processes: feature selection, which eliminates redundant traffic features using Spearman correlation coefficients; dimension reduction, which utilizes the t-distributed stochastic neighbor embedding (t-SNE) machine learning algorithm to reduce high-dimensional traffic feature data; and FCM clustering, which applies the fuzzy C-means algorithm to classify the simplified data into distinct clusters. The HiF-TSE model significantly reduces computational demands and enhances efficiency in TSE processing. We validate our model through a real-world case study, demonstrating its ability to adapt to variations in vehicle type compositions within heterogeneous traffic and accurately represent the actual traffic state.
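A minimal sketch of the three-stage HiF-TSE-style pipeline on synthetic data: Spearman-based redundancy filtering, t-SNE reduction, and a compact fuzzy C-means reimplementation (written inline to keep the example dependency-light). The 0.9 correlation threshold, the choice of three clusters, and the fuzzifier m = 2 are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))          # stand-in for the traffic feature matrix

# 1) Feature selection: drop features highly rank-correlated with a kept one
rho = np.abs(spearmanr(X)[0])
keep = []
for i in range(X.shape[1]):
    if all(rho[i, j] <= 0.9 for j in keep):
        keep.append(i)
X = X[:, keep]

# 2) Dimension reduction with t-SNE
Z = TSNE(n_components=2, random_state=0).fit_transform(X)

# 3) Fuzzy C-means clustering (compact reimplementation, fuzzifier m = 2)
def fcm(Z, c=3, m=2.0, iters=100):
    u = rng.dirichlet(np.ones(c), size=len(Z))          # random memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ Z) / um.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(Z[:, None] - centers[None], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # membership update
    return u.argmax(axis=1), centers

labels, centers = fcm(Z)
```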
With ongoing global industrialization, the demand for refined oil products, particularly in developing countries, is increasing significantly. Shipping companies typically transport refined oil from a primary refinery to multiple oil depots, addressing various demand tasks. To manage uncertain refined oil demand, shipping companies use both self-owned tankers and outsourced tankers, including time-chartered and voyage-chartered tankers. Under a time charter, the shipping company pays for the tanker over a specified period, whereas under a voyage charter it pays per voyage. This paper develops a nonlinear programming model to optimize fleet deployment, considering transportation costs and penalty costs for capacity loss over a planning period. Additionally, the model is extended to allow flexible charter types, meaning that time-chartered and voyage-chartered tankers are interchangeable based on shipping demands. A heuristic algorithm based on tabu search is designed to solve the proposed models, and four search operators are incorporated to enhance algorithm efficiency. The models and algorithms are validated using a real tanker fleet. Numerical experiments demonstrate that the improved tabu search algorithm efficiently attains the exact solutions on small-scale instances. The case study indicates that the shipping company prefers waiting for tasks to avoid ship delay penalties and provide high-quality services. Moreover, the flexible charter strategy can reduce shipping costs by 16.34%. These findings offer management insights for determining charter contracts for ship fleets.
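The paper's four problem-specific search operators are not detailed in the abstract, so the following is only a bare-bones tabu search skeleton with a recency-based tabu list and an aspiration criterion, applied to a toy charter-type assignment; the toy costs and the single-flip neighborhood are assumptions.

```python
import random

random.seed(0)

def tabu_search(cost, init, neighbors, tabu_len=10, iters=500):
    """Bare-bones tabu search; `neighbors(x)` yields (move_key, candidate) pairs."""
    x, fx = init, cost(init)
    best, fbest = x, fx
    tabu = []
    for _ in range(iters):
        cands = []
        for key, y in neighbors(x):
            fy = cost(y)
            if key not in tabu or fy < fbest:   # tabu unless aspiration is met
                cands.append((fy, key, y))
        if not cands:
            break
        fx, key, x = min(cands)                 # best admissible move
        tabu.append(key)
        if len(tabu) > tabu_len:
            tabu.pop(0)                         # recency-based tabu list
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Toy usage: assign 8 tasks to charter types (0 = time charter, 1 = voyage charter)
rates = [(3.0, 5.0)] * 8                        # per-task (time, voyage) costs
def cost(x):                                    # penalty if too many time charters
    return sum(rates[i][t] for i, t in enumerate(x)) + 4.0 * (x.count(0) > 5)
def neighbors(x):
    for i in range(len(x)):                     # single-flip neighborhood
        y = list(x); y[i] = 1 - y[i]
        yield i, tuple(y)

sol, val = tabu_search(cost, (0,) * 8, neighbors)
```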
The demand-adaptive system (DAS) has been recognized as a promising transit mode for demand with high fluctuations. In this paper, we optimize the routes and request selection for a DAS with multiple service routes. Most existing studies on DAS focus on optimizing single-route systems, where each area is exclusively served by one route and requests are heuristically pre-assigned. In contrast, our study addresses a more generalized routing and request selection problem for a DAS with multiple service routes. This problem jointly assigns requests to the service routes and determines the resulting routes while considering the pickup and delivery locations and the reserved boarding time for each request. A mixed-integer linear programming (MILP) model is developed to minimize the sum of bus travel time cost, passenger in-vehicle and waiting time costs, and request rejection penalties. A tailored adaptive large neighborhood search (ALNS) algorithm solves this optimization model efficiently. Numerical experiments show that, under the same optimality conditions, the proposed algorithm outperforms the exact method implemented in Gurobi in terms of solution quality and computation time. The ALNS algorithm also achieves cost reductions of up to 50% compared with prevailing benchmark metaheuristics. Moreover, the multi-route DAS in this paper yields a lower rejection rate and objective value than the single-route systems examined in previous studies.
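The essence of ALNS is the adaptive roulette selection of destroy/repair operators. The skeleton below shows that loop with a simple decaying credit scheme; the operator set, acceptance rule, weight floor, and toy request-selection problem are assumptions, not the tailored design in the paper.

```python
import math, random

random.seed(0)

def alns(cost, init, destroys, repairs, iters=1000, decay=0.9, reward=1.0):
    """Skeleton ALNS: roulette-select operators, adapt their weights by success."""
    x, fx = init, cost(init)
    best, fbest = x, fx
    wd, wr = [1.0] * len(destroys), [1.0] * len(repairs)
    for _ in range(iters):
        i = random.choices(range(len(destroys)), weights=wd)[0]
        j = random.choices(range(len(repairs)), weights=wr)[0]
        y = repairs[j](destroys[i](x))
        fy = cost(y)
        if fy < fx or random.random() < math.exp(fx - fy):     # noisy acceptance
            x, fx = y, fy
        gain = reward if fy < fbest else 0.0
        wd[i] = max(0.05, decay * wd[i] + (1 - decay) * gain)  # operator credit
        wr[j] = max(0.05, decay * wr[j] + (1 - decay) * gain)
        if fy < fbest:
            best, fbest = y, fy
    return best, fbest

# Toy usage: choose which requests to serve under a bus capacity of 10 units
vals, sizes = [4, 7, 5, 3, 6], [3, 5, 4, 2, 4]
def cost(x):            # unserved value acts as the rejection penalty
    load = sum(sizes[i] for i in x)
    return sum(v for i, v in enumerate(vals) if i not in x) + 100 * max(0, load - 10)
def drop_one(x):
    return tuple(i for i in x if i != random.choice(x)) if x else x
def add_one(x):
    rest = [i for i in range(len(vals)) if i not in x]
    return tuple(sorted(x + (random.choice(rest),))) if rest else x

best, val = alns(cost, (0, 1), [drop_one], [add_one])
```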
This study investigates a truck scheduling problem in open-pit mines, focusing on optimizing truck transportation and commercial coal production. Autonomous dump trucks are essential transportation tools in these mines; they transport the raw coal and rock excavated by electric shovels to the unloading stations. Raw coal with different calorific values is blended to produce commercial coal for sale, a process that requires maintaining a calorific balance between the excavated raw coal and the blended commercial coal. We formulate a mixed-integer linear programming model for the truck scheduling problem whose objective is to minimize the total working time of all trucks. To solve the proposed model efficiently on large-scale instances, a branch-and-price based exact algorithm is devised. Numerical experiments based on real data from an open-pit mine in Holingol, Inner Mongolia, China, validate the efficiency of the proposed algorithm. The results show that, compared with CPLEX, the proposed algorithm attains a zero optimality gap, while CPLEX requires 2.46 times the solution time of the proposed algorithm. Moreover, sensitivity analyses are conducted to derive managerial insights. For example, open-pit mine managers should carefully consider truck fleet deployment, including the number and capacity of trucks. Additionally, the spatial distribution of unloading stations and electric shovels is crucial for enhancing transportation efficiency in open-pit mines.
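Branch-and-price alternates between a restricted master LP and a pricing problem that proposes improving columns. The self-contained sketch below shows that loop on the classic cutting-stock problem rather than the paper's truck scheduling model: the master is solved with scipy's linprog, and pricing is an unbounded knapsack over the dual prices. All problem data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

W, sizes, demand = 10, np.array([3, 4, 5]), np.array([30, 20, 15])
cols = [np.eye(len(sizes))[i] * (W // s) for i, s in enumerate(sizes)]  # seed patterns

def price(duals):
    """Pricing: unbounded knapsack over dual prices (DP over roll width)."""
    val = np.zeros(W + 1)
    pat = np.zeros((W + 1, len(sizes)))
    for w in range(1, W + 1):
        for i, s in enumerate(sizes):
            if s <= w and val[w - s] + duals[i] > val[w]:
                val[w] = val[w - s] + duals[i]
                pat[w] = pat[w - s]
                pat[w, i] += 1
    return val[W], pat[W]

while True:
    A = np.column_stack(cols)
    # Restricted master: min total rolls s.t. demand coverage (>= becomes -A x <= -d)
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals              # dual prices of the demand rows
    value, pattern = price(duals)
    if value <= 1 + 1e-9:                       # no negative-reduced-cost column left
        break
    cols.append(pattern)                        # add the improving column
```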
Rapid urbanization is reshaping mobility demands, calling for advanced intelligence and management capabilities in urban transport systems. Generative Artificial Intelligence (AI) presents new opportunities to enhance the efficiency and responsiveness of Intelligent Transportation Systems (ITS). This paper reviews the existing literature in transportation and AI to investigate the core technologies of Artificial Intelligence Generated Content (AIGC), including dialog and reasoning, prediction and decision making, and multimodal generation. Applications are summarized across the four primary ITS subsystems (road, vehicle, traveler, and management subsystems). By examining research progress in cutting-edge technologies such as data generation, assisted driving decision-making, and intelligent traffic prediction, the paper finds that AIGC has become an important driver of ITS progress and development. Meanwhile, this paper explores the potential challenges that AIGC poses to human society from the perspectives of safety risks of fake content, human-machine relationships, social cognition and emotional trust, and related ethical issues, providing insights for the development of safer and more sustainable ITS in the future.
In recent years, global geopolitical turmoil, including events like the US–China trade war and the Russia–Ukraine conflict, has significantly reshaped the panorama of the global supply chain (SC), with the chip SC standing out as particularly affected. Chips form the backbone of all electronic industries; therefore, there is an urgent need to reassess SC security within the chip sector. In this study, we begin by conducting an LDA analysis on 320 relevant news reports to develop a thematic model for the Chinese chip supply chain (CCSC). This approach helps identify the key risk landscape, ultimately distilling 10 major risk factors and four mitigation strategies. Subsequently, we propose an improved multi-layer sequential Bayesian Network (BN) model to assess and quantify risks within the CCSC. Lastly, we utilize sensitivity analysis and propagation analysis to examine the impact of risk factors on the ultimate risk of SC disruption and to define the resilience and importance of the risk nodes. Our research offers fresh theoretical insight into utilizing BN and LDA methods for modeling SC disruption risk. Furthermore, the study reveals that talent shortage, patent infringement, and insufficient Research and Development (R&D) investment are the three most significant factors contributing to the risk of disruptions in the CCSC. These factors are not only the most critical but also the least resilient, underscoring that enhancing innovation capabilities should be the foremost priority for strengthening the CCSC. Increasing government subsidies is the most effective mitigation measure, providing greater financial support for enterprises, boosting their innovation capabilities and competitiveness, and attracting more investors to the industry.
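For the LDA step, a minimal sklearn sketch on four stand-in snippets (the actual corpus is the 320 news reports); the number of topics and the toy documents are assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # stand-ins for the 320 news reports
    "export controls restrict chip equipment sales",
    "talent shortage slows semiconductor research hiring",
    "government subsidies support domestic chip fabs",
    "patent infringement lawsuit hits chip designer",
]
vec = CountVectorizer(stop_words="english")
tf = vec.fit_transform(docs)                    # term-frequency matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(tf)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]             # four highest-weight terms
    print(f"topic {k}:", [terms[i] for i in top])
```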
With the proliferation of the Internet, advancements in big data technology, and the widespread adoption of cloud computing, numerous digital trading platforms have emerged across various industries, revolutionizing the marketing of scientific and technological (S&T) achievements by enabling their listing as specialized commodities. This has significantly accelerated the commercialization of S&T achievements, fostering a more efficient and dynamic market for technological innovation. In this paper, we construct a platform-based supply chain system consisting of the platform, S&T achievement providers, and demanders. The platform operates under one of two models, demand-side charging and supply-side charging, and we build the corresponding game-theoretic models for each. We then compare the equilibrium results under the two charging models. We find that the platform’s preference between the charging models is influenced by the value of the S&T achievements and the strength of the platform’s attractiveness effect. Interestingly, providers of S&T achievements consistently prefer the demand-side charging model, as it allows them to achieve higher profits. In addition, the demand-side charging model is more profitable for the platform-based supply chain system as a whole. This work enriches the operations management theory of digital platforms and guides the business practice of S&T achievement transformation platforms.
Locating the source of diffusion in complex networks is a critical and challenging problem, exemplified by tasks such as identifying the origin of power grid faults or detecting the source of computer viruses. In most existing methods, the accuracy of source localization depends heavily on the number of infected nodes; when few nodes are infected, accuracy is limited, which makes identifying the source in the early stages of diffusion a major challenge. This article presents a novel deep learning-based model for source localization under limited-information conditions, denoted GCN-MSL (Graph Convolutional Networks and network Monitor-based Source Localization model). The GCN-MSL model is less affected by the number of infected nodes and enables efficient identification of the diffusion source in the early stages. First, pre-deployed monitor nodes, controlled by the network administrator, continuously report real-time data, including node states and the arrival times of anomalous signals. These data, together with the network topology, are used to construct node features. Graph convolutional networks are employed to aggregate information from multiple-order neighbors, thereby forming comprehensive node representations. Subsequently, the model is trained with the true source labeled as the target, allowing it to distinguish the source node from other nodes within the network. Once trained, the model can be applied to locate hidden sources in other diffusion networks. Experimental results across multiple data sets demonstrate the superiority of the GCN-MSL model, especially in the early stages of diffusion, where it significantly enhances both the accuracy and efficiency of source localization. Additionally, the GCN-MSL model exhibits strong robustness and adaptability to variations in the external parameters of monitor nodes. The proposed method holds significant value for the timely detection of anomalous signals within complex networks and for preventing the spread of harmful information.
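To illustrate how monitor-derived node features can be aggregated over the topology, here is a numpy sketch of two symmetric-normalized propagation layers producing per-node source scores; the graph, monitor placement, three-feature encoding, and the random (untrained) weights are all assumptions, whereas in the paper the weights would be trained against true-source labels.

```python
import numpy as np
import networkx as nx

g = nx.karate_club_graph()                  # stand-in network topology
n = g.number_of_nodes()
A = nx.to_numpy_array(g) + np.eye(n)        # adjacency with self-loops
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))         # symmetric normalization

# Node features from monitors: [is_monitor, observed state, signal arrival time]
rng = np.random.default_rng(0)
monitors = rng.choice(n, size=5, replace=False)
X = np.zeros((n, 3))
X[monitors, 0] = 1.0
X[monitors, 1] = rng.integers(0, 2, size=5)    # anomalous signal seen at monitor?
X[monitors, 2] = rng.uniform(0.0, 1.0, size=5) # normalized arrival times

# Two propagation layers aggregate information from multi-order neighbors
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))
H = np.maximum(A_hat @ X @ W1, 0.0)            # layer 1 with ReLU
scores = (A_hat @ H @ W2).ravel()              # per-node source scores (untrained)
print(int(scores.argmax()))                    # the node predicted as the source
```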
An online patient support community (OPSC) facilitates information exchange and emotional support among patients with shared health conditions. While the benefits of OPSCs for patients’ physical and mental well-being are well documented, the antecedent factors that drive patients to provide informational and emotional support remain underexplored. This study extends social identity theory to investigate how professional capital, comprising human capital, decisional capital, and social capital, affects the provision of social support in OPSCs. The study also examines how identity rights, representing community status and privileges, moderate this relationship. An empirical study was conducted on one of the largest diabetes OPSCs in China, analyzing 227,901 text interactions from 5,977 members. Using text mining analytics (i.e., text classifiers and latent Dirichlet allocation models), we measured professional capital and examined its influence on social support provision. The results reveal that professional capital significantly affects the provision of both informational and emotional support. Specifically, the human and decisional dimensions of professional capital exert a stronger influence than the social dimension. Additionally, identity rights positively moderate the effects of professional capital on social support provision, showing that members with higher status and privileges contribute more actively. These findings have important implications for both management theory and practice.
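The hypothesized moderation can be illustrated with a standard interaction-term regression. The sketch below uses synthetic data with an injected positive interaction; the variable names, effect sizes, and OLS specification are assumptions, not the paper's estimation strategy.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "prof_capital": rng.normal(size=n),          # e.g., text-mined capital score
    "identity_rights": rng.integers(0, 2, n),    # high-status member indicator
})
# Synthetic outcome with a positive interaction, mirroring the hypothesized moderation
df["support"] = (0.5 * df.prof_capital + 0.3 * df.identity_rights
                 + 0.4 * df.prof_capital * df.identity_rights
                 + rng.normal(size=n))

model = smf.ols("support ~ prof_capital * identity_rights", data=df).fit()
print(model.params)   # the interaction term captures the moderating effect
```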
Knowledge transfer among New Product Development (NPD) projects is beneficial for reducing project duration and promoting technological innovation. To support effective knowledge transfer, we propose a clustering method for NPD projects based on similarity, integrating both structural and attribute similarities. First, to measure project structural similarity, we analyze both direct and indirect knowledge transfer relationships among project activities using the dependency structure matrix (DSM). Second, we measure project attribute similarity by calculating knowledge increments derived from sequential and iterative development processes. Finally, we apply a hierarchical clustering method to group similar projects, forming different programs. An industrial example is provided to demonstrate the proposed model. The results show that clustering projects into programs can enhance multi-project management by reducing coordination time for knowledge transfer within each program. Additionally, this approach provides some new insights, including quantifying project similarity based on knowledge transfer and understanding the influence of structural and attribute similarities on multi-project management.
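A minimal scipy sketch of the final clustering step: a weighted combination of structural and attribute similarity is converted to a distance matrix and fed to average-linkage hierarchical clustering. The random similarity matrices, the 0.6 weight, and the three-program cut are assumptions; in the paper the similarities come from the DSM and knowledge-increment calculations.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_projects = 8
s_struct = rng.uniform(0, 1, (n_projects, n_projects))   # stand-in DSM-based similarity
s_attr = rng.uniform(0, 1, (n_projects, n_projects))     # stand-in attribute similarity
for s in (s_struct, s_attr):
    s[:] = (s + s.T) / 2                                 # enforce symmetry
    np.fill_diagonal(s, 1.0)

alpha = 0.6                                   # weight between similarities (assumed)
sim = alpha * s_struct + (1 - alpha) * s_attr
dist = squareform(1.0 - sim, checks=False)    # condensed distance matrix
programs = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
```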
The acceleration of digitalization and networking in the global landscape has prompted organizations to connect into innovation communities that transcend geographic boundaries within innovation ecosystems. These communities, consisting of firms that collaborate frequently, serve as a vital sub-environment for co-innovation and value creation. Despite the significant role played by these innovation communities, the impact of a firm’s embeddedness within them on its innovation performance remains underexplored. This paper addresses this gap by examining the effects of both within-community and cross-community embeddedness on firm innovation, with a specific focus on the contingency of collaboration complementarity. We introduce a conceptual model analyzing the effects of both relational and structural embeddedness within and across communities. An empirical study is conducted using 22 years of panel data from the global 3D printing industry. We construct patent collaboration networks among 6,109 relevant organizations over 5-year windows and identify innovation communities in each network through topological clustering algorithms. A negative binomial regression model is employed to test our hypotheses. Our findings reveal that firms benefit from both within-community and cross-community embeddedness. Notably, firms with higher collaboration complementarity derive greater benefits from within-community relational embeddedness and cross-community structural embeddedness, while those with lower complementarity gain more from cross-community relational embeddedness. This research enriches the innovation ecosystem literature by introducing an innovation community perspective and highlighting how embeddedness, coupled with collaboration orientation, drives firm-level innovation. Additionally, it offers insights into how firms can leverage collaborations and optimize their positions within innovation ecosystems to enhance their innovation performance.
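As a small illustration of the community-detection and embeddedness-measurement steps, the networkx sketch below detects modularity-based communities on a stand-in weighted graph and counts a node's within- versus cross-community ties; the specific clustering algorithm and the toy graph are assumptions, as the paper only says "topological clustering algorithms".

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Stand-in for one 5-year patent collaboration network window
g = nx.les_miserables_graph()
communities = greedy_modularity_communities(g, weight="weight")
membership = {node: cid for cid, nodes in enumerate(communities) for node in nodes}

def embeddedness(g, node):
    """(within-community ties, cross-community ties) for a focal organization."""
    within = sum(1 for nb in g[node] if membership[nb] == membership[node])
    return within, g.degree(node) - within

print(embeddedness(g, "Valjean"))
```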
The rapid expansion of satellite Internet deployments, driven by the rise of Space-Ground Integration Network (SGIN) construction, has led to a significant increase in the number of satellites. To address the challenge of efficient networking between large-scale satellite constellations and limited ground station resources, this paper presents a hybrid learning-assisted multi-parallel algorithm (HLMP). The HLMP comprises a multi-parallel solving and deconflicting framework, a learning-assisted metaheuristic (LM) algorithm combining reinforcement learning (RL) and Tabu simulated annealing (TSA), and a linear programming (LP) exact-solving algorithm. The framework first divides the problem into parallel sub-problems along the time domain, then applies LM and LP to solve each sub-problem in parallel, with LM using LP-generated scheduling results to improve its own accuracy. The deconflicting strategy integrates and refines the planning results from all sub-problems, ensuring an optimized overall outcome. HLMP advances beyond traditional task-driven satellite scheduling methods by offering a novel approach for optimizing large-scale satellite-ground networks under the new macro paradigm of “maximizing linkage to the greatest extent feasible.” Experimental cases involving up to 1,000 satellites and 100 ground stations highlight HLMP’s efficiency, and comparative experiments with other metaheuristic algorithms and the CPLEX solver further demonstrate HLMP’s ability to generate high-quality solutions more quickly.
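Only the time-domain decomposition and an exact per-window sub-solve are sketched here; the RL-assisted metaheuristic and the deconflicting stage are omitted. Each window's satellite-to-station matching is solved exactly with the Hungarian algorithm, with a random link-quality matrix standing in for visibility data; the sizes and the one-satellite-per-station rule are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_sats, n_stations, n_windows = 12, 4, 6

# Stand-in link quality (e.g., visibility-weighted throughput) per time window
quality = rng.uniform(0, 1, (n_windows, n_stations, n_sats))

plan = []
for w in range(n_windows):
    # Each station serves one satellite per window; maximize total link quality
    rows, cols = linear_sum_assignment(quality[w], maximize=True)
    plan.append(list(zip(rows, cols)))   # (station, satellite) pairs for window w
```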
A combat system-of-systems (CSoS) is a network of independent entities that interact to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its practical value in optimizing network architectures, improving network security and refining operational planning. Accordingly, we present a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) to enhance the resilience of CSoS. Specifically, we develop a spatial combat network model and a space-time resilience optimization model that captures the complex spatial relationships between entities and reformulates the resilience enhancement problem as a linear optimization model with spatial features. Moreover, we extend the model to include obstacles. Next, a resilience-oriented recovery optimization method based on the improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method incorporates spatial features while providing the optimal travel paths for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of the CSoS-STRE are demonstrated through a case study, providing valuable insights for guiding recovery and developing more resilient CSoS.
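At the core of any NSGA-II variant, including the R-INSGA described here, is fast non-dominated sorting of candidate solutions into Pareto fronts. A compact reference implementation follows, applied to synthetic two-objective points (e.g., recovery time versus travel distance); the objective values are random stand-ins.

```python
import numpy as np

def non_dominated_sort(F):
    """Rank solutions into Pareto fronts; F is an (n, m) matrix, minimizing all."""
    n = len(F)
    dominated_by = [set() for _ in range(n)]
    dom_count = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].add(j)   # i dominates j
                dom_count[j] += 1
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:    # j is non-dominated once front i is removed
                    nxt.append(j)
        current = nxt
    return fronts

# Toy: trade-off between recovery time and travel distance for candidate sequences
rng = np.random.default_rng(0)
fronts = non_dominated_sort(rng.uniform(0, 1, (20, 2)))
```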
With the rapid expansion of unmanned system capabilities, integrating and sharing computing resources has become essential. While this architecture enhances resource utilization efficiency, it may also introduce conflicts arising from resource competition. Effective resource-sharing configurations are therefore crucial to ensuring the Safety of the Intended Functionality (SOTIF). This paper proposes analysis and optimization methods for computing resource configuration oriented to SOTIF. First, four SOTIF requirements are identified for the computing resource-sharing architecture of unmanned systems, encompassing computing time, computing power, energy consumption restrictions, and mutual exclusion and correlation. Second, the computing resource configuration model and its SOTIF constraints are formalized based on graph and set theory. The design of computing resource configuration schemes is then divided into resource selection and resource allocation, with a resource selection optimization method based on Forward Checking and a resource allocation optimization method based on NSGA-II. Finally, a typical unmanned driving scenario is considered as an example, and the optimal resource selection and allocation schemes are sequentially determined using the proposed methods on the computing platform.
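Forward checking prunes the domains of not-yet-assigned variables after each tentative assignment and backtracks on a wipe-out. The sketch below applies it to a toy resource selection with a mutual-exclusion constraint (conflicting tasks may not share a resource); the tasks, domains, and conflict set are assumptions, not the paper's scenario.

```python
def forward_checking(domains, conflicts, assignment=None):
    """Assign a resource to each task; `conflicts` holds mutually exclusive task
    pairs that may not share a resource. Domains are pruned before recursing."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        pruned, ok = {}, True
        for other in domains:
            if other in assignment or other == var:
                continue
            if (var, other) in conflicts or (other, var) in conflicts:
                new_dom = [x for x in domains[other] if x != value]
                if not new_dom:        # domain wipe-out: this value cannot work
                    ok = False
                    break
                pruned[other] = new_dom
        if ok:
            trial = {**domains, **pruned, var: [value]}
            result = forward_checking(trial, conflicts, {**assignment, var: value})
            if result:
                return result
    return None                        # triggers backtracking in the caller

# Toy: three tasks, two shared processors; tasks a and b are mutually exclusive
sol = forward_checking({"a": ["cpu0", "cpu1"], "b": ["cpu0", "cpu1"], "c": ["cpu0"]},
                       conflicts={("a", "b")})
```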
The extensive integration of AI with renewable energy systems is a major trend in technological advancement, but its energy consumption and carbon emissions are also a major challenge. Generative AI can quickly generate human-like content in response to prompts, with excellent reasoning and generative capabilities. Generative AI-based renewable energy systems can cope with dynamic system changes and have great potential for resilience optimization and green, low-carbon transition. In this paper, we first explore the roles that generative AI can play in renewable energy systems, including explaining shock incidents. Second, intelligent maintenance strategies for renewable energy systems under different failure modes are developed based on generative AI. Then, the notion of spatiotemporal resilience is introduced and a spatiotemporal resilience optimization model is proposed. A green and low-carbon transformation strategy for smart renewable energy systems is also proposed. Finally, a case study of a wind power system illustrates the use of the proposed method.
Natural disasters have increasingly disrupted and devastated economic and social systems worldwide. Emerging technologies, such as artificial intelligence and machine learning, have demonstrated significant potential for enhancing natural disaster risk management (DRM). However, existing studies predominantly emphasize practical technological applications, focusing narrowly on specific use cases. Only a limited number of conceptual frameworks have been proposed, each grounded in a distinct thematic perspective, such as principle-technology integration, life-cycle application, or operational reliability. Critically, there remains a notable gap regarding a comprehensive framework that systematically addresses the data challenges inherent in DRM. This paper proposes a data-governance-oriented conceptual framework that classifies three major data challenges (insufficient data, poor data quality, and limited application) across both the objective and subjective dimensions of risk management. Drawing on practical case studies, the framework illustrates how emerging technologies can systematically mitigate these challenges. Furthermore, this paper identifies new data-related risks introduced by emerging technologies. By offering a closed-loop structure that aligns internal data governance with evolving DRM needs, this work contributes novel and actionable approaches to guiding the integration of emerging technologies into disaster risk management practice.