High-speed trains (HSTs) offer comfort, efficiency, and convenience and have gradually become a mainstream means of transportation. As the operating scale of HSTs continues to increase, ensuring their safety and reliability has become ever more imperative. As the core component of the HST, the traction system has a substantial influence on the reliability of the train. During long-term operation, the core components of the traction system inevitably experience varying degrees of performance degradation and cause various failures, threatening the running safety of the train. Therefore, fault monitoring and diagnosis of the HST traction system is necessary. In recent years, machine learning has been widely used in various pattern recognition tasks and has demonstrated excellent performance in traction system fault diagnosis. Although machine learning has made considerable advancements in traction system fault diagnosis, a comprehensive systematic review is still lacking in this field. This paper reviews the research and application of machine learning in traction system fault diagnosis and outlines a blueprint for future development. First, the structure and function of the HST traction system are briefly introduced. Then, the research and application of machine learning in traction system fault diagnosis are comprehensively and systematically reviewed. Finally, the challenges for accurate fault diagnosis under actual operating conditions are revealed, and future research trends of machine learning in traction systems are discussed.
Remanufacturing is widely recognized as beneficial to the environment and a circular economy. However, remanufacturing is more complex than traditional manufacturing due to the effects of government policy, uncertainty of consumer preferences, competition and cooperation among firms, and so on. These factors motivate academics to optimize remanufacturing outcomes, especially for product pricing and production. This study reviews the published literature on pricing and production strategies in remanufacturing from four perspectives of supply chain, namely, government policy, consumer characteristics, relationships among firms, and supply chain structures. Review results can benefit scholars/practitioners in the future by highlighting the challenges and opportunities in remanufacturing strategies.
This study explores the integration of ChatGPT and AI-generated content (AIGC) in engineering management. It assesses the impact of AIGC services on engineering management processes, mapping out the potential development of AIGC in various engineering functions. The study categorizes AIGC services within the domain of engineering management and conceptualizes an AIGC-aided engineering lifecycle. It also identifies key challenges and emerging trends associated with AIGC. The challenges highlighted are ethical considerations, reliability, and robustness in engineering management. The emerging trends are centered on AIGC-aided optimization design, AIGC-aided engineering consulting, and AIGC-aided green engineering initiatives.
Predicting demand for bike share systems (BSSs) is critical both for managing an existing BSS and for planning a new one. While researchers have mainly focused on improving prediction accuracy and analysing demand-influencing factors, few studies have examined the inherent randomness of stations' observed demands or the degree to which demands at individual stations are predictable. Using one year of Divvy bike-share data from Chicago, USA, we measured demand entropy and quantified station-level predictability. Additionally, to verify that these predictability measures could represent the performance of prediction models, we implemented two commonly used demand prediction models and compared the empirical prediction accuracy with the calculated entropy and predictability. Furthermore, we explored how city- and system-specific temporally constant features impact entropy and predictability, to inform estimating these measures when historical demand data are unavailable. Our results show that entropy and predictability of demands across stations are polarized: some stations exhibit high uncertainty (a low predictability of 0.65), while others have almost no check-out demand uncertainty (a high predictability of around 1.0). We also validated that entropy and predictability are a priori, model-free indicators of prediction error, given a sequence of bike usage demands. Lastly, we identified that key factors contributing to station-level entropy and predictability include per capita income, spatial eccentricity, and the number of parking lots near the station. Findings from this study provide a more fundamental understanding of BSS demand prediction, which can help decision makers and system operators anticipate diverse station-level prediction errors from their prediction models, both for existing stations and for new ones.
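The entropy-and-predictability machinery referenced in this abstract can be illustrated with a minimal sketch: estimate the Shannon entropy of a discretized demand sequence, then solve Fano's inequality for the upper bound on prediction accuracy. This is a simplification for illustration only (work in this literature typically uses an entropy-rate estimator on the full temporal sequence rather than plain Shannon entropy); the demand sequence and three-level binning below are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Empirical Shannon entropy (bits) of a symbol sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def max_predictability(entropy, n_states, tol=1e-6):
    """Solve Fano's inequality S = H(p) + (1 - p) * log2(N - 1) for the
    upper bound p on prediction accuracy, via bisection (the left side
    is decreasing in p on [1/N, 1))."""
    def fano(p):
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p) if 0 < p < 1 else 0.0
        return h + (1 - p) * math.log2(n_states - 1)
    lo, hi = 1.0 / n_states, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fano(mid) > entropy:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical hourly check-out counts at one station, binned into 3 levels.
demand = [0, 0, 1, 0, 0, 2, 0, 0, 1, 0, 0, 2]
S = shannon_entropy(demand)
pi_max = max_predictability(S, n_states=3)
```

For a station whose binned demand spends two-thirds of the time in one state, as above, the bound comes out near 0.66: no prediction model, however sophisticated, can exceed that accuracy on this sequence, which is exactly how such measures serve as a priori indicators of prediction error.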
Under the ambitious goal of carbon neutralization, photovoltaic (PV)-driven electrolytic hydrogen (PVEH) production is emerging as a promising approach to reduce carbon emissions. Considering the intermittence and variability of PV power generation, the deployment of battery energy storage can smooth the power output. However, battery energy storage entails a non-negligible investment cost. Thus, the installation of energy-storage equipment in a PVEH system is a complex trade-off problem. The primary goals of this study are to compare the engineering economics of PVEH systems with and without energy storage, and to explore when, factoring in the technology learning curve, the cost of the former can compete with that of the latter. The levelized cost of hydrogen (LCOH), a widely used economic indicator, is adopted. Based on representative areas in seven regions of China, the results show that the LCOH with and without energy storage in 2020 is approximately 22.23 and 20.59 yuan/kg, respectively. In addition, as technology costs drop, the LCOH of a PVEH system with energy storage will fall below that of a system without energy storage by 2030.
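The LCOH indicator used in studies like this one is, in its generic form, the ratio of discounted lifetime costs to discounted lifetime hydrogen output. A minimal sketch of that ratio follows; the plant parameters are hypothetical round numbers, not the study's data.

```python
def lcoh(capex, annual_opex, annual_h2_kg, lifetime_yr, discount_rate):
    """Levelized cost of hydrogen (currency per kg): discounted lifetime
    costs divided by discounted lifetime hydrogen output."""
    costs = capex  # up-front investment, paid in year 0
    h2 = 0.0
    for t in range(1, lifetime_yr + 1):
        df = (1 + discount_rate) ** -t  # discount factor for year t
        costs += annual_opex * df
        h2 += annual_h2_kg * df
    return costs / h2

# Hypothetical PVEH plant: 100 M yuan capex, 2 M yuan/yr opex,
# 500 t of hydrogen per year, 20-year life, 6% discount rate.
cost_per_kg = lcoh(capex=1.0e8, annual_opex=2.0e6,
                   annual_h2_kg=5.0e5, lifetime_yr=20, discount_rate=0.06)
```

Because output is discounted alongside cost, LCOH is directly comparable across system configurations with different cost and production time profiles, which is what makes it suitable for the with-storage versus without-storage comparison here.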
In recent years, the architecture, engineering, construction, and facility management (FM) industries have been applying various emerging digital technologies to facilitate the design, construction, and management of infrastructure facilities. Digital twin (DT) has emerged as a solution for enabling real-time data acquisition, transfer, analysis, and utilization for improved decision-making toward smart FM. Substantial research on DT for FM has been undertaken in the past decade. This paper presents a bibliometric analysis of the literature on DT for FM. A total of 248 research articles are obtained from the Scopus and Web of Science databases. VOSviewer is then utilized to conduct bibliometric analysis and visualize keyword co-occurrence, citation, and co-authorship networks; furthermore, the research topics, authors, sources, and countries contributing to the use of DT for FM are identified. The findings show that the current research of DT in FM focuses on building information modeling-based FM, artificial intelligence (AI)-based predictive maintenance, real-time cyber–physical system data integration, and facility lifecycle asset management. Several areas, such as AI-based real-time asset prognostics and health management, virtual-based intelligent infrastructure monitoring, deep learning-aided continuous improvement of the FM systems, semantically rich data interoperability throughout the facility lifecycle, and autonomous control feedback, need to be further studied. This review contributes to the body of knowledge on digital transformation and smart FM by identifying the landscape, state-of-the-art research trends, and future needs with regard to DT in FM.
The advancement of renewable energy (RE) represents a pivotal strategy in mitigating climate change and advancing energy transition efforts. A substantial body of research addresses strategies for fostering RE growth, and among the frequently proposed approaches, employing optimization models to facilitate decision-making stands out prominently. Drawing from an extensive dataset of 32,806 literature entries on the optimization of renewable energy systems (RES) from 1990 to 2023 in the Web of Science database, this study reviews the decision-making optimization problems, models, and solution methods throughout the renewable energy development and utilization chain (REDUC) process. This review also endeavors to structure and assess the contextual landscape of RES optimization modeling research. As evidenced by the literature review, optimization modeling effectively resolves decision-making predicaments spanning RE investment, construction, operation and maintenance, and scheduling. Predominantly, a hybrid model that combines prediction, optimization, simulation, and assessment methodologies emerges as the favored approach for optimizing RES-related decisions. The primary framework prevalent in extant research entails the dissection and linearization of established models, in combination with hybrid analytical strategies and artificial intelligence algorithms. Noteworthy advancements in modeling encompass uncertainty, multienergy carrier considerations, and the refinement of spatiotemporal resolution. In the realm of algorithmic solutions for RES optimization models, a pronounced focus is anticipated on the convergence of analytical techniques with artificial intelligence-driven optimization.
Furthermore, this study serves to facilitate a comprehensive understanding of research trajectories and existing gaps, expediting the identification of pertinent optimization models conducive to enhancing the efficiency of REDUC development endeavors.
Climate change and rapid urbanization are pressing environmental and social concerns, with approximately 56% of the global population living in urban areas. This number is expected to rise to 68% by 2050, leading to the expansion of cities and encroachment upon natural areas, including wetlands, causing their degradation and fragmentation. To mitigate these challenges, green and blue infrastructures (GBIs), such as constructed wetlands, have been proposed to emulate and replace the functions of natural wetlands. This study evaluates the potential of eight constructed wetlands near Beijing, China, focusing on their ecosystem services (ESs), cost savings related to human health, growing/maintenance expenses, and disservices using an emergy-based assessment procedure. The results indicate that all constructed wetlands effectively purify wastewater, reducing nutrient concentrations (e.g., total nitrogen, total phosphorus, and total suspended solids). Among the studied wetlands, the integrated vertical subsurface flow constructed wetland (CW-4) demonstrates the highest wastewater purification capability (1.63E+14 sej/m2/yr) compared to other types (6.78E+13 and 2.08E+13 sej/m2/yr). Additionally, constructed wetlands contribute to flood mitigation, groundwater recharge, wildlife habitat protection, and carbon sequestration, resembling the functions of natural wetlands. However, the implementation of constructed wetlands in cities is not without challenges, including greenhouse gas emissions, green waste management, mosquito issues, and disturbances in the surrounding urban areas, negatively impacting residents. The ternary phase diagram reveals that all constructed wetlands provide more benefits than costs and impacts. 
CW-4 shows the highest benefit‒cost ratio, reaching 50%, while free water surface constructed wetland (CW-3) exhibits the lowest benefits (approximately 38%), higher impacts (approximately 25%), and lower costs (approximately 37%) compared to other wetlands. The study advocates the use of an emergy approach as a reliable method to assess the quality of constructed wetlands, providing valuable insights for policymakers in selecting suitable constructed wetlands for effective urban ecological management.
In recent decades, healthcare providers have faced mounting pressure to effectively manage highly perishable and limited medical resources. This article offers a comprehensive review of supply chain management pertaining to such resources, which include transplantable organs and healthcare products. The review encompasses 93 publications from 1990 to 2022, illustrating a discernible upward trajectory in annual publications. The surveyed literature is categorized into three levels: strategic, tactical, and operational. Key problem attributes and methodologies are analyzed through the assessment of pertinent publications at each problem level. Furthermore, research on service innovation, decision analytics, and supply chain resilience elucidates potential areas for future research.
Over the last two decades, many modeling and optimization techniques have been developed for earth observation satellite (EOS) scheduling problems, but few of them show enough generality to be engineered into real-world applications. This study proposes a general modeling and optimization technique for common, real-world EOS scheduling cases; it includes a decoupled framework, a general modeling method, and an easy-to-use algorithm library. In this technique, a framework that decouples the modeling, constraints, and optimization of EOS scheduling problems is built. With this framework, EOS scheduling problems are modeled in a general manner, where the executable opportunity, another form of the well-known visible time window per EOS operation, is viewed as a selectable resource to be optimized. On this basis, 10 types of optimization algorithms, including Tabu search, the genetic algorithm, and a parallel competitive memetic algorithm, are developed. For simplified EOS scheduling problems, the proposed technique shows better applicability and effectiveness than state-of-the-art algorithms. In addition, a real-world benchmark with complicated constraints, exemplified by a four-EOS Chinese commercial constellation, is provided, on which the technique qualifies for deployment and outperforms the in-use scheduling system by more than 50%.
Amidst the inefficiencies of traditional job-seeking approaches in the recruitment ecosystem, the importance of automated job recommendation systems has been magnified. However, existing models optimized to maximize user clicks for general product recommendations prove inept in addressing the unique challenges of job recommendation, namely reciprocity and competition. Moreover, sparse data on online recruitment platforms can further negatively impact the performance of existing job recommendation algorithms. To counteract these limitations, we propose a bilateral heterogeneous graph-based competition iteration model. This model comprises three integral components: 1) two bilateral heterogeneous graphs for capturing multi-source information from people and jobs and alleviating data sparsity, 2) fusion strategies for synthesizing attributes and preferences to produce mutually beneficial job matches, and 3) a competition-enhancing strategy for dispersing competition realized through a two-stage optimization algorithm. Augmented by granular attention mechanisms for enhanced interpretability, the model’s efficacy, competition dispersion, and interpretability are validated through rigorous empirical evaluations on a real-world recruitment platform.
Roadside green swales have emerged as popular stormwater management infrastructure in urban areas, serving to mitigate stormwater pollution and reduce urban surface water discharge. However, there is a limited understanding of the various types, structures, and functions of swales, as well as the potential challenges they may face in the future. In recent years, China has witnessed a surge in the adoption of roadside green swales, especially as part of the prestigious Sponge City Program (SCP). These green swales play a crucial role in controlling stormwater pollution and conserving urban water resources by effectively removing runoff pollutants, including suspended solids, nitrogen, and phosphorus. This review critically examines recent research findings, identifies key knowledge gaps, and presents future recommendations for designing green swales for effective stormwater management, with a particular emphasis on ongoing major Chinese infrastructure projects. Despite the growing global interest in bioswales and their significance in urban development, China’s current classification of such features lacks a clear definition or specific consideration of bioswales. Furthermore, policymakers have often underestimated the adverse environmental effects of road networks, as reflected in existing laws and planning documents. This review argues that the construction and maintenance of roadside green swales should be primarily based on three critical factors: Well-thought-out road planning, suitable construction conditions, and sustainable long-term funding. The integration of quantitative environmental standards into road planning is essential to effectively address the challenge of pollution from rainfall runoff. To combat pollution associated with roads, a comprehensive assessment of potential pollution loadings should be carried out, guiding the appropriate design and construction of green swales, with a particular focus on addressing the phenomenon of first flush. 
One of the major challenges is sustaining funds for ongoing maintenance after swale construction. To address this issue, the implementation of a green finance platform is proposed. Such a platform would help ensure the availability of funds for continuous maintenance, thus maximizing the long-term effectiveness of green swales in stormwater management. Ultimately, the findings of this review aim to assist municipal governments in enhancing and implementing future urban road designs and SCP developments, incorporating effective green swale strategies.
Driving safety and accident prevention are attracting increasing global interest. Current safety monitoring systems often face challenges such as limited spatiotemporal coverage and accuracy, leading to delays in alerting drivers about potential hazards. This study explores the use of edge computing for monitoring vehicle motion and issuing accident warnings, such as lane departures and vehicle collisions. Unlike traditional systems that depend on data from single vehicles, the cooperative vehicle-infrastructure system collects data directly from connected and automated vehicles (CAVs) via vehicle-to-everything communication. This approach facilitates a comprehensive assessment of each vehicle’s risk. We propose algorithms and specific data structures for evaluating accident risks associated with different CAVs. Furthermore, we examine the prerequisites for data accuracy and transmission delay to enhance the safety of CAV driving. The efficacy of this framework is validated through both simulated and real-world road tests, proving its utility in diverse driving conditions.
Air pollution poses a significant threat to human health, particularly in urban areas with high levels of industrial activities. In China, the government plays a crucial role in managing air quality through the Air Pollution Prevention and Control Action Plan. The government provides direct financial support and guides the investment direction of social funds to improve air quality. While government investment has led to improvements in air quality across China, concerns remain regarding the efficiency of such large-scale investments. To address this concern, we conducted a study using a three-stage data envelopment analysis (DEA)-Malmquist model to assess the efficiency of government investment in improving air quality in China. Our analysis revealed regional disparities and annual dynamic changes. Specifically, we focused on the Beijing–Tianjin–Hebei areas as a case study, as the investment primarily targeted industrial activities in urban areas with the goal of improving living conditions for urban residents. The results demonstrate significant differences in investment efficiency between regions. Beijing exhibits relatively high investment efficiency, while cities in Hebei Province require improvement. We identified scale inefficiency, which refers to the ratio of air pollutant reduction to financial investment, as the main factor contributing to regional disparities. However, we found that increasing the total investment scale can help mitigate this effect. Furthermore, our study observed positive but fluctuating annual changes in investment efficiency within this city cluster from 2014 to 2018. Investment-combined technical efficiency, which represents the investment strategy, is the main obstacle to improving yearly investment efficiency. Therefore, in addition to promoting investment strategies at the individual city level, it is crucial to enhance coordination and cooperation among cities to improve the investment efficiency of the entire city cluster. 
Evaluating the efficiency of government investment and understanding its influencing factors can guide future investment measures and directions. This knowledge can also support policymaking for other projects involving substantial investments.
Understanding the influencing factors and the evolving trends of the Water-Sediment Regulation System (WSRS) is vital for the protection and management of the Yellow River. Past studies on WSRS have been limited in focus and have not fully addressed the complete engineering control system of the basin. This study takes a holistic view, treating sediment management in the Yellow River as a dynamic and ever-evolving complex system. It merges concepts from system science, information theory, and dissipative structure with practical efforts in sediment engineering control. The key findings of this study are as follows: between 1990 and 2019, the average Yellow River Sediment Regulation Index (YSRI) was 55.99, with the lowest being 50.26 in 1990 and the highest being 61.48 in 2019; the result indicates that the WSRS activity decreased, yet it fluctuated, gradually approaching the critical threshold of a dissipative structure.
The enhancement of energy efficiency stands as the principal avenue for attaining energy conservation and emissions reduction objectives within the realm of road transportation. Nevertheless, it is imperative to acknowledge that these objectives may, in part or in entirety, be offset by the phenomenon known as the energy rebound effect (ERE). To quantify the long-term EREs and short-term EREs specific to China’s road transportation, this study employed panel cointegration and panel error correction models, accounting for asymmetric price effects. The findings reveal the following: The long-term EREs observed in road passenger transportation and road freight transportation range from 13% to 25% and 14% to 48%, respectively; in contrast, the short-term EREs in road passenger transportation and road freight transportation span from 36% to 41% and 3.9% to 32%, respectively. It is noteworthy that the EREs associated with road passenger transportation and road freight transportation represent a partial rebound effect, falling short of reaching the magnitude of a counterproductive backfire effect. This leads to the inference that the upsurge in energy consumption within the road transportation sector cannot be solely attributed to advancements in energy efficiency. Instead, various factors, including income levels, the scale of commodity trade, and industrial structure, exert more substantial facilitating influences. Furthermore, the escalation of fuel prices fails to dampen the demand for energy services, whether in the domain of road passenger transportation or road freight transportation. In light of these conclusions, recommendations are proffered for the formulation of energy efficiency policies pertinent to road transportation.
This paper proposes a framework for evaluating the efficacy and suitability of maintenance programs with a focus on quantitative risk assessment in the domain of aircraft maintenance task transfer. The analysis is anchored in the principles of Maintenance Steering Group-3 (MSG-3) logic decision paradigms. The paper advances a holistic risk assessment index architecture tailored for the task transfer of maintenance programs. Utilizing the analytic network process (ANP), the study quantifies the weight interrelationships among diverse variables, incorporating expert-elicited subjective weighting. A multielement connection number-based evaluative model is employed to characterize decision-specific data, thereby facilitating the quantification of task transfer-associated risk through the appraisal of set-pair potentials. Moreover, the paper conducts a temporal risk trend analysis founded on partial connection numbers of varying orders. This analytical construct serves to streamline the process of risk assessment pertinent to maintenance program task transfer. The empirical component of this research, exemplified through a case study of the Boeing 737NG aircraft maintenance program, corroborates the methodological robustness and pragmatic applicability of the proposed framework in the quantification and analysis of mission transfer risk.
Deep Learning (DL) has revolutionized the field of Artificial Intelligence (AI) in various domains such as computer vision (CV) and natural language processing. However, DL models have limitations, including the need for large labeled datasets, lack of interpretability and explainability, potential bias and fairness issues, and limited common sense reasoning and contextual understanding. On the other hand, DL has shown significant potential in construction for safety and quality inspection tasks using CV models. However, current CV approaches may lack spatial context and measurement capabilities and struggle with complex safety and quality requirements. The integration of Neuro-Symbolic Computing (NSC), an emerging field that combines DL and symbolic reasoning, has been proposed as a potential solution to these limitations. NSC has the potential to enable more robust, interpretable, and accurate AI systems in construction by harnessing the strengths of both DL and symbolic reasoning. The combination of symbolism and connectionism in NSC can lead to more efficient data usage, improved generalization ability, and enhanced interpretability. Further research and experimentation are needed to effectively integrate NSC with large models and advance CV technologies for precise reporting of safety and quality inspection results in construction.
With the escalating complexity of production scenarios, vast amounts of production information are retained within enterprises in the industrial domain. This raises the question of how to extract value from complex document information and establish coherent information links. In this work, we present a framework for knowledge graph construction in the industrial domain, predicated on knowledge-enhanced document-level entity and relation extraction. The approach alleviates the shortage of annotated data in the industrial domain and models the interplay of industrial documents. To improve the accuracy of named entity recognition, domain-specific knowledge is incorporated into the initialization of the word embedding matrix within the bidirectional long short-term memory conditional random field (BiLSTM-CRF) framework. For relation extraction, this paper introduces the knowledge-enhanced graph inference (KEGI) network, a new method designed for long paragraphs in the industrial domain. The method discerns intricate interactions among entities by constructing a document graph and integrates knowledge representation into both node construction and path inference through TransR. At the application level, BiLSTM-CRF and KEGI are used to construct a knowledge graph from a knowledge representation model and Chinese fault reports for a steel production line, specifically SPOnto and SPFRDoc. The F1 value for entity and relation extraction improves by 2% to 6%. The quality of the extracted knowledge graph meets the requirements of real-world production environment applications. The results demonstrate that KEGI can delve deeply into production reports, extracting a wealth of knowledge and patterns and thereby providing a comprehensive solution for production management.
Decomposition analysis has been widely used to assess the determinants of energy use and CO2 emissions in academic research and policy studies. Both the methodology and application of decomposition analysis have improved greatly in the past decades. After more than 50 years of development, decomposition studies have become increasingly sophisticated and diversified; they tend to converge internally and to integrate with other analytical approaches externally. A good understanding of the literature and the state of the art is critical to identify knowledge gaps and formulate a future research agenda. To this end, this study presents a literature survey of decomposition analysis applied to energy and emission issues, with a focus on the period 2016–2021. A review of three individual decomposition techniques is first conducted, followed by a synthesis of emerging trends and features of the decomposition analysis literature as a whole. The findings are expected to direct future research in decomposition analysis.
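Of the decomposition techniques surveyed in this literature, the logarithmic mean Divisia index (LMDI) is the most widely used in energy applications because it splits an emission change exactly into factor contributions with no residual. A minimal two-factor sketch (activity times intensity; the figures are hypothetical):

```python
import math

def logmean(a, b):
    """Logarithmic mean of two positive numbers; equals a when a == b."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def lmdi_two_factor(activity0, intensity0, activity1, intensity1):
    """Additive LMDI-I: decompose dC = C1 - C0 into an activity effect and
    an intensity effect, where C = activity * intensity."""
    c0 = activity0 * intensity0
    c1 = activity1 * intensity1
    w = logmean(c1, c0)  # logarithmic-mean weight shared by both effects
    act_effect = w * math.log(activity1 / activity0)
    int_effect = w * math.log(intensity1 / intensity0)
    return act_effect, int_effect

# Hypothetical economy: activity grows 100 -> 130 while emission
# intensity falls 1.0 -> 0.9, so emissions move from 100 to 117.
act, inten = lmdi_two_factor(100.0, 1.0, 130.0, 0.9)
```

The two effects sum exactly to the total change of 17, the "perfect decomposition" property that distinguishes LMDI from earlier Laspeyres-style methods with unexplained residuals.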
Urban rail transit (URT) disruptions present considerable challenges due to several factors: i) a high probability of occurrence, arising from facility failures, disasters, and vandalism; ii) substantial negative effects, notably the delay of numerous passengers; iii) an escalating frequency, attributable to the gradual aging of facilities; and iv) severe penalties, including substantial fines for abnormal operation. This article systematically reviews URT disruption management literature from the past decade, categorizing it into pre-disruption and post-disruption measures. The pre-disruption research focuses on reducing the effects of disruptions through network analysis, passenger behavior analysis, resource allocation for protection and backup, and enhancing system resilience. Conversely, post-disruption research concentrates on restoring normal operations through train rescheduling and bus bridging services. The review reveals that while post-disruption strategies are thoroughly explored, pre-disruption research is predominantly analytical, with a scarcity of practical pre-emptive solutions. Moreover, future research should focus more on increasing the interchangeability of transport modes, reinforcing redundancy relationships between URT lines, and innovating post-disruption strategies.
Studies have demonstrated that advanced technology, such as smart contract applications, can enhance both pre- and post-contract administration within the built environment sector. Smart contract technology, exemplifying blockchain technologies, has the potential to improve transparency, trust, and the security of data transactions within this sector. However, there is a dearth of academic literature concerning smart contract applications within the construction industries of developing countries, with a specific focus on Nigeria. Consequently, this study seeks to explore the relevance of smart contract technology, address the challenges impeding its adoption, and offer strategies to mitigate the obstacles faced by smart contract applications. To capture stakeholders' perspectives, this research conducted 14 virtual interview sessions, reaching data saturation. The interviewees encompassed project management practitioners, senior management personnel from construction companies, experts in construction dispute resolution, professionals in construction software, and representatives from government construction agencies. The interview data underwent thorough thematic analysis. The study duly recognizes the significance of smart contract applications within the sector. Among the 12 identified barriers, issues such as identity theft and data leakage, communication and synchronization challenges, high computational expenses, lack of driving impetus, excessive electricity consumption, intricate implementation processes, absence of a universally applicable legal framework, and the lack of a localized legal framework were recurrent impediments affecting the adoption of smart contract applications within the sector. The study also delves into comprehensive measures to mitigate these barriers.
In conclusion, this study critically evaluates the relevance of smart contract applications within the built environment, with a specific focus on promoting their usage. It may serve as a pioneering effort, especially within the context of Nigeria.
Dynamic speed guidance for vehicles in on-ramp merging zones is instrumental in alleviating traffic congestion on urban expressways. To enhance compliance with recommended speeds, the development of a dynamic speed-guidance mechanism that accounts for heterogeneity in human driving styles is pivotal. Utilizing intelligent connected technologies that provide real-time vehicular data in these merging locales, this study proposes such a guidance system. Initially, we integrate a multi-agent consensus algorithm into a multi-vehicle framework operating on both the mainline and the ramp, thereby facilitating harmonized speed and spacing strategies. Subsequently, we conduct an analysis of the behavioral traits inherent to drivers of varied styles to refine speed planning in a more efficient and reliable manner. Lastly, we investigate a closed-loop feedback approach for speed guidance that incorporates the driver’s execution rate, thereby enabling dynamic recalibration of advised speeds and ensuring fluid vehicular integration into the mainline. Empirical results substantiate that a dynamic speed guidance system incorporating driving styles offers effective support for human drivers in seamless mainline merging.
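The multi-agent consensus step at the heart of a framework like the one above can be sketched in a few lines: each vehicle repeatedly nudges its speed toward those of its communication neighbors, and all speeds converge to a common value. This is a bare linear consensus update, not the paper's full speed-and-spacing planner with driving styles and feedback; the vehicle speeds, communication graph, and gain below are hypothetical.

```python
def consensus_step(speeds, neighbors, eps=0.2):
    """One synchronous consensus update: each vehicle i applies
    x_i += eps * sum over neighbors j of (x_j - x_i)."""
    return [x + eps * sum(speeds[j] - x for j in neighbors[i])
            for i, x in enumerate(speeds)]

# Hypothetical 4-vehicle merge: three mainline vehicles at 20 m/s, one
# ramp vehicle at 12 m/s; communication graph is a line: 0-1-2-3.
speeds = [20.0, 20.0, 20.0, 12.0]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
for _ in range(50):
    speeds = consensus_step(speeds, neighbors)
# All speeds converge toward the group average of 18 m/s.
```

For convergence, the gain eps must stay below 2 divided by the largest Laplacian eigenvalue of the communication graph; 0.2 is comfortably inside that bound for this four-vehicle line graph. Real guidance systems layer spacing terms and driver-execution feedback on top of this basic averaging dynamic.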
Reliability-redundancy allocation, preventive maintenance, and spare parts logistics are crucial for achieving system reliability and availability goals. Existing methods often concentrate on specific scopes of the system's lifetime. This paper proposes a joint redundancy-maintenance-inventory allocation model that simultaneously optimizes redundant components, replacement times, spares stocking, and repair capacity. Under reliability and availability criteria, the objective is to minimize the system's lifetime cost across the design, manufacturing, and operational phases. We develop a unified system availability model based on ten performance drivers, which serves as the foundation for the lifetime-based resource allocation model. Superimposed renewal theory is employed to estimate spare part demand from proactive and corrective replacements. A bisection algorithm, enhanced by neighborhood exploration, solves the resulting mixed-integer nonlinear optimization problem. The numerical experiments show that component redundancy is preferred and necessary when any of the following holds: extremely high system availability is required, the fleet size is small, the system reliability is immature, inventory holding is too costly, or the hands-on replacement time is prolonged. The joint allocation model also reveals that there is no monotonic relation between the spares stocking level and system availability.
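One ingredient of such a joint model, the monotone link between spares stocking and service level, can be illustrated with a standard Poisson base-stock sketch together with the kind of bisection search the abstract mentions. This is a generic textbook fragment, not the paper's availability model; the lead-time demand rate and fill-rate target are hypothetical.

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam), via a stable running-term sum."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i  # next Poisson pmf term from the previous one
        total += term
    return total

def min_stock_level(lam, target, s_max=1000):
    """Smallest base-stock level s whose fill rate P(demand <= s) meets
    the target; bisection works because the fill rate is monotone in s."""
    lo, hi = 0, s_max
    while lo < hi:
        mid = (lo + hi) // 2
        if poisson_cdf(mid, lam) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Hypothetical: expected lead-time demand of 4 spares, 99% fill-rate target.
s = min_stock_level(lam=4.0, target=0.99)
```

Because the fill-rate curve flattens as stock grows, the marginal availability gained per extra spare shrinks, which is one reason a joint model can find redundancy or faster repair preferable to ever-deeper spares inventories.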