Dynamic train dwell time forecasting: a hybrid approach to address the influence of passenger flow fluctuations
Zishuai Pang, Liwen Wang, Shengjie Wang, Li Li, Qiyuan Peng
Railway Engineering Science, 2023, Vol. 31, Issue 4: 351-369.
Train timetables and operations are defined by train running times in sections, dwell times at stations, and headways between trains. Accurate estimation of these factors is essential for decision-making on train delay reduction, train dispatching, and station capacity estimation. In this study, we propose a train dwell time model based on an averaging mechanism and dynamic updating to address the main challenges of dwell time prediction (e.g., dynamics over time, heavy-tailed data distributions, and spatiotemporal relationships among factors) for real-time train dispatching. The averaging mechanism combines multiple state-of-the-art base predictors, allowing the proposed model to integrate their respective strengths in handling different data attributes and distributions. Then, considering the influence of passenger flow on train dwell time, we apply a dynamic updating method based on exponential smoothing to improve performance under real-time passenger volume fluctuations (e.g., surges in peak hours or drops during regular periods). We conduct experiments with train operation data and passenger flow data from a Chinese high-speed railway line. The results show that, owing to its advantages over the base predictors, the averaging mechanism predicts dwell times at stations more accurately than its counterparts across different prediction horizons in terms of both predictive errors and variances. Furthermore, dynamic smoothing significantly improves the accuracy of the proposed model during passenger volume changes, by 15.4% and 15.5% in mean absolute error and root mean square error, respectively. Based on the proposed predictor, a feature importance analysis shows that the planned dwell time and the arrival delay are the two most important factors influencing dwell time; the planned dwell time has a positive influence, whereas the arrival delay has a negative influence.
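As a rough illustration of the two components described in the abstract, the Python sketch below averages the forecasts of several base predictors and then applies an exponentially smoothed additive correction driven by recent prediction errors, which is one simple way the reported dynamic updating could work. The function names, the smoothing factor alpha, and all numbers are hypothetical placeholders, not the paper's actual formulation.

import numpy as np

def averaged_prediction(base_predictions):
    # Combine the base predictors' dwell-time forecasts by simple averaging.
    # base_predictions: array of shape (n_base_predictors, n_trains), where
    # each row holds one base model's predicted dwell times in seconds.
    return np.mean(base_predictions, axis=0)

def smoothed_correction(prev_correction, observed, predicted, alpha=0.3):
    # Exponentially smoothed additive correction: tracks the recent bias
    # between observed and predicted dwell times, which grows when passenger
    # volumes surge (peak hours) and decays back during regular periods.
    residual = observed - predicted
    return alpha * residual + (1.0 - alpha) * prev_correction

# Illustrative usage with made-up dwell times (seconds).
base_preds = np.array([
    [120.0,  95.0, 150.0],   # base predictor 1 (e.g., a gradient-boosting model)
    [130.0,  90.0, 145.0],   # base predictor 2 (e.g., a neural network)
    [125.0, 100.0, 155.0],   # base predictor 3
])
ensemble = averaged_prediction(base_preds)   # -> [125.0, 95.0, 150.0]

correction = 0.0
observed_dwell = [140.0, 92.0, 170.0]        # hypothetical realised dwell times
for obs, pred in zip(observed_dwell, ensemble):
    forecast = pred + correction             # forecast issued before observing obs
    correction = smoothed_correction(correction, obs, pred)

In this sketch the correction term plays the role of the dynamic updating step: when realised dwell times start exceeding the ensemble forecasts (as during a passenger surge), the correction rises and lifts subsequent forecasts, then relaxes once the discrepancy fades.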