Evaluation of forecast performance for Super Typhoon Lekima in 2019
Guomin CHEN, Xiping ZHANG, Qing CAO, Zhihua ZENG
The predictions for Super Typhoon Lekima (2019) are evaluated from official forecasts, global models, regional models, and ensemble prediction systems (EPSs) at lead times of 1–5 days. Track errors from most deterministic forecasts are smaller than the corresponding annual mean errors for 2019. The propagation direction of Lekima (2019) was much easier for the official agencies and numerical weather prediction (NWP) models to predict than its propagation speed. The National Centers for Environmental Prediction Global Ensemble Forecast System (NCEP-GEFS), the Japan Meteorological Agency Global Ensemble Prediction System (JMA-GEPS), and the Meteorological Service of Canada Ensemble System (MSC-CENS) are underdispersed, and the Shanghai Typhoon Institute Typhoon Ensemble Data Assimilation and Prediction System (STI-TEDAPS) is overdispersed, whereas the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system shows adequate dispersion at all lead times. Most deterministic forecasts underestimated the intensity of Lekima (2019), especially during the rapid intensification period after it entered the East China Sea. All of the deterministic forecasts performed well in predicting the first landfall point at Wenling, Zhejiang Province, at lead times of 24 and 48 h.
Typhoon Lekima (2019) / track / intensity / landfall point / forecast verification
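To make two of the verification quantities in the abstract concrete, the following minimal Python sketch computes track error as the great-circle (haversine) distance between a forecast position and the best-track position, and a simple spread-skill ratio for diagnosing ensemble dispersion (values well below 1 suggest underdispersion, values well above 1 overdispersion). The function names, the simplified spread definition, and the sample positions are illustrative assumptions, not the study's actual verification code or Lekima data.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def track_error_km(lat_fc, lon_fc, lat_bt, lon_bt):
    """Great-circle (haversine) distance in km between forecast and best-track positions."""
    phi1, phi2 = np.radians(lat_fc), np.radians(lat_bt)
    dphi = np.radians(np.asarray(lat_bt) - np.asarray(lat_fc))
    dlam = np.radians(np.asarray(lon_bt) - np.asarray(lon_fc))
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def spread_skill_ratio(member_lats, member_lons, bt_lat, bt_lon):
    """Simplified dispersion diagnostic: mean distance of members from the
    ensemble-mean position divided by the ensemble-mean track error."""
    mean_lat, mean_lon = np.mean(member_lats), np.mean(member_lons)
    spread = np.mean(track_error_km(member_lats, member_lons, mean_lat, mean_lon))
    skill = track_error_km(mean_lat, mean_lon, bt_lat, bt_lon)
    return spread / skill

# Hypothetical member positions for one lead time (not actual Lekima forecasts):
lats = np.array([27.8, 28.1, 27.5])
lons = np.array([122.0, 121.6, 122.4])
print(spread_skill_ratio(lats, lons, 28.2, 121.4))
```

In operational verification the spread is usually defined as the standard deviation of members about the ensemble mean and the skill as the error of the ensemble-mean track; the ratio above is a deliberately simplified stand-in for that diagnostic.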