1. Royal Netherlands Meteorological Institute, De Bilt 3731GA, The Netherlands
2. College of Meteorology and Oceanography, National University of Defense Technology, Changsha 410073, China
renkaijun@nudt.edu.cn
Received: 2020-05-20; Accepted: 2021-01-18; Published: 2022-03-15; Issue/Revised date: 2021-06-01
Abstract
Tropical hurricanes are among the most devastating hazards on Earth. Knowledge of their intense inner-core structure and dynamics will improve hurricane forecasts and advisories. Precise morphological parameters extracted from high-resolution spaceborne Synthetic Aperture Radar (SAR) images can play an essential role in further exploring and monitoring hurricane dynamics, especially when hurricanes undergo amplification, shearing, eyewall replacements and so forth. Moreover, these parameters can help to build guidelines for wind calibration of the more abundant, but lower resolution, scatterometer wind data, thus better linking scatterometer wind fields to hurricane categories. In this paper, we develop a new method for automatically extracting hurricane eyes from C-band SAR data by constructing Gray Level-Gradient Co-occurrence Matrices (GLGCMs). The hurricane eyewall is determined with a two-dimensional vector, generated by maximizing the class entropy of the hurricane eye region in the GLGCM. The results indicate that when the hurricane is weak, or the eyewall is not closed, the hurricane eye extracted with this automatic method still agrees with what is observed visually, and it preserves the texture characteristics of the original image. Compared to Du's wavelet analysis method and other morphological analysis methods, the approach developed here has reduced artefacts due to factors like hurricane size and has lower programming complexity. In summary, the proposed method provides a new and elegant choice for hurricane eye morphology extraction.
1 Introduction
Hurricanes are among the most destructive weather systems. They bring strong winds, torrential rain and storm surge, which can cause substantial economic losses and loss of life in coastal communities. Figure 1 shows the number of hurricanes that made landfall in China during the period 2000 to 2017. On average, 15 hurricanes hit China each year, which highlights the importance of accurate hurricane forecasts and advisories.
A deep understanding of the physical mechanisms near and within the hurricane core contributes to hurricane forecasting and disaster relief. During the past decades, hurricanes have been extensively studied. While the size of hurricane eyes varies from storm to storm, there is a strong relationship between the dynamical behavior of hurricane eyes and hurricane intensity (Liu and Chan, 1999), e.g., hurricane eyes tend to contract with increasing hurricane intensity (Kimball and Mulekar, 2004; Shen, 2006). Hurricane intensity determines the potential destructive power. It can be derived with the Dvorak technique using enhanced infrared and visible satellite imagery (Velden et al., 2006) and related to the Saffir-Simpson hurricane classification by the maximum 1-min sustained wind at 10 m height. Generally, geometric characterization of the hurricane eye and its spatial structure allows experts to explore the (thermo-)dynamical behavior of the hurricane (Shapiro and Willoughby, 1982; Willoughby, 1990) and construct a precise parametric representation of the hurricane vortex (Holland, 1980; Mallen et al., 2005; Willoughby et al., 2006; Holland, 2008; Holland et al., 2010; Wood et al., 2013). Moreover, it also improves the understanding of eyewall replacement (Sitkowski et al., 2011). Based on a systematic analysis of the sea surface imprints of 83 hurricanes, Li et al. (2013) proposed that characterization of the shape and size of hurricane eyes can be an essential factor for investigating asymmetric processes and intensity change. These characterizations play a crucial role in exploring and monitoring hurricane dynamics, in particular when hurricanes undergo amplification, shearing, eyewall replacements, diurnal influences, etc.
Due to the spatial limitations of in-situ sensors in extreme conditions, remote sensing technology has been an ideal tool in hurricane research. Satellite data products like visible and infrared (IR) images acquired by MODIS (Moderate Resolution Imaging Spectroradiometer) and AVHRR (Advanced Very High-Resolution Radiometer) can act as a reference in hurricane development analysis and provide hurricane cloud-level horizontal structure and intensity information. However, it is hard for optical satellite sensors to obtain accurate hurricane structure information near the surface, since they rely on cloud measurements. As a result, the estimated surface winds may deviate from the real winds, for instance by "cloud obstruction" (Zhang and Perrie, 2012; Li et al., 2013; Pan et al., 2013). Spaceborne microwave sensors, such as scatterometers and Synthetic Aperture Radars (SARs), have the unique capability to penetrate clouds and can probe large areas of the ocean surface at very high resolution. The sea surface roughness imaged by SAR provides not only the full spectrum of local intensities, but also detailed structure parameters (Mouche et al., 2017). Up to now, SARs are the only instruments capable of producing fine-scale, wide-swath spatial data in nearly all weather conditions, though at low temporal frequency (Gade and Stoffelen, 2019).
Using collocated SAR and scatterometer data (Stoffelen et al., 2019), we may associate SAR hurricane structures with their scatterometer equivalents and hence provide guidance on the interpretation of the abundant scatterometer hurricane acquisitions. Spatial structure functions obtained from scatterometer winds (Vogelzang and Stoffelen, 2012; Vogelzang et al., 2015) and SAR winds both represent the spatial structure of the hurricane, while the spatial resolution of scatterometers is limited (Vogelzang and Stoffelen, 2017). With morphological parameters as ancillary data, one may analyze the correlation between SAR and scatterometer structure functions, and construct an optimal meteorological analysis of the wind field in order to improve the scatterometer product quality. The Rankine vortex model, which relates to morphological parameters, can be an external constraint in numerical simulation.
In earlier attempts to automatically quantify the characteristics of hurricane eyes from SAR images, Du and Vachon (2003) extracted hurricane eyes using the edge detection properties of wavelets. Jin et al. (2014) developed an image processing approach to extract hurricane eyes based on a labelled watershed segmentation method and morphological analysis. Later, a semi-automatic center location method was proposed that combines salient region detection and pattern matching and is capable of locating the hurricane eye center for partially sampled hurricanes (Jin et al., 2017). This technique, however, needs to be manually adjusted when extracting rain bands. An improved and revised approach was proposed in 2019 (Jin et al., 2019). Zheng et al. (2016) carried out hurricane eye detection from SAR and IR images with two newly developed algorithms and one existing wavelet-based algorithm. They found that the hurricane centers extracted from best track (BT) data were closer to the locations extracted from SAR data than to their equivalents from IR data. Lee et al. (2016) proposed a mathematical morphological method to extract hurricane eyes from C-band SAR images. This method shows a high degree of agreement with reference data obtained manually by NOAA (National Oceanic and Atmospheric Administration).
Sea surface roughness is an important parameter in ocean-atmosphere interaction studies (Belmonte and Stoffelen, 2019). The VV or VH polarized normalized radar cross-section (NRCS) values in SAR images depict sea surface roughness and are sensitive to the wind speed (de Kloe et al., 2017). VH NRCS images of hurricanes show remarkable texture characteristics in terms of gray level information and gradient information (e.g., van Zadelhoff et al., 2014).
Texture information is a significant feature for capturing the spatial structure of objects in an image (Zhang et al., 2019), and has been extensively used in image classification and region-of-interest extraction. Texture features are aggregated by grouping pixels with similar statistical characteristics, usually expressed in terms of gray level and gradient. Zheng et al. (2018) developed a Gray Level Co-occurrence Matrix (GLCM)-based method to estimate texture orientations, which contribute to sea surface wind direction retrieval from SAR images. This demonstrates the broad prospects of texture information applications in remote sensing.
The winds within a hurricane eye are weaker than those in the surrounding area. Therefore, the hurricane eye appears in microwave imagery as a dark area of low backscatter, surrounded by a bright area with high wind speeds. This characteristic suggests identifying the hurricane eye from texture information. Based on this premise, we have developed a new automatic method to extract hurricane eyes from C-band Sentinel-1 SAR data. The technique can be summarized as follows.
1) Construct Gray Level-Gradient Co-occurrence Matrices (GLGCMs) from SAR images.
2) Maximize the hurricane eyewall class entropies and classify the hurricane image into four classes.
3) Define the hurricane center location and detect the hurricane eyewall using a minimum variance method.
The results from this new method are compared with those from Du's wavelet analysis method. It will be shown that the new approach has reduced artefacts due to, e.g., varying hurricane size. It also yields better results when the hurricane eyewall is not closed. A simple comparison of the image processing is made between our new approach and Lee's morphological analysis method. Since the texture aggregation procedure automatically denoises high-resolution SAR images, the new method needs fewer image pre-processing steps.
The paper is organized as follows: in Section 2, we introduce the SAR dataset used and the approach to obtain texture information from SAR images. The method to define the hurricane center and the hurricane eyewall is described in Section 3. In Section 4, we evaluate our approach by comparing it with two existing methods. The hurricane center positions obtained with our method are compared with interpolated BT data. Section 5 summarizes our main findings, and a brief overview of future work is given in Section 6, linking this manuscript to future scatterometer studies.
2 Data sets
2.1 Sentinel-1 C-band SAR images
The Copernicus Sentinel-1 mission (S1) provides continuity of C-band SAR operational applications and services. S1 is a constellation of two satellites (S1-A and S1-B). Both carry a C-band (5.405 GHz) SAR providing products with 5 m × 20 m resolution in range and azimuth (Lu et al., 2018). The orbit of S1 (693 km) is lower than that of other C-band radar systems like Envisat (800 km) and RADARSAT-1/2 (798 km). As a result, the velocity bunching effect for S1 SAR is less significant (Shao et al., 2017). The first S1 SAR image of a hurricane was acquired in 2014 and showed the great advantage of S1 SAR for imaging fine-scale wind patterns on the sea surface beneath the clouds (Li, 2015). S1 SAR measurements uniquely capture inner-core characteristics and provide independent estimates of the maximum wind speed and the radius of maximum wind, as verified against airborne Stepped Frequency Microwave Radiometer (SFMR) winds (Mouche et al., 2019).
The dataset consists of 17 C-band S1 VH SAR images of hurricanes, 10 in Interferometric Wide (IW) swath mode (250 km swath) and 7 in Extra Wide (EW) swath mode (400 km swath). These images were acquired during 2016‒2017 and cover parts of the Pacific Ocean and the Atlantic Ocean. The spatial distribution of the selected S1 SAR data is given in Table 1 and Fig. S1. Each SAR scene is shown in Fig. 2. Among the images, there are hurricanes making landfall or moving over the ocean, in either developing or declining stages, located in the center of the image or at its margin, and entirely or only partially covered by the image. It will be shown that our new method can be applied to most images.
More information on the images is given in Table 2. The hurricane categories are determined from the Saffir-Simpson scale.
2.2 Best-track (BT) data
BT data are available for most hurricanes and provide six-hourly information on their location, maximum wind, and central pressure. NOAA releases the BT data for hurricanes occurring in the Atlantic Ocean and the eastern Pacific Ocean. Four hurricanes studied here are located in the western Pacific Ocean and their corresponding BT data (Ying et al., 2014) are released by the China Meteorological Administration (CMA). BT data for Hurricane Donna are missing. The hurricane center locations in the BT data are interpolated to the image acquisition time.
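For illustration, a minimal Python sketch of this interpolation is given below; the 6-hourly fixes in the example are hypothetical placeholders rather than actual BT records.

```python
# Minimal sketch (illustrative only): linearly interpolate 6-hourly best-track
# fixes to the SAR acquisition time. The fixes below are hypothetical.
from datetime import datetime
import numpy as np

def interpolate_bt_center(bt_times, bt_lats, bt_lons, acq_time):
    """Linearly interpolate BT latitude/longitude to the acquisition time."""
    t = np.array([bt.timestamp() for bt in bt_times])
    lat = float(np.interp(acq_time.timestamp(), t, bt_lats))
    lon = float(np.interp(acq_time.timestamp(), t, bt_lons))
    return lat, lon

# Hypothetical 6-hourly fixes bracketing the acquisition time
times = [datetime(2016, 8, 27, 6), datetime(2016, 8, 27, 12)]
center = interpolate_bt_center(times, [24.1, 24.8], [147.3, 146.5],
                               datetime(2016, 8, 27, 8, 42))
```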
3 Method
3.1 GLGCM method
We will now present our new approach, taking the C-band VH SAR image of Hurricane Lionrock as an example (shown in Fig. 3(a)). The image was acquired on 27 August 2016, when the hurricane had a clear eye. To prepare for hurricane eye detection, a sub-image of the observed hurricane eye was extracted, consisting of 120 by 120 pixels and covering 120 km by 120 km. The hurricane eye area is usually assumed to be axisymmetric and surrounded by the eyewall, which contains high gradients and a large range of gray levels. To some extent, a hurricane eyewall in a SAR image has characteristics similar to an edge in an optical picture. As such, hurricane eyewall detection can be simplified to an edge detection problem.
GLGCM employs both gray level and gradient in the image (Zhang et al., 2019) to describe the spatial relationship between each pixel and its neighbors (Zhou et al., 2020). This method has been applied to many research fields, such as scene classification (Chen et al., 2009) and sea surface roughness analysis (Pan et al., 2020).
The GLGCM element p(m,n) is defined as the probability of pixel (i,j) having gray level m in the normalized gray image F(i,j) and gradient n in the normalized gradient image G(i,j). Therefore, GLGCM provides the spatial relationship between each pixel and its adjacent pixels (Wang and Dong, 2009).
Suppose the analyzed image is rectangular with M columns and N rows. Its gray value for pixel (i,j) is given by f(i,j), and a Sobel operator with a 3×3 window-size is employed to obtain the gradient image. The gradient value g(i,j) of pixel (i,j) is calculated as
where i=1,2,3,...,M; j=1,2,3,...,N; and gx and gy indicate the horizontal and vertical gradient values of pixel (i,j). The Sobel operator approximates the partial derivatives of f(i,j) by convolving the digital image with two Sobel masks (defined in Eq. (2)): one, gx, giving the x component of the gradient, and the other, gy, giving the y component (Kanopoulos et al., 1988). The convolution is described by Eq. (3).
To reduce computational complexity, the gradient matrix is normalized as
where g(i,j) is the gradient image; g(i,j)max is the maximum gradient value of the original image. Floor denotes the floor function. L is the normalized maximum gradient value, set to an empirical value L=64.
Similarly, the gray level matrix is normalized as
where f(i,j)max is the maximum gray level value and L’=64 denotes the maximum gray level value after normalization.
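For illustration, a minimal Python sketch of this step is given below. Since Eqs. (1)–(5) are not reproduced here, the exact scaling used in the normalization is an assumption (a floor of the value scaled by the number of levels over the image maximum), chosen only so that F and G take L and L' discrete levels.

```python
# Minimal sketch of the gradient computation and gray/gradient normalization.
# The scaling floor(x * (L - 1) / x_max) is an assumed form of the
# normalization, not necessarily the paper's exact formula.
import numpy as np
from scipy.ndimage import sobel

L_GRAD = 64   # normalized maximum gradient value L
L_GRAY = 64   # normalized maximum gray level value L'

def normalized_gray_and_gradient(f):
    """Return normalized gray image F and gradient image G for image f."""
    f = f.astype(float)
    gx = sobel(f, axis=1)                     # horizontal Sobel gradient g_x
    gy = sobel(f, axis=0)                     # vertical Sobel gradient g_y
    g = np.hypot(gx, gy)                      # gradient magnitude g(i, j)
    G = np.floor(g * (L_GRAD - 1) / g.max()).astype(int)
    F = np.floor(f * (L_GRAY - 1) / f.max()).astype(int)
    return F, G
```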
Now define Hm,n as the number of pixels which satisfy F(i,j)=m and G(i,j)=n. The total number of Hm,n is given by
Then, the probability of entry Hm,n is estimated as:
where m and n denote the gray level and the gradient value.
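For concreteness, a minimal sketch of the co-occurrence counting is given below; it builds on the normalized images F and G from the previous sketch, and the 64×64 matrix size mirrors the values of L and L' adopted in this study.

```python
# Count co-occurrences H(m, n) of gray level m and gradient n at the same
# pixel, then normalize to probabilities p(m, n) = H(m, n) / total.
import numpy as np

def glgcm(F, G, n_gray=64, n_grad=64):
    """Gray Level-Gradient Co-occurrence Matrix of normalized images F, G."""
    H = np.zeros((n_gray, n_grad))
    np.add.at(H, (F.ravel(), G.ravel()), 1.0)   # H[m, n] += 1 for each pixel
    return H / H.sum()
```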
The values adopted for the parameters L and L’ are a trade-off between reducing computational cost and complexity on one hand and preserving the ability to detect subtle image structures on the other.
A general depiction of GLGCM is represented in Fig. 3(b), with the GLGCM of Hurricane Lionrock image shown in Fig. 3(c).
3.2 Two-dimensional entropies
As Fig. 3(b) indicates, a GLGCM of a hurricane can be divided into four quadrants by means of two thresholds t and s, where quadrant A (the green area) represents the hurricane eye area (at low wind speeds, low gradient values) and quadrant C (the blue area) the background area (at high wind speeds, low gradient values). Quadrant B (the red area) indicates the transition region from the calm hurricane eye to the higher gradient wind region, while quadrant D (the yellow area) usually covers the outer margin area with moist convection cells in the spiraling arms of the hurricane, occasionally leading to high gradients near wind downbursts. Quadrant B contains the hurricane eyewall. Then, the hurricane eyewall detection problem can be simplified to searching for the optimal threshold value pair (s,t) that identifies the hurricane eyewall area.
In information theory, entropy measures the uncertainty, or randomness, of a signal. With the automatic entropy threshold selection technique proposed by Brink (1992), one can estimate suitable thresholds for edge detection using two-dimensional entropies.
The thresholds are chosen such that the entropy of quadrant B is maximized. The quadrant is first normalized so that the sum of its elements equals 1. The class B entropy is then defined as
where s is the minimum gradient value in quadrant B, and the probability of Hm,n within quadrant B is calculated as
Since our goal is to estimate the probability of Hm,n within quadrant B, the index k runs from 1 to s, while l runs from t+1 to L.
The optimal threshold value pair (s,t) is determined by maximizing the two-dimensional entropies of quadrant B, using the class D entropy as a constraint. It is worth mentioning that the class D area contains the outer rings of the eyewall. Taking class D into account helps to confine class B pixels to the vicinity of the hurricane eyewall. Hence, we maximize
by varying the (s,t) pair in the full index domains of n and m.
With the (s,t) pair obtained from Eq. (10), the hurricane SAR image can be classified into the four class areas A–D, as shown in Fig. 3(d).
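For illustration, a brute-force sketch of this threshold search is given below. The assignment of index ranges to the quadrants and the objective, taken here as the sum of the class B and class D entropies as in Eq. (10), follow one plausible reading of the description above rather than an exact transcription of Eqs. (8)–(10).

```python
# Exhaustive search for the threshold pair (s, t) that maximizes the sum of
# the class B and class D entropies of the GLGCM P[gray, gradient]. The
# quadrant definitions below are one plausible assignment, not the paper's.
import numpy as np

def class_entropy(block):
    """Shannon entropy of a GLGCM block, normalized to sum to one."""
    total = block.sum()
    if total == 0:
        return -np.inf
    p = block[block > 0] / total
    return float(-np.sum(p * np.log(p)))

def best_thresholds(P):
    n_gray, n_grad = P.shape
    best_val, best_st = -np.inf, (1, 1)
    for t in range(1, n_gray - 1):            # gray level threshold t
        for s in range(1, n_grad - 1):        # gradient threshold s
            h_b = class_entropy(P[:t, s:])    # class B: low gray, high gradient
            h_d = class_entropy(P[t:, s:])    # class D: high gray, high gradient
            if h_b + h_d > best_val:
                best_val, best_st = h_b + h_d, (s, t)
    return best_st
```

With the returned (s,t) pair, each pixel can be assigned to one of the classes A–D by comparing its F and G values with t and s, reproducing the classification of Fig. 3(d).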
Generally, automatic hurricane eye detection can easily be hampered by speckle noise or systematic instrument effects (e.g., scalloping), which may affect target identification and sometimes cover up essential features. The probabilistic patch-based (PPB) filter is considered one of the best image denoising methods (Deledalle et al., 2009). It effectively removes speckle noise while preserving edges and shapes. However, the technique proposed here has the advantage that there is no particular need to denoise the SAR image: the texture aggregation procedure automatically denoises high-resolution SAR images (Hou et al., 2016). As a result, it is largely insensitive to speckle noise. For comparison, the proposed technique is also applied to the same SAR image after denoising with a PPB filter. The classification results are shown in Fig. 3(e). Note that Figs. 3(d) and 3(e) are both smoothed by a Gaussian filter with a 15×15 window size for better representation.
As shown in Figs. 3(d) and 3(e), there is no clear difference in the identified hurricane eye or eyewall. Moreover, speckle is a geophysical signal and, under proper modelling, it can be exploited to infer physical information from the observed scene (Migliaccio et al., 2019). Thus, speckle noise reduction procedures may filter out useful information, which is avoided by the technique proposed here.
3.3 Hurricane center determination
The hurricane eye is a calm area with low wind speed. Usually, the location of the hurricane center is calculated as the geometric center or the average position of the hurricane eye area (Du and Vachon, 2003; Cheng et al., 2012; Li et al., 2013). However, these methods may lead to smearing of the eye center due to vertical wind shear and mesovortices, which act to tilt the hurricane eye. Mesovortices show up as intense transient vorticity features in the image and are regularly observed near the inside edge of the eyewall (Li et al., 2013).
The proposed technique avoids the effects of mesovortices, since it rejects the hurricane eye margin pixels with relatively high wind speed during the entropy maximization process. As shown in Fig. 3(d), within the hurricane eye area, the pixels close to the eyewall are classified into the red class B. Therefore, the new technique yields more reliable hurricane center locations.
As illustrated by Fig. 3(d), the class A area covers the hurricane eye as well as the outer low wind speed area, because of the large sub-image size. The class A group containing the eye is assumed to satisfy the following criteria:
1. It is not close to the class C area.
2. It is surrounded by the class B and D areas.
3. It is the largest contiguous class A group containing the pixel with zero gray level and zero gradient value, referred to as the initial hurricane center.
Based on these criteria, one can extract the hurricane eye area. Here, we construct a gray-gradient weighted matrix W to determine the initial hurricane center, combining the matrices G and F by Eq. (11).
where (i,j) indicates the pixel position.
The pixel with the minimum W(i,j) value is selected as the initial hurricane center (shown as the white X mark in Fig. 3(d)). The hurricane eye center is then defined as the average of the grid coordinates (Xi,Yi) within the hurricane eye by
where the resulting coordinates give the center position (indicated by the white cross mark in Fig. 3(f)), and n is the total number of class A pixels in the group containing the eye. The result is shown in Fig. 3(f).
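A minimal sketch of the center determination is given below. Since the exact weighting of Eq. (11) is not reproduced here, an equally weighted sum of the normalized gray and gradient images is used as a stand-in, and class_a_mask is assumed to be the boolean mask of class A pixels obtained from the classification step.

```python
# Sketch of the center determination. W = F + G is a stand-in for Eq. (11);
# the paper's actual weighting may differ. `class_a_mask` is the boolean mask
# of class A pixels obtained from the (s, t) classification.
import numpy as np
from scipy.ndimage import label

def hurricane_center(F, G, class_a_mask):
    W = F.astype(float) + G.astype(float)              # stand-in for Eq. (11)
    i0, j0 = np.unravel_index(np.argmin(W), W.shape)   # initial center pixel

    groups, _ = label(class_a_mask)                    # connected class A groups
    eye = groups[i0, j0]
    if eye == 0:                                       # initial center not in a
        return float(i0), float(j0)                    # class A group: fall back
    ii, jj = np.nonzero(groups == eye)                 # pixels of the eye group
    return ii.mean(), jj.mean()                        # Eq. (12): mean position
```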
3.4 Eyewall points search
Projecting the extracted class B area onto the original image (shown in Fig. 4), we can see that, despite some erroneous pixels, the extracted class B area delineates the hurricane eyewall well and shows good agreement with the visual result. This implies that we can determine the hurricane eyewall from class B pixels.
The hurricane eyewall contains the strongest wind speeds (highest gray level values) and the highest wind gradients in all radial directions (Du et al., 2003; Zhang et al., 2014). Assuming that the hurricane eyewall is a closed curve and that the pixels along the hurricane eyewall have similar VH NRCS values, the hurricane eyewall pixels are found by minimizing the variance of the VH NRCS values. The pixel Pi,j with the maximum w(i,j) value is selected as the eyewall starting point (the blue point in Fig. 4). The starting point is assumed to always lie on the eyewall, which is verified by Fig. 5: the distances between the starting points and the hurricane centers (determined at the end of this section) are generally less than 30 km, so this assumption can be deemed reliable. Pi,j is tagged as the endpoint Ps. The eyewall pixels are then selected from the neighboring points of the endpoint Ps.
Starting from the hurricane eye center, several methods may be employed to find the eyewall and its distortion. For example, once the hurricane center is known, one can determine the wind speed distribution as a function of distance to the center and then the mean distance of the eyewall to the eye. The asymmetries can then be described by seeking the quadrants with a maximum deviation of the highest median wind distance from this mean eyewall. A disadvantage of this method is that it generally does not yield a smooth eyewall. In this study, we aim for smooth hurricane eyewalls that preserve the textural features by searching for the points that satisfy:
where i' and j' are the row and column indices of the neighboring points.
Among all neighboring points, the point P’ that meets Eq. (14) will be added to the eyewall point set and tagged as new Ps.
where Arg{min D} denotes selecting the point P' that minimizes the variance D of the VH NRCS values of the temporary hurricane eyewall point set extended with the candidate point. To some extent, this process is equivalent to comparing the textural features in various directions and selecting the direction that best matches the current eyewall.
The search is iterated, with each new endpoint Ps serving as the next starting point, until the two endpoints meet. The eyewall pixel search is performed clockwise and anti-clockwise simultaneously (shown in Fig. 4). For these two processes, the azimuthal angle (initialized at 0° for the blue point in Fig. 4) ranges from 0° to 180° (anti-clockwise direction) or from 0° to –180° (clockwise direction), respectively. As the search proceeds, the absolute value of the azimuthal angle of the eyewall pixels continually increases, so that the resulting eyewall covers all angles. The red points in Fig. 6 represent the eyewall of Hurricane Lionrock. The reference elliptical hurricane eyewall (shown as the blue ellipse in Fig. 6) is fitted to the searched locations using an ellipse-specific method (Fitzgibbon et al., 1996). Other morphological parameters, such as the major/minor axis length and ellipticity, can be derived from the reference elliptical eyewall. Finally, the orange mark in Fig. 6 gives the hurricane center interpolated from the BT data. The hurricane center found with our new method is close to the BT interpolated center. A further investigation is presented in Section 4.
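A simplified sketch of the minimum-variance eyewall search is given below. The simultaneous clockwise/anti-clockwise searches and the azimuthal-angle bookkeeping are reduced to a single walk that stops when it returns next to the starting pixel or after a fixed number of steps; sigma0 denotes the VH NRCS sub-image and start the starting pixel, both assumed available from the previous steps.

```python
# Greedy minimum-variance walk along the eyewall (Eqs. (13)-(14), simplified).
# A single walk is traced here; the paper performs clockwise and anti-clockwise
# searches simultaneously and tracks the azimuthal angle.
import numpy as np

def trace_eyewall(sigma0, start, max_steps=2000):
    wall, values = [start], [float(sigma0[start])]   # eyewall points and NRCS
    visited = {start}
    current = start
    for step in range(max_steps):
        best, best_var = None, np.inf
        for di in (-1, 0, 1):                        # 8-connected neighbours
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                cand = (current[0] + di, current[1] + dj)
                inside = (0 <= cand[0] < sigma0.shape[0]
                          and 0 <= cand[1] < sigma0.shape[1])
                if cand in visited or not inside:
                    continue
                var = np.var(values + [float(sigma0[cand])])  # Eq. (14)
                if var < best_var:
                    best, best_var = cand, var
        if best is None:
            break                                    # no unvisited neighbour left
        wall.append(best)
        values.append(float(sigma0[best]))
        visited.add(best)
        current = best
        if step > 10 and max(abs(current[0] - start[0]),
                             abs(current[1] - start[1])) <= 1:
            break                                    # walk closed near the start
    return np.array(wall)
```

The returned point set could then be fitted with an ellipse, for instance with a direct least-squares implementation of Fitzgibbon's method, to derive the major/minor axis lengths and ellipticity.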
4 Results and discussion
The results of hurricane eyewall detection for the other 16 SAR hurricane images are shown in Fig. 7. Each hurricane is depicted twice, once without and once with retrieved parameters. Figure 7 shows a close match between the processed and visually perceived results. The proposed method yields satisfactory results, preserving the inner core structure, even when the hurricane has an irregular hurricane eye shape (Figs. 7(a), 7(e) and 7(k)) or when the eyewall is not closed (Figs. 7(a) and 7(d)). Therefore, it can be applied to most SAR images showing hurricanes.
The differences in location of the hurricane center between the BT interpolated data and the SAR extracted values are further investigated to evaluate the reliability of the estimated morphological parameters. BT interpolated hurricane centers and the SAR identified centers are all marked in Fig. 7.
Figure 8(a) shows the differences in center locations, using the BT interpolated centers as the origin. The location differences are small, with a mean of 12.06 km and a standard deviation of 6.12 km. This difference is acceptable compared to the size of a hurricane eye, which ranges from 10 to 50 km. Note that most points in Fig. 8(a) are located on the lower side. Figure 8(b) shows the number of points in directional intervals of 45°. There are clearly more points in the interval from 135° to 180° than in the other intervals.
The coordinate system in Fig. 8 ignores the effect of the hurricane moving direction. Figure 9 is similar to Fig. 8, but now with respect to the hurricane moving direction (denoted as the best track direction). Distributions of the SAR determined center with respect to the BT center are shown in Fig. 9(a). The points in Fig. 9(a) are distributed in a similar way as those in Fig. 8(a). Figure 9(b) shows that there are more dots between 0° and 270° than in the other sectors.
The location difference may reach 20 km or more for some cases where the hurricane is asymmetric, such as Hurricane Lester (Figs. 7(a) and 7(d)), Hurricane Hermine (Fig. 7(e)) and Hurricane Megi (Fig. 7(n)). The location difference for Hurricane Megi may be caused by interpolation errors, as the BT data are interpolated linearly to the image acquisition time, assuming that the hurricanes move in a straight line with constant velocity. This may not be correct for all real hurricanes. As a result, BT centers may be located on the hurricane eyewall area (Figs. 7(a)‒7(e), 7(h), and 7(n), Hurricanes Lester, Hermine, Maria, and Megi, respectively), which is certainly not correct. The hurricane centers obtained from our method, however, look more reliable. On the other hand, the hurricanes in these cases were rapidly growing or weakening during acquisition. This makes it difficult to determine a correct hurricane eye center due to the dynamical changes in the inner area. Figures 7(a), 7(d), and 7(e) all show hurricanes with unstable inner structures. The eyewalls are not closed for the two images of hurricane Lester. Hurricane Hermine was making landfall when captured by SAR, and the image shows two tightly linked hurricane eyes, which may be caused by land effects. Moreover, the size of the hurricane eye in SAR images of Figs. 7(a), 7(d) and 7(e) is larger than those in the other images, which makes accurate positioning of the hurricane center more complicated.
In the rest of this section, the calculated morphological parameters are compared with those obtained from Du’s wavelet analysis method to evaluate the properties of the proposed method. The analysis is made based on two hurricane scenarios: closed hurricane eyewall cases and open hurricane eyewall cases. The flowchart of the study is shown in Fig. 10. A comparison with Lee’s method only concerns image processing needs and programming complexity and is not shown in the figure.
4.1 Closed hurricane eyewall cases
Figure 11 shows the eye of Hurricane Megi as identified with Du's wavelet analysis method (left-hand panel) and our GLGCM method (right-hand panel). Both methods are well capable of detecting hurricane eyes when the eyewall is closed. In Du's method, the threshold value is adjusted to reduce artefacts by multiplying the original threshold value (the average of the maximum radiometric gradient pixels) by a factor of 0.9.
Table 3 shows the morphological parameters of the other nine hurricane samples calculated by Du's method and the GLGCM method. In this study, we found that the optimal factor above is usually not constant and needs to be manually adjusted for each individual image. As such, to guarantee the objectivity of the morphological parameters, no such factor is applied here. The morphological parameters estimated with both methods agree quite well, so both methods are suitable for hurricanes with closed eyewalls.
4.2 Open hurricane eyewall cases
In Du's method, it is assumed that the pixels along the hurricane eyewall have similar VH NRCS values and that the high wind speed eyewall entirely surrounds the low wind speed hurricane eye. These premises can be satisfied when the hurricane is mature. However, there may be a large variation in the VH NRCS values of the pixels along the hurricane eyewall during the hurricane life cycle. This occurs when the hurricane is in the developing or declining phase, or during landfall. In these cases, the eyewall is severely distorted, which results in an asymmetric inner structure of the hurricane. Under these circumstances, Du's method can cause a "leak" of the hurricane eye, i.e., the detected hurricane eye may extend outside the eyewall.
As shown in Fig. 12, the eyewall of Hurricane Gaston is not closed; the wind speed to the north of the hurricane eye is remarkably larger than that to the south. Comparing the hurricane eyes determined with the two methods (Fig. 12), it can be seen that Du's method fails, while the newly proposed method returns a good result. In Du's method, the threshold is adjusted as in Section 4.1.
Moreover, there is an unstated requirement in Du's method that the sub-images should not cover the outer area, because otherwise both the hurricane eye and the outer area will be returned. The spatial resolution of the RADARSAT-1 data used in Du's research is 50 m, while that of the S1 data used here is 1000 m. The lower spatial resolution makes it difficult to obtain sub-scenes of the observed hurricane eye that exclude the outer area. On the other hand, the hurricane size varies from hurricane to hurricane and also with the stage in the hurricane life cycle. This means that for Du's method, the sub-image size needs to be manually adjusted to obtain optimal results. This process can be laborious.
4.3 Discussion
The results presented here illustrate the advantages of the new technique. Compared to Du's method, it not only yields equally satisfactory results when the hurricane eyewall is closed, but also better results when the hurricane has an asymmetric structure. The wavelet analysis method uses the average value of the maximum gradient points as the threshold for hurricane eye determination. However, it determines the hurricane eye without taking texture information into account, ignoring the variation of the VH NRCS values along the eyewall, and can thus lead to artefacts. Figure 12 illustrates that the hurricane is often asymmetric and that the VH NRCS along the eyewall can vary substantially. Besides, the wavelet analysis method may smear the defined hurricane center positions, since the determined hurricane eye extents are generally larger, which may exaggerate the effects of mesovortices. As shown in Table 3, the major and minor axis lengths estimated with Du's method are generally larger than those estimated by the proposed technique. Moreover, as clarified in Section 4.2, Du's method is sensitive to the size of the sub-image: the sub-image size usually needs to be manually modified to avoid covering the outer low wind speed area. For low spatial resolution data, because of the limited number of available pixels, it is difficult to determine an appropriate sub-image. Our method is less sensitive to these factors and can extract hurricane eyewalls automatically.
Finally, we made a simple comparison between Lee's morphological analysis method and ours, mainly regarding the image processing steps and the programming complexity. Both methods return good results automatically. However, Lee's method requires a series of SAR image pre-processing steps, such as speckle reduction and image enhancement, which are not needed in our method. Besides, GLGCM is a mature technique, and GLGCM construction and class entropy maximization can be coded more easily than the morphological skeleton computation, morphological reconstruction, and skeleton pruning in Lee's method. In short, the main advantage of our technique compared to Lee's method is its elegance and low programming complexity.
5 Conclusions
In this paper, we have developed a new method for automatically extracting the hurricane eyes from C-band Sentinel-1 SAR data by texture analysis. The texture information is aggregated through a Gray Level-Gradient Co-occurrence Matrix (GLGCM). The hurricane center and the eyewall can be identified by maximizing the eyewall class entropy. The retrieved texture features of the hurricane eyewall are in line with the visual results.
As compared to existing methods, we draw the following conclusions:
1) The proposed GLGCM method yields better results when the hurricane eyewall is not closed.
2) The GLGCM method is not sensitive to hurricane size and can be applied to most image cases.
3) Fewer pre-processing steps are needed for the GLGCM method. It has a lower programming complexity and is thus easier to implement.
6 Future work
The proposed technique generates fewer artefacts when dealing with low spatial resolution images than other methods. This property suggests applying the technique to scatterometer hurricane data. However, because of the much lower spatial resolution of scatterometer data, the hurricane eyewalls are strongly blurred, and it can be hard to extract the hurricane eyewall directly with this method. Introducing additional information into the GLGCM construction process or employing optimization techniques may help to resolve this problem; this will be addressed in future work.
An alternative approach is to model hurricanes as Rankine vortices and fit vortex parameters to SAR data using the techniques developed in this study. Scatterometer vector winds may be fitted to Rankine vortex parameters, providing a scaling between SAR and scatterometer data. This may be useful to improve scatterometer hurricane wind products or their user guidance.
References
Belmonte Rivas M, Stoffelen A (2019). Characterizing ERA-Interim and ERA5 surface wind biases using ASCAT. Ocean Sci, 15(3): 831–852
Brink A D (1992). Thresholding of digital images using two-dimensional entropies. Pattern Recognit, 25(8): 803–808
Cheng Y, Huang S, Liu A K, Ho C, Kuo N (2012). Observation of typhoon eyes on the sea surface using multi-sensors. Remote Sens Environ, 123(6): 434–442
Chen S, Wu C, Chen D, Tan W (2009). Scene classification based on gray level-gradient co-occurrence matrix in the neighborhood of interest points. IEEE
de Kloe J, Stoffelen A, Verhoef A (2017). Improved use of scatterometer measurements by using stress-equivalent reference winds. IEEE J Sel Top Appl Earth Obs Remote Sens, 10(5): 2340–2347
Deledalle C A, Denis L, Tupin F (2009). Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans Image Process, 18(12): 2661–2672
Du Y, Vachon P W (2003). Characterization of hurricane eyes in RADARSAT-1 images with wavelet analysis. Can J Remote Sens, 29: 491–498
Du Y, Vachon P W, van der Sanden J J (2003). Satellite image fusion with multiscale wavelet analysis for marine applications: preserving spatial information and minimizing artifacts (PSIMA). Can J Remote Sens, 29(1): 14–23
Fitzgibbon A W, Pilu M, Fisher R B (1996). Direct least squares fitting of ellipses. In: Proceedings of the 13th International Conference on Pattern Recognition
Gade M, Stoffelen A (2019). An introduction to microwave remote sensing of the Asian Seas. In: Barale V, Gade M, eds. Remote Sensing of the Asian Seas. Cham: Springer
Holland G J (2008). A revised hurricane pressure–wind model. Mon Weather Rev, 136(9): 3432–3445
Holland G J (1980). An analytic model of the wind and pressure profiles in hurricanes. Mon Weather Rev, 108(8): 1212–1218
Holland G J, Belanger J I, Fritz A (2010). A revised model for radial profiles of hurricane winds. Mon Weather Rev, 138(12): 4393–4401
Hou B, Ren B, Ju G, Li H, Jiao L, Zhao J (2016). SAR image classification via hierarchical sparse representation and multisize patch features. IEEE Geosci Remote Sens Lett, 13(1): 33–37
Jin S, Li X, Yang X, Zhang J A, Shen D (2019). Identification of tropical cyclone centers in SAR imagery based on template matching and particle swarm optimization algorithms. IEEE Trans Geosci Remote Sens, 57(1): 598–608
Jin S, Wang S, Li X (2014). Typhoon eye extraction with an automatic SAR image segmentation method. Int J Remote Sens, 35(11–12): 3978–3993
Jin S, Wang S, Li X, Jiao L, Zhang J A, Shen D (2017). A salient region detection and pattern matching-based algorithm for center detection of a partially covered tropical cyclone in a SAR image. IEEE Trans Geosci Remote Sens, 55(1): 280–291
Kanopoulos N, Vasanthavada N, Baker R L (1988). Design of an image edge detection filter using the Sobel operator. IEEE J Solid-State Circuits, 23(2): 358–367
Kimball S K, Mulekar M S (2004). A 15-year climatology of North Atlantic tropical cyclones. Part I: size parameters. J Clim, 17(18): 3555–3575
Lee I K, Shamsoddini A, Li X, Trinder J C, Li Z (2016). Extracting hurricane eye morphology from spaceborne SAR images using morphological analysis. ISPRS J Photogramm Remote Sens, 117: 115–125
Li X (2015). The first Sentinel-1 SAR image of a typhoon. Acta Oceanol Sin, 34(1): 1–2
Li X, Zhang J A, Yang X, Pichel W G, DeMaria M, Long D, Li Z (2013). Tropical cyclone morphology from spaceborne synthetic aperture radar. Bull Am Meteorol Soc, 94(2): 215–230
Liu K S, Chan J C L (1999). Size of tropical cyclones as inferred from ERS-1 and ERS-2 data. Mon Weather Rev, 127(12): 2992–3001
Lu L, Tao Y, Di L (2018). Object-based plastic-mulched landcover extraction using integrated Sentinel-1 and Sentinel-2 data. Remote Sens, 10(11): 1820
Mallen K J, Montgomery M T, Wang B (2005). Reexamining the near-core radial structure of the tropical cyclone primary circulation: implications for vortex resiliency. J Atmos Sci, 62(2): 408–425
Migliaccio M, Huang L, Buono A (2019). SAR speckle dependence on ocean surface wind field. IEEE Trans Geosci Remote Sens, 57(8): 5447–5455
Mouche A, Chapron B, Knaff J, Zhao Y, Zhang B, Combot C (2019). Copolarized and cross-polarized SAR measurements for high-resolution description of major hurricane wind structures: application to Irma category 5 hurricane. J Geophys Res Oceans, 124(6): 3905–3922
Mouche A A, Chapron B, Zhang B, Husson R (2017). Combined co- and cross-polarized SAR measurements under extreme wind conditions. IEEE Trans Geosci Remote Sens, 55(12): 6746–6755
Pan H, Gao P, Zhou H, Ma R, Yang J, Zhang X (2020). Roughness analysis of sea surface from visible images by texture. IEEE Access, 8: 46448–46458
Pan Y, Liu A, He S, Yang J, He M (2013). Comparison of typhoon locations over ocean surface observed by various satellite sensors. Remote Sens, 5(7): 3172–3189
Shao W, Li X, Hwang P, Zhang B, Yang X (2017). Bridging the gap between cyclone wind and wave by C-band SAR measurements. J Geophys Res Oceans, 122(8): 6714–6724
Shapiro L J, Willoughby H E (1982). The response of balanced hurricanes to local sources of heat and momentum. J Atmos Sci, 39: 378–394
Shen W (2006). Does the size of hurricane eye matter with its intensity? Geophys Res Lett, 33(18): L18813
Sitkowski M, Kossin J P, Rozoff C M (2011). Intensity and structure changes during hurricane eyewall replacement cycles. Mon Weather Rev, 139(12): 3829–3847
Stoffelen A, Kumar R, Zou J, Karaev V, Chang P S, Rodriguez E (2019). Ocean surface vector wind observations. In: Barale V, Gade M, eds. Remote Sensing of the Asian Seas. Cham: Springer
van Zadelhoff G J, Stoffelen A, Vachon P W, Wolfe J, Horstmann J, Belmonte Rivas M (2014). Retrieving hurricane wind speeds using cross-polarization C-band measurements. Atmos Meas Tech, 7(2): 437–449
Velden C, Harper B, Wells F, Beven J L II, Zehr R, Olander T, Mayfield M, Guard C, Lander M, Edson R, Avila L, Burton A, Turk M, Kikuchi A, Christian A, Caroff P, McCrone P (2006). The Dvorak tropical cyclone intensity estimation technique: a satellite-based method that has endured for over 30 years. Bull Am Meteorol Soc, 87(9): 1195–1210
Vogelzang J, Stoffelen A (2017). ASCAT ultrahigh-resolution wind products on optimized grids. IEEE J Sel Top Appl Earth Obs Remote Sens, 10(5): 2332–2339
Vogelzang J, King G P, Stoffelen A (2015). Spatial variances of wind fields and their relation to second-order structure functions and spectra. J Geophys Res Oceans, 120(2): 1048–1064
Vogelzang J, Stoffelen A (2012). NWP model error structure functions obtained from scatterometer winds. IEEE Trans Geosci Remote Sens, 50(7): 2525–2533
Wang H, Dong F (2009). Image features extraction of gas/liquid two-phase flow in horizontal pipeline by GLCM and GLGCM. IEEE
Willoughby H E (1990). Temporal changes of the primary circulation in tropical cyclones. J Atmos Sci, 47: 242–264
Willoughby H E, Darling R W R, Rahn M E (2006). Parametric representation of the primary hurricane vortex. Part II: a new family of sectionally continuous profiles. Mon Weather Rev, 134(4): 1102–1120
Wood V T, White L W, Willoughby H E, Jorgensen D P (2013). A new parametric tropical cyclone tangential wind profile model. Mon Weather Rev, 141(6): 1884–1909
Ying M, Zhang W, Yu H, Lu X, Feng J, Fan Y, Zhu Y, Chen D (2014). An overview of the China Meteorological Administration tropical cyclone database. J Atmos Ocean Technol, 31(2): 287–301
Zhang P, Chen L, Li Z, Xing J, Xing X, Yuan Z (2019). Automatic extraction of water and shadow from SAR images based on a multi-resolution dense encoder and decoder network. Sensors (Basel), 19(16): 3576
Zhang B, Perrie W (2012). Cross-polarized synthetic aperture radar: a new potential measurement technique for hurricanes. Bull Am Meteorol Soc, 93(4): 531–541
Zhang G, Zhang B, Perrie W, Xu Q, He Y (2014). A hurricane tangential wind profile estimation method for C-band cross-polarization SAR. IEEE Trans Geosci Remote Sens, 52(11): 7186–7194
Zheng G, Yang J, Liu A K, Li X, Pichel W G, He S (2016). Comparison of typhoon centers from SAR and IR images and those from best track data sets. IEEE Trans Geosci Remote Sens, 54(2): 1000–1012
Zheng G, Li X, Zhou L, Yang J, Ren L, Chen P, Zhang H, Lou X (2018). Development of a gray-level co-occurrence matrix-based texture orientation estimation method and its application in sea surface wind direction retrieval from SAR imagery. IEEE Trans Geosci Remote Sens, 56(9): 5244–5260
Zhou L, Lin T, Zhou X, Gao S, Wu Z, Zhang C (2020). Detection of winding faults using image features and binary tree support vector machine for autotransformer. IEEE Trans Transport Electrific, 6(2): 625–634