Image analyses for video-based remote structure vibration monitoring system

Yang YANG , Xiong (Bill) YU

Front. Struct. Civ. Eng. ›› 2016, Vol. 10 ›› Issue (1): 12-21. DOI: 10.1007/s11709-016-0313-6

RESEARCH ARTICLE

Abstract

Video-based vibration measurement is a cost-effective way to remotely monitor the health conditions of transportation and other civil structures, especially where restricted accessibility does not allow the installation of conventional monitoring devices. In addition, a video-based system provides global measurement. The technical basis of a video-based remote vibration measurement system is digital image analysis: comparison of successive images allows the field of motion to be accurately delineated. Such information is important for understanding structural behaviors, including the motion and strain distributions. This paper presents a system and analyses to monitor the vibration velocity and displacement fields. The performance is demonstrated on a model building testbed. Three different methods (i.e., the frame difference method, particle image velocimetry, and the optical flow method) are utilized to analyze the image sequences and extract the features of motion. The performance is validated against accelerometer data. The results indicate that all three methods can estimate the velocity field of the model building, although the results can be affected by factors such as background noise and environmental interference. The optical flow method achieved the best performance among the three methods studied. With further refinement of the system hardware and image processing software, the system can be developed into a remote video-based monitoring system for structural health monitoring of transportation infrastructure to assist in the diagnosis of its health conditions.

Keywords

structure health monitoring / velocity estimation / frame difference / PIV / optical-flow method

Cite this article

Yang YANG, Xiong (Bill) YU. Image analyses for video-based remote structure vibration monitoring system. Front. Struct. Civ. Eng., 2016, 10(1): 12-21 DOI:10.1007/s11709-016-0313-6


Introduction

Health monitoring of structures is important to ensure safety and to assess performance. It also provides data to support preservation and maintenance decisions. Many different sensing principles have been investigated for structural health monitoring (SHM) of buildings and transportation infrastructure. They are primarily based on contact-type sensors such as accelerometers, strain gauges, fiber optic sensors, piezoelectric sensors and actuators, impedance-based sensors, ultrasonic (Lamb) wave sensors, and physical acoustic sensors [1]. The limitations of these technologies include that they only provide localized information, require a significant number of sensors to cover a broad area of the structure, and require access wires for power or data transmission [1].

To cope with these problems, growing interest has been drawn to developing non-contact displacement measurement systems, such as global positioning systems (GPS), laser Doppler vibrometers, and vision-based systems. Currently, most published GPS systems can reach a resolution of at least ±1 cm in the horizontal direction and ±2 cm in the vertical direction. Using GPS to measure the displacement of flexible civil structures has advantages in many aspects, such as dynamic measurement up to 10 Hz, real-time performance, and a remote, non-intrusive mode of monitoring. However, the GPS method is uneconomical and is very sensitive to electromagnetic noise, environmental interference, and weather conditions. Laser Doppler vibrometers, on the other hand, can provide similar or even better accuracy than GPS systems, but they are also not economical, because measuring the dynamic response of even a single point requires expensive devices. Besides, the laser intensity may become dangerously strong as the measurement distance increases beyond 50 m. With the development of optics and computer technologies, video-based monitoring systems are promising for overcoming some of these limitations. As a global measurement, such a system can map the strain and deformation of the structure remotely. By using zoom lenses, both global-scale and local-scale measurements can be accomplished. Therefore, it has the potential to be a cost-effective, reliable, and non-contact method for field applications. These systems usually include digital video cameras, computers, and pre-defined target panels, all of which are quite affordable nowadays. For example, a typical commercially available consumer video camera has a frame rate of 30 frames per second (fps), which, by the Nyquist theorem, achieves a measurement range of vibration frequencies up to 15 Hz. This is sufficient for large-scale structures with very low natural frequencies.
One of the crucial components of an accurate video-based SHM system is the image processing algorithm that determines the motion from a sequence of images. Different image analysis methods have been proposed. However, there has not been a systematic evaluation of the performance of different methods for SHM purposes.

This paper compares the performance of three types of image analysis algorithms for estimating motion, using a model building as the testbed. The evaluation covers vibration velocity and displacement measurement from video sequences using the frame difference technique, with the results compared against accelerometer data. The results show that it is possible to monitor the vibration velocity and displacement of the structure using digital image analysis. Two other image processing technologies, i.e., particle image velocimetry and the optical flow method, are also evaluated using the same captured images. The advantages and limitations of each method are compared. The optical flow method is found to provide the most reliable results for the field of motion. With the support of a robust and accurate image processing algorithm, a cost-effective video-based remote monitoring system can be developed for monitoring and diagnosing structural conditions.

Experiment design

Accelerometer and calibration

MEMS accelerometers are used as the baseline measurement to validate the performance of the video-based vibration monitoring system. Four ADXL337 analog triaxial accelerometers are used for this purpose. The acceleration range of the sensor is ±3g, with a sensitivity of 300 mV/g. An in-house fabricated PCB board is used to accommodate the accelerometers.

The first step of the experiment is to calibrate the accelerometers. Following the calibration guide provided by Bragge et al. [2], each sensor is calibrated statically by placing each face of the triaxial accelerometer perpendicular to the gravitational acceleration in turn. The relationship between the acceleration and the output voltage is

Acceleration = (output voltage − offset) / scale.

Following the calibration routine, the calibration constants for each of the four sensors are obtained; they are listed in Table 1.
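The conversion from raw output voltage to acceleration can be sketched as below. The offset and scale values are illustrative placeholders, not the paper's Table 1 constants (a zero-g bias near 1.5 V and a 0.3 V/g sensitivity are plausible for an ADXL337-type axis):

```python
def calibrate(voltages, offset, scale):
    """Convert raw accelerometer output voltages (V) to acceleration (g)
    using Acceleration = (output voltage - offset) / scale."""
    return [(v - offset) / scale for v in voltages]

# Hypothetical constants (NOT the paper's calibration results):
# ~1.5 V at 0 g, 0.3 V/g sensitivity.
accel_g = calibrate([1.5, 1.8, 1.2], offset=1.5, scale=0.3)
# accel_g is approximately [0.0, 1.0, -1.0]
```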

Model building test bed and experiment setup

A 10-story steel model building is used as the testbed. Each story is 5.2 inches high, and the thickness of the inner and outer walls is 0.078 inches. The floors are 0.2 inches thick, while the base is 0.7 inches thick. The spacing between the walls is 10 inches, and the width of the frame is 6 inches (Fig. 1).

Four wired analog accelerometers were mounted on the side of the model building from the seventh floor to the tenth floor. They were mounted such that their axes were consistent with the vibration direction of the model building; thus, only x-direction (parallel to the vibration direction) acceleration data are acquired. A National Instruments NI6221 DAQ device is used to acquire the acceleration data at a sampling rate of 300 Hz. The video is captured by a video camera at a fixed distance in front of the model building. The system captures the full picture of the structure with an image resolution of 1920×1080 at a frame rate of 30 Hz.

The model building was excited by hitting the top of the structure on its side with a rubber hammer. The accelerometer data collection is synchronized with the digital video camera. Once the system is synced and the sensors are ready, sampling can be triggered manually or automatically by a preset threshold. In the experiments herein, 30 Hz and 150 samples (corresponding to 5 s) per sensor was chosen as the default configuration. Signal acquisition, signal processing, and image processing algorithms were programmed in the Matlab environment.

Experimental data and analysis

Signal processing for accelerometer

Figure 2 shows the time history and spectrum of the acceleration of the top sensor (accelerometer 1) after an impulse was applied to the building. It clearly shows the 1st and 2nd natural frequencies, which match the results of the computational modal analysis.

To compare with the results of the video-based monitoring system, the accelerometer data are first integrated to determine the velocity and displacement. Although time integration seems straightforward, the actual implementation can be challenging: during integration, the low-frequency content of the waveform is strongly amplified while the high-frequency content is reduced. Consider an acceleration signal that contains a drift component:

A(t) = a(t) + a₀,

with initial conditions v₀ for velocity and s₀ for position.

Velocity can be obtained by integrating the acceleration:

V(t) = ∫₀ᵗ A(η) dη + v₀ = ∫₀ᵗ a(η) dη + ∫₀ᵗ a₀ dη + v₀ = v(t) + a₀t + v₀.

The velocity signal V(t) is composed of three parts. The first part, v(t), is a zero-mean, time-varying signal that is bounded. The second part, a₀t, is a ramp (also called a first-order trend term) with slope a₀ caused by the accelerometer drift. The third part is the initial velocity.

The displacement can be obtained by integrating V(t):

S(t) = ∫₀ᵗ V(τ) dτ + s₀ = ∫₀ᵗ ( ∫₀^τ a(η) dη + a₀τ + v₀ ) dτ + s₀ = ∫₀ᵗ ∫₀^τ a(η) dη dτ + ½a₀t² + v₀t + s₀.

The displacement also contains unwanted ramp and constant terms added to a zero-mean varying component. In particular, the ramp consists of a first-order trend term v₀t and a second-order trend term ½a₀t².

Therefore, it is necessary to remove the DC offset and trend terms before integration to prevent drift from affecting the integration results. Figure 3 shows the signal processing chain starting from the raw acceleration data.

To remove the DC bias in Eq. (4), the following algorithm [3] is applied:

x′(i) = x(i) − (x(1) + x(2) + ⋯ + x(n)) / n,

where x(i) can be the raw acceleration, velocity, or displacement data, and n is the number of samples.

A trend term in a time series is a slow, gradual change in some property of the series over the whole interval under investigation. Many alternative methods are available for detrending. In this study we adopt the least squares method, which is the most widely used for random and stationary signals. It can eliminate both linear baseline drift and higher-order polynomial trends.
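The mean removal and linear least-squares detrending steps can be sketched as follows; this is a minimal illustration of the procedure, not the paper's Matlab implementation, and it handles only the linear case:

```python
import math

def remove_mean_and_linear_trend(x):
    """Remove the DC offset and the best-fit (least squares) linear trend
    from a uniformly sampled signal, as required before integration."""
    n = len(x)
    t = list(range(n))
    t_mean = sum(t) / n
    x_mean = sum(x) / n
    # Least-squares slope and intercept of the line x = b*t + c
    b = sum((ti - t_mean) * (xi - x_mean) for ti, xi in zip(t, x)) / \
        sum((ti - t_mean) ** 2 for ti in t)
    c = x_mean - b * t_mean
    return [xi - (b * ti + c) for ti, xi in zip(t, x)]

# A drifting signal: zero-mean oscillation plus offset 0.2 and ramp 0.05*t
sig = [math.sin(0.3 * i) + 0.2 + 0.05 * i for i in range(100)]
detrended = remove_mean_and_linear_trend(sig)
# The offset and ramp are removed; the oscillation is preserved.
```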

Since the DAQ device samples the accelerometer output at a fixed sampling rate, the input to each integration is a discrete-time series. Simpson's rule [4] was adopted for the integration:

y(t) = y(t−1) + Δt × [x(t−1) + 4x(t) + x(t+1)] / 6.

When x(t) in Eq. (6) is the acceleration, y(t) is the velocity; when x(t) is the velocity, y(t) is the displacement. Figures 4 and 5 show the time history and spectrum of the velocity and displacement of accelerometer 1 (which is installed at the top of the model building).
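The cumulative integration in Eq. (6) can be sketched as below. The zero initial condition and the trapezoidal fallback at the final sample are assumptions for illustration, not details stated in the paper:

```python
def integrate_simpson(x, dt):
    """Cumulatively integrate a uniformly sampled signal with the
    Simpson-type update of Eq. (6):
        y(t) = y(t-1) + dt*(x(t-1) + 4*x(t) + x(t+1))/6,
    assuming y(0) = 0; the last sample (no x(t+1) available) falls
    back to a trapezoidal step."""
    y = [0.0]
    for t in range(1, len(x)):
        if t + 1 < len(x):
            y.append(y[-1] + dt * (x[t - 1] + 4 * x[t] + x[t + 1]) / 6.0)
        else:
            y.append(y[-1] + dt * (x[t - 1] + x[t]) / 2.0)
    return y

# Constant acceleration of 1 g sampled at 10 Hz for 1 s gives a final
# velocity of ~1 g*s, as expected.
vel = integrate_simpson([1.0] * 11, dt=0.1)
```

Applying the function twice (acceleration → velocity → displacement) mirrors the double-integration chain of Fig. 3, with detrending applied between the two passes.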

Digital image processing

The video is captured using the system described in Fig. 1. A 5 s video section of the vibrating model building is analyzed as an example. The video is first divided into 150 frames with a resolution of 1920×1080 in .jpg format. To save memory, only a 200×1080 region of interest was cropped. The parsed image frames are then analyzed in the subsequent studies.

Image preprocessing

Figure 6 shows the image preprocessing schematic of the vibration measurement system. Five steps are incorporated in the processing procedure, aiming to detect the four mark points where the accelerometers are attached: RGB to gray-scale conversion, gray scaling, median filtering, binarization, and denoising. Detailed discussions are provided in the following sections.

RGB to gray-scale

The region-of-interest images are RGB color images, which need to be converted into gray-level images by eliminating the hue and saturation information while retaining the luminance, using the following equation (Matlab R2012b):

Y(Gray) = 0.299×R + 0.587×G + 0.114×B.
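The luminance conversion is a per-pixel weighted sum, which can be sketched as:

```python
def rgb_to_gray(r, g, b):
    """Luminance conversion of Eq. (7) (the weighting used by Matlab's
    rgb2gray): Y = 0.299*R + 0.587*G + 0.114*B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green maps to a brighter gray level than pure red or pure blue,
# reflecting the eye's greater sensitivity to green.
gray_red = rgb_to_gray(255, 0, 0)     # ~76
gray_green = rgb_to_gray(0, 255, 0)   # ~150
gray_blue = rgb_to_gray(0, 0, 255)    # ~29
```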

Gray scaling

To accurately detect the red marks in the original image, a linear gray transformation is required to properly enhance the images. Gray scaling [5] maps the input gray-level interval [f_min, f_max] onto the output interval [g_min, g_max] by Eqs. (8) and (9),

g(x) = T × [f(x) − f_min] + g_min,

T = (g_max − g_min) / (f_max − f_min).
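The linear stretch of Eqs. (8) and (9) can be sketched as below; clipping values outside the input interval is an added assumption, not a step stated in the paper:

```python
def gray_stretch(pixels, f_min, f_max, g_min=0, g_max=255):
    """Linear gray-level transformation: map [f_min, f_max] onto
    [g_min, g_max] with gain T = (g_max - g_min)/(f_max - f_min)."""
    T = (g_max - g_min) / (f_max - f_min)
    out = []
    for f in pixels:
        f = min(max(f, f_min), f_max)   # clip values outside the input range
        out.append(g_min + T * (f - f_min))
    return out

# Stretch the interval [50, 150] to the full 8-bit range [0, 255]
stretched = gray_stretch([50, 100, 150], f_min=50, f_max=150)
# stretched is approximately [0.0, 127.5, 255.0]
```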

Median filter

To reduce “salt and pepper” noise, a 5×5 median filter (Matlab R2012b) was applied over the region of interest, which also helps smooth the edges of the target.

Binarization

There are numerous methods for determining the binarization threshold. In this paper, the maximum entropy threshold method [6] is employed. The threshold value T is selected to maximize the combined entropy of the black and white pixels (background and object points). The entropies of the white and black pixels are determined by Eqs. (10) and (11),

H_W(t) = − Σ_{i=t+1}^{i_max} [h(i) / Σ_{j=t+1}^{i_max} h(j)] log [h(i) / Σ_{j=t+1}^{i_max} h(j)],

H_B(t) = − Σ_{i=0}^{t} [h(i) / Σ_{j=0}^{t} h(j)] log [h(i) / Σ_{j=0}^{t} h(j)].
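The threshold selection of Eqs. (10) and (11) can be sketched as below: for each candidate t, the gray-level histogram h is split into a below-threshold and an above-threshold part, and the t maximizing H_B(t) + H_W(t) is chosen. The toy histogram is illustrative only:

```python
import math

def max_entropy_threshold(hist):
    """Return the threshold t maximizing H_B(t) + H_W(t), the entropies
    of the below- and above-threshold parts of the histogram."""
    def entropy(counts):
        total = sum(counts)
        if total == 0:
            return 0.0
        h = 0.0
        for c in counts:
            if c > 0:
                p = c / total
                h -= p * math.log(p)
        return h

    best_t, best_h = 0, float("-inf")
    for t in range(len(hist) - 1):
        h = entropy(hist[: t + 1]) + entropy(hist[t + 1 :])
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal toy histogram: dark background around level 1, bright marks
# around level 6; the selected threshold falls between the two modes.
hist = [5, 40, 5, 0, 0, 3, 30, 4]
t = max_entropy_threshold(hist)
```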

Dilation and erosion

After binarization, there is normally some background noise, and burrs remain along the edges of the objects. Therefore, morphological operations (dilation and erosion, combined as opening and closing operations; Matlab R2012b) are performed to eliminate large background noise, small connected domains, and isolated dots, and to smooth the boundaries of the object regions.

Pixel calibration

The aspect ratio and area of the pixels must be determined so that pixel measurements can be translated into physical measurements by scaling. A circular object of known diameter (1 cm) was chosen for calibration because its size is independent of object orientation. Since the calibration object is contrived, there is no problem obtaining a good-contrast image (see Fig. 7). In this paper, area-based calibration [7] was adopted, which uses the area A of the calibration object in pixels and the second-order central moments u₂₀ and u₀₂. The calibration parameters can then be calculated as:

a_r = √(u₂₀ / u₀₂), a = πD² / (4A), h = √(a × a_r), w = √(a / a_r).
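The calibration computation can be sketched as below; the square-root forms are assumed so that the pixel height h and width w satisfy h·w = a (the pixel area) and h/w = a_r (the aspect ratio), and the numbers in the example are illustrative:

```python
import math

def pixel_calibration(A, u20, u02, D=1.0):
    """Area-based pixel calibration (after Bailey [7]): from the pixel
    count A of a circular target of known diameter D and its second-order
    central moments u20, u02, recover the pixel size and aspect ratio."""
    a_r = math.sqrt(u20 / u02)        # pixel aspect ratio (height/width)
    a = math.pi * D ** 2 / (4 * A)    # physical area of one pixel
    h = math.sqrt(a * a_r)            # pixel height
    w = math.sqrt(a / a_r)            # pixel width
    return h, w

# Square pixels (u20 == u02): a 1 cm circle covering ~7854 pixels implies
# pixels of ~0.01 cm on a side.
h, w = pixel_calibration(A=7854, u20=1.0, u02=1.0, D=1.0)
```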

Algorithms for measuring motion from video signals

Frame difference method

The frame difference method [8] calculates the difference between the frame at time t and the frame at time t−1. In the differential image, the unchanged part is eliminated while the changed part remains; this change is caused by movement. Pixel intensity analysis of the differential image is then used to calculate the displacement of the moving target. This method is very efficient in terms of computational time and provides a simple way to detect motion between two successive image frames.
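The differencing step can be sketched as below on two tiny binary frames; tracking the centroid of the changed pixels is a simplification of the center-pixel tracking used in the experiment:

```python
def frame_difference_centroid(prev_frame, curr_frame, thresh=1):
    """Frame differencing on two gray/binary frames (nested lists):
    keep pixels whose intensity changed by at least `thresh`, and return
    the (row, col) centroid of the changed region, or None if no change."""
    changed = [
        (r, c)
        for r, row in enumerate(curr_frame)
        for c, v in enumerate(row)
        if abs(v - prev_frame[r][c]) >= thresh
    ]
    if not changed:
        return None
    n = len(changed)
    return (sum(r for r, _ in changed) / n, sum(c for _, c in changed) / n)

# A bright 1-pixel target moving one column to the right
f1 = [[0, 1, 0, 0],
      [0, 0, 0, 0]]
f2 = [[0, 0, 1, 0],
      [0, 0, 0, 0]]
pos = frame_difference_centroid(f1, f2)  # (0.0, 1.5)
```

Differencing the centroid positions of the marked discs between successive frames, scaled by the pixel calibration and the frame interval, yields the velocity time history.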

In this experiment, the input images for the frame difference method were the denoised binary images shown in Fig. 6. The center pixel coordinates of each circular disc were calculated, and the x-direction coordinates represent the vibration displacement of the model building. Figure 8 compares the vibration velocity measurements from the accelerometer-based method and the frame difference method. Figure 9 compares the vibration displacement measurements from the accelerometer-based method and the (image-based) frame difference method.

For error analysis, the root mean square error (RMSE) is introduced to evaluate the performance of the proposed vision-based displacement measurement against the double integration of the accelerometer data:

RMSE = √[ Σ_{i=1}^{n} (δ_i − δ_i′)² / n ],

where δ_i is the displacement obtained by double integration of the accelerometer measurement, δ_i′ is the displacement measured by the vision-based system, and n is the number of data points.
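The error metric of Eq. (13) can be computed directly:

```python
def rmse(reference, measured):
    """Root mean square error between the accelerometer-derived and
    vision-based displacement series (Eq. (13))."""
    n = len(reference)
    return (sum((a - b) ** 2 for a, b in zip(reference, measured)) / n) ** 0.5

# Two toy displacement series differing only in the last sample
err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3) ~ 1.155
```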

Figure 8 shows that the vibration velocity measurements of the model building calculated from the sensor data and by the frame difference method match very well. The RMSE value is only 0.00408, which indicates that the frame difference method based on digital image processing provides reasonable accuracy in measuring the vibration velocity of a structure.

Figure 9 shows a small discrepancy between the vibration displacement measurements of the accelerometer-based method and the image-based method. The RMSE is 2.3053. This is mainly due to the following reason: as discussed in the signal processing section, although the integrated displacement data from the sensor measurement went through DC bias filtering and detrending, errors still exist. In this case, the frame difference method is more accurate in determining the structure's vibration displacement than the double integration of acceleration data.

Image-based vibration measurement based on frame differencing is easily performed and computationally efficient. However, it suffers from two major limitations. First, its precision in estimating the velocity field is limited by noise, shutter speed, and image resolution. Second, this method only measures velocity in a given direction (e.g., the horizontal direction); it has difficulty measuring complex movements.

Particle image velocimetry (PIV)

The particle image velocimetry method [9] is a mature technique commonly used in experimental fluid mechanics. It is widely employed to measure 2D flow structures by non-intrusively monitoring instantaneous velocity fields. For such applications, a pulsed laser light sheet illuminates the tracer particles, which are captured by a camera. PIV [10] enables the measurement of the instantaneous in-plane velocity vector field within a planar section. In the PIV algorithm, a pair of images is divided into smaller regions (interrogation windows). The cross-correlation between these image subregions measures the optical flow (displacement or velocity) within the image pair. By progressively decreasing the interrogation window size, the resolution of PIV can be further improved [11].
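The core matching step for a single interrogation window can be sketched as below. A plain sum-of-products correlation over integer shifts is used for illustration; production PIV codes typically use normalized cross-correlation, often computed via FFT, and subpixel peak fitting:

```python
def best_shift(window, search, max_shift=2):
    """Minimal PIV-style template matching: slide an interrogation window
    over a search region from the next frame (the window region padded by
    max_shift on every side) and return the (dy, dx) shift with the
    highest correlation (here, sum of products)."""
    best, best_score = (0, 0), float("-inf")
    rows, cols = len(window), len(window[0])
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for r in range(rows):
                for c in range(cols):
                    rr, cc = r + dy + max_shift, c + dx + max_shift
                    score += window[r][c] * search[rr][cc]
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# A bright particle in a 2x2 window moves down-right by one pixel
window = [[0, 0],
          [0, 1]]
search = [[0] * 6 for _ in range(6)]
search[4][4] = 1
shift = best_shift(window, search)  # (1, 1)
```

Repeating this over a grid of interrogation windows yields the vector field; dividing by the frame interval converts displacement vectors to velocities.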

In this paper, the PIV analyses are conducted using the open-source software ImageJ [12] (http://rsbweb.nih.gov/ij/docs/index.html) to evaluate the velocity field. A PIV plugin with the template matching method is used. To obtain better results, the image pairs are preprocessed with the “Find Edges” and “Binary” functions in ImageJ. The result of the PIV analysis is displayed as a vector plot and saved in plain-text tabular format containing all the analysis results. Figure 10 shows the experimental result of the PIV method.

While PIV analysis with the ImageJ plugin is relatively easy to use, it does not achieve the desired performance in accurately mapping the pattern of motion of the model building. The vector fields generated by the PIV analysis differ from what is expected for a model building swaying back and forth. This is mainly due to the size of the interrogation window and the quality of the input image pairs. Such a limitation is hard to avoid given the lack of prior knowledge about the spatial flow structures. This is a shortcoming of applying the PIV algorithm to image-based vibration measurement.

Optical flow method

Optical flow [13] is a technique to measure motion from images; it was originally developed by the computer vision community. Optical flow computation consists of extracting a dense velocity field from an image sequence, assuming that intensity is conserved during the displacement. Several techniques [14] have been developed for the computation of optical flow. In a survey and comparative performance study, Barron et al. [15] classify optical flow methods into four categories: differential based, correlation based, energy based, and phase based. Obtaining the “optical flow” [16] consists of extracting a dense representation of the motion field (i.e., one vector per pixel).

This paper uses the formulation introduced by Horn and Schunck [17], which estimates a vector-valued motion function by minimizing an objective functional composed of two terms. The first is a data term relating the unknown motion function to the image data; it generally relies on the assumption that brightness is conserved, i.e., similar to correlation techniques, that a given point keeps its intensity along its trajectory. The second term promotes global smoothness of the motion field over the image. It must be pointed out that these techniques were devised for the analysis of quasi-rigid motions with stable salient features. The smoothness restriction supplies the second term, and together the two restrictions make up the optical flow equations. Through these two restrictions and iterative calculation, the velocity of each pixel can be computed.

The image analysis using optical flow includes the following procedures. First, the system reads two consecutive image frames as input. The preprocessing step includes determining the image size and adjusting the image border. Then, initial values such as the initial 2D velocity vector and the weighting factor are set. By applying the relaxation iterative method, the optical flow velocity vector is calculated until the convergence conditions are satisfied. An example velocity field from the optical flow method is shown in Fig. 11.
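The relaxation iteration described above can be sketched as a minimal Horn–Schunck solver. The forward-difference gradients, clamped borders, and 4-neighbor flow averaging are simplifications of the stencils in the original paper, chosen to keep the sketch short:

```python
def horn_schunck(im1, im2, alpha=1.0, n_iter=50):
    """Minimal Horn-Schunck optical flow on two gray images (nested
    lists): iteratively update (u, v) so that brightness is conserved
    (data term) while the flow stays smooth (weight alpha)."""
    H, W = len(im1), len(im1[0])
    u = [[0.0] * W for _ in range(H)]
    v = [[0.0] * W for _ in range(H)]

    def px(im, r, c):                 # clamped pixel access at the border
        return im[min(max(r, 0), H - 1)][min(max(c, 0), W - 1)]

    # Forward-difference spatial gradients and temporal derivative
    Ix = [[px(im1, r, c + 1) - px(im1, r, c) for c in range(W)] for r in range(H)]
    Iy = [[px(im1, r + 1, c) - px(im1, r, c) for c in range(W)] for r in range(H)]
    It = [[im2[r][c] - im1[r][c] for c in range(W)] for r in range(H)]

    for _ in range(n_iter):
        for r in range(H):
            for c in range(W):
                # Average flow of the 4-neighborhood (clamped at borders)
                ub = (px(u, r - 1, c) + px(u, r + 1, c)
                      + px(u, r, c - 1) + px(u, r, c + 1)) / 4.0
                vb = (px(v, r - 1, c) + px(v, r + 1, c)
                      + px(v, r, c - 1) + px(v, r, c + 1)) / 4.0
                t = (Ix[r][c] * ub + Iy[r][c] * vb + It[r][c]) \
                    / (alpha ** 2 + Ix[r][c] ** 2 + Iy[r][c] ** 2)
                u[r][c] = ub - Ix[r][c] * t
                v[r][c] = vb - Iy[r][c] * t
    return u, v

# Example: a horizontal intensity ramp shifted one pixel to the right
im1 = [[float(c) for c in range(5)] for _ in range(5)]
im2 = [[float(c) - 1.0 for c in range(5)] for _ in range(5)]
u, v = horn_schunck(im1, im2, alpha=1.0, n_iter=200)
# u converges toward +1 (one pixel per frame rightward); v stays ~0
```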

As can be seen in Fig. 11, the complex motion of the model building is captured by the optical flow method. The length of each arrow represents the magnitude of the displacement. Compared to the results of PIV and the frame difference method, the optical flow method gives a much better result in capturing the global field of motion. The advantages of optical flow include: 1) unlike the frame difference method, the flow vector from the optical flow method is a global measurement rather than a local measurement, which means the motion can be estimated without relying on local details; 2) the robust and efficient algorithm allows the optical flow method to reach much higher accuracy than the other methods; 3) the method can identify complex patterns of motion.

Discussion

Table 1 provides a schematic comparison of the three image processing methods in terms of measurement accuracy, robustness to noise, computational speed, and complex motion measurement. The frame difference method can provide displacement measurements with relatively high accuracy, as it continuously tracks pre-defined high-contrast targets attached to the structure surface. These targets function as “virtual sensors,” and a significant number of them are needed to cover a broad area of the structure to obtain global information such as structural damage localization and mode shapes. The requirement of installing targets on the structure to some extent compromises the non-intrusive advantage of vision-based methods over conventional contact-type SHM sensors. Installing the targets can be costly and difficult, especially for large-scale civil infrastructure such as high-rise buildings and long-span bridges. In comparison, the PIV and optical flow methods are capable of measuring the global deformation of the structure without installing any targets. The experimental results in this study show that the optical flow method reached higher accuracy than PIV. This is mainly because the PIV method utilizes a sliding-window-based correlation algorithm that is highly dependent on the quality of the input images.

The frame difference method uses a relatively simple and fast algorithm that subtracts successive image pairs to compute displacement. It is therefore very sensitive to environmental noise such as changes in light intensity, temperature, and wind speed; compared to the PIV and optical flow methods, its performance degrades more when analyzing images with significant noise. On the other hand, the frame difference method features higher computational speed, although this advantage is dwindling: with the development of high-performance computers, real-time displacement measurement by PIV or the optical flow method has become feasible. Another limitation is that the frame difference method can only acquire one-dimensional displacement measurements in the horizontal or vertical direction. For both the PIV and optical flow methods, a single camera is capable of collecting two-dimensional displacement fields, and more complex motion can be tracked in three dimensions with dual cameras by using robust and sophisticated algorithms.

Conclusion

Video-based monitoring systems potentially provide a reliable and economical solution for SHM applications, particularly for situations where access can be challenging (e.g., major bridges crossing waterways). The performance of the image processing algorithm is the key component in the successful application of such remote SHM systems. This paper compared the performance of three common types of image processing methods that obtain motion from a sequence of video images, i.e., the frame difference method, particle image velocimetry (PIV), and the optical flow method. A testbed was set up on a model building, where the video-based system achieved accuracy comparable to a traditional accelerometer in measuring the vibrations. Comparison of the three image processing methods showed that the optical flow method provides the best performance in capturing the global motion of the model building with competitive accuracy. Major advantages of such technologies for non-contact structural vibration measurement include low cost, ease of operation, and the flexibility to extract structural displacement at any point from a single measurement. As with any technology, there are limitations of image-based monitoring systems rooted in the limitations of digital image processing techniques. Practitioners face a tradeoff between system accuracy and the capability of global displacement field measurement. Future work is needed to develop an efficient way to place targets on the structure and a more robust and accurate image processing algorithm. Another route of advancement is to build a vision-based system with no artificial targets, which can continuously track natural targets on civil infrastructure such as bolts and gussets. All of these demonstrate the challenges and opportunities in further advancing vision-based tools for SHM.

References

[1] LeBlanc B, Niezrecki C, Avitabile P. Structural health monitoring of helicopter hard landing using 3D digital image correlation. In: Health Monitoring of Structural and Biological Systems 2010, Pts 1 and 2, 2010, 7650: 89–98

[2] Bragge T, Hakkarainen M, Liikavainio T, Arokoski J, Karjalainen P. Calibration of triaxial accelerometer by determining sensitivity matrix and offsets simultaneously. In: Proceedings of the 1st Joint ESMAC-GCMAS Meeting. Amsterdam, the Netherlands, 2006

[3] Arraigada M. Calculation of displacements of measured accelerations, analysis of two accelerometers and application in road engineering. In: Proceedings of the 6th Swiss Transport Research Conference. Monte Verità, Ascona, 2006

[4] Hamid M A, Abdullah-Al-Wadud M, Alam M M. A reliable structural health monitoring protocol using wireless sensor networks. In: Proceedings of the 14th International Conference on Computer and Information Technology, 2011, 601–606

[5] Kapur J N, Sahoo P K, Wong A K C. A new method for gray-level picture thresholding using the entropy of the histogram. Computer Vision, Graphics, and Image Processing, 1985, 29(3): 273–285

[6] Kumar S. 2D maximum entropy method for image thresholding converge with differential evolution. Advances in Mechanical Engineering and its Applications, 2012, 2(3): 289–292

[7] Bailey D G. Pixel calibration techniques. In: Proceedings of the New Zealand Image and Vision Computing Workshop, 1995

[8] Wereley S T, Gui L. A correlation-based central difference image correction (CDIC) method and application in a four-roll mill flow PIV measurement. Experiments in Fluids, 2003, 34(1): 42–51

[9] Willert C E, Gharib M. Digital particle image velocimetry. Experiments in Fluids, 1991, 10(4): 181–193

[10] Quénot G M, Pakleza J, Kowalewski T A. Particle image velocimetry with optical flow. Experiments in Fluids, 1998, 25(3): 177–189

[11] Moodley K, Murrell H. A colour-map plugin for the open source, java based, image processing package, ImageJ. Computers & Geosciences, 2004, 30(6): 609–618

[12] Igathinathane C, Pordesimo L O, Columbus E P, Batchelor W D, Methuku S R. Shape identification and particle size distribution from basic shape parameters using ImageJ. Computers and Electronics in Agriculture, 2008, 63(2): 168–182

[13] Ruhnau P, Kohlberger T, Schnörr C, Nobach H. Variational optical flow estimation for particle image velocimetry. Experiments in Fluids, 2005, 38(1): 21–32

[14] Angelini E D, Gerard O. Review of myocardial motion estimation methods from optical flow tracking on ultrasound data. In: Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2006: 6337–6340

[15] Barron J L, Fleet D J, Beauchemin S S. Performance of optical flow techniques. International Journal of Computer Vision, 1994, 12(1): 43–77

[16] Rocha F R P, Raimundo I M Jr, Teixeira L S G. Direct solid-phase optical measurements in flow systems: a review. Analytical Letters, 2011, 44(1): 528–559

[17] Horn B K P, Schunck B G. Determining optical flow. Artificial Intelligence, 1981, 17(1–3): 185–203

RIGHTS & PERMISSIONS

Higher Education Press and Springer-Verlag Berlin Heidelberg
