Use of artificial neural networks to identify and analyze polymerized actin-based cytoskeletal structures in 3D confocal images

Doyoung Park

Quant. Biol. ›› 2023, Vol. 11 ›› Issue (3) : 306−319. DOI: 10.15302/J-QB-022-0325

RESEARCH ARTICLE


Abstract

Background: Living cells need to undergo subtle shape adaptations in response to the topography of their substrates. These shape changes are mainly determined by reorganization of their internal cytoskeleton, with a major contribution from filamentous (F) actin. Bundles of F-actin play a major role in determining cell shape and their interaction with substrates, either as “stress fibers,” or as our newly discovered “Concave Actin Bundles” (CABs), which mainly occur while endothelial cells wrap micro-fibers in culture.

Methods: To better understand the morphology and functions of these CABs, it is necessary to recognize and analyze as many of them as possible in complex cellular ensembles, which is a demanding and time-consuming task. In this study, we present a novel algorithm to automatically recognize CABs without further human intervention. We developed and employed a multilayer perceptron artificial neural network (“the recognizer”), which was trained to identify CABs.

Results: The recognizer demonstrated high overall recognition rate and reliability in both randomized training, and in subsequent testing experiments.

Conclusion: The recognizer would be an effective replacement for validation by visual detection, which is both tedious and inherently prone to error.

Graphical abstract

Keywords

Concave Actin Bundles / artificial neural network recognizer / planar actin distribution / 3D probability density estimation / cytoskeletal structures

Cite this article

Doyoung Park. Use of artificial neural networks to identify and analyze polymerized actin-based cytoskeletal structures in 3D confocal images. Quant. Biol., 2023, 11(3): 306-319 DOI:10.15302/J-QB-022-0325


1 INTRODUCTION

Tissue engineering intends to develop biological substitutes capable of restoring, replacing, or regenerating defective tissues. The essential components of engineered tissues (the so-called “tissue engineering triad”), consist of cells, scaffolds, and growth-stimulating signals [1,2]. Fibrillar polymeric scaffolds made of micron-sized polymeric fibers have received widespread attention recently as versatile extracellular matrix (ECM)-like materials. They are considered as cell carriers at the site of implantation, or they can be used to assist the reconstruction of tissues and organs, by providing structural support for cell attachment and subsequent tissue development, etc. Yet the main role for these fibrillar materials in tissue engineering is to help control cell positioning, morphology, and function through specific cell-matrix interactions.

Understanding the interaction of endothelial cells (which form the inner layer of capillaries and larger blood vessels) and of their progenitors with the fibers is of particular importance for controlling their attachment, detachment, or stability at the interface with individual scaffold fibers. Cells respond in versatile ways to their support, according to the properties of the objects they interact with (for example, shape, consistency, and size). One type of object is characterized by a cylindrical shape, solid consistency, and a size comparable to that of a cell. When it interacts with such an object, the cell may wrap (stably or not) around it, an outcome that depends on changes happening internally within the cytoskeleton. Among the cytoskeletal components responsible for these processes are the micro-filaments, mainly composed of polymerized actin. Actin is a globular protein that forms long intracellular filaments through polymerization. Filamentous actin (F-actin) determines a cell's shape and, in cooperation with many other macromolecules, also determines its attachment to the substrate and its migration.

There are several types of F-actin-based cytoskeletal structures: stress fibers (ventral and dorsal), actin ruffles (located at the cell's margin), actin filaments reinforcing the filopodia, etc. Within a cell attached to a cylindrical substrate, and thus taking a cylindrical shape itself, an F-actin bundle may orient either longitudinally or transversally to the fiber. We have recently described a new type of F-actin-containing assembly, occurring in endothelial cells that wrap polymeric micron-sized fibers, and possibly in vivo. Depending on the duration of cell-fiber interaction (i.e., time in culture) and on the cell's differentiation status, these structures can be either full rings capable of literally squeezing the fiber (referred to as "actin grips") or, in earlier phases of their development, arcs (as shown in Fig.1). For this reason, we named them "Concave Actin Bundles" (CABs) [3]. To summarize, we defined CABs as F-actin bundles that grip a fiber transversally in order to attach a cell to the fiber [3,4].

Previously, we developed an algorithm for detecting and quantifying CABs from 3D multi-channel fluorescence image stacks [3]. Our approach was to reconstruct the fiber network by segmentation, and then to quantify actin distribution over a specific segment of a fiber [3]. To this end, we first delineated fibers from the 3D fluorescence confocal images based on an adaptive min-cut-max-flow algorithm [5]. Then, after the algorithm identifies the fiber extremities, it tracks the fibers through a template-matching approach [6]. Finally, it locates CABs by considering their radial distribution.

Subsequently, these steps of the algorithm were incorporated into a visualization tool, developed both to characterize the distribution of F-actin bundles and to verify the correct detection of CABs through direct visualization, a notoriously difficult operation. This is because not all candidate voxel aggregates fulfil the dimensional (i.e., they may not be large enough) and morphological (i.e., not dense enough) constraints required to qualify as a target visual object (i.e., a CAB). This is further constrained by the actual shape and location within cells of the putative actin bundles. Thus, our visualization tool picks a specific segment of fiber covered by a candidate voxel aggregate among which CABs may exist, and then examines its distribution over that fiber segment. However, the pre-detected CAB candidates still need a further visual validation step by a trained human subject. In other words, human intervention (i.e., validation) was still needed to distinguish a "true" CAB from other candidate images. Given the limited availability of human competence in this regard, combined with the large size of the scaffold samples customarily analysed and the inherently slow speed of analysis, a more efficient method for CAB detection is obviously needed.

In the current study, we present a new algorithm that automatically identifies a CAB from among candidates detected through the previous approach, without human assistance. For this purpose, we propose to employ an artificial neural network (ANN) which is trained to recognize CABs at a voxel level. In brief, we created a specific feature vector as the input for the "recognizer." In order to decide whether a distribution of voxels, conventionally named "actin," on the green channel is a CAB or not, we must look into the spatial distribution of those voxels. The term green is used because F-actin is in practice detected by the binding of a green fluorescent label, e.g., FITC-Phalloidin. Because the cylindrical geometry makes it difficult to get insight into the distribution at a glance, we transformed the cylindrical actin (i.e., aggregated green voxel) distribution into a planarized distribution. However, it is still hard to characterize the planarized actin distribution, since F-actin filaments may take on intrinsically random shapes even in the planar distribution.

Therefore, we further developed a protocol to extract useful information from the structure of the analysed actin distribution by considering information derived from nearby actin. This approach computes a Gaussian at every actin voxel and then sums the Gaussians stored at each voxel. It also supports rotation invariance, owing to the rotational symmetry of a cylinder. The feature vector for the recognizer is constructed from the Gaussian-summed image. At this point, we employed two ANNs, trained at a voxel level on the image transformed from the Gaussian-summed image: one is trained with voxels carrying positive signals, and the other with voxels carrying negative signals. For each voxel of a candidate CAB's distribution, its feature vector is tested on only one of the two ANNs, according to its signal. If the result exceeds a predetermined threshold, the voxel is considered to belong to a CAB distribution image; otherwise, it is considered to belong to a non-CAB distribution image. Thus, for a candidate CAB's distribution image, the recognizer classifies some voxels of the image as belonging to a CAB and the rest as belonging to a non-CAB. Finally, if the number of voxels classified as CAB exceeds the number classified as non-CAB, the recognizer decides that the candidate CAB is a "true" CAB.
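The decision rule described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the feature format, and the stub networks are assumptions.

```python
# Sketch of the voxel-voting CAB decision (names are illustrative).
# Two trained networks score each voxel's feature vector; a majority
# vote over all voxels decides CAB vs. non-CAB.
def classify_candidate(voxels, ann_pos, ann_neg, threshold=0.5):
    """voxels: list of (signal, feature_vector); ann_*: callables -> score."""
    cab_votes = 0
    non_cab_votes = 0
    for signal, feature in voxels:
        ann = ann_pos if signal > 0 else ann_neg  # route by signal sign
        score = ann(feature)
        if score > threshold:
            cab_votes += 1       # voxel judged to belong to a CAB
        else:
            non_cab_votes += 1   # voxel judged to belong to a non-CAB
    return cab_votes > non_cab_votes  # majority vote over all voxels
```

With the threshold of 0.5 used in the experiments, a candidate is accepted as a CAB only when more than half of its voxels score above the threshold on their respective network.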

2 RESULTS

In order to obtain data for this research, human endothelial colony forming cells (ECFC) were cultured for up to 10 days in poly-ε-caprolactone (PCL) scaffolds prepared by electrospinning. Fixed cells were stained with fluorescently-labeled phalloidin and with anti-vinculin or anti-glutathionylated (GSH)-actin antibodies and imaged using confocal microscopy. In selected experiments, the cells were treated with the pro-oxidant menadione. Cell density, nuclear morphology, and cell-averaged F-actin and antigen distribution were visualized by confocal microscopy and quantified on digitized three-dimensional stacks using custom software.

Candidate CABs are detected from a 3D confocal microscope image. The voxel size of the acquired image volume is about 1 μm × 1 μm × 1.45 μm. Two examples of the 3D confocal microscope image and examples of candidate CABs detected using our previous algorithm [3] are shown in Fig.2. The two volumes visualized in Fig.2 are of sizes 512 × 512 × 200 and 512 × 512 × 322, respectively.

In this study, we used 8 image volumes and extracted 322 candidate CABs using our previous candidate CAB detection method. Visual detection was used to pick out real CABs from these candidates for training and testing the two ANNs inside the recognizer. Trained human subjects decided, through visual recognition, that 159 candidate CABs are real CABs and the others (that is, 163 candidate CABs) are not. Accordingly, 159 CAB-vectored images and 163 non-CAB-vectored images were created.

The ANN used in the present study is a fully connected back-propagation neural network. It has two bias neurons: one in the input layer and the other in the hidden layer. Each bias neuron is connected to all the neurons in the next layer and emits the value "1" in its activation status. The ANN has four layers: one input layer, two hidden layers, and one output layer, with 21 neurons in the 1st hidden layer and 11 neurons in the 2nd hidden layer. Its learning rate is 0.7 and its momentum is 0.0. Its weight multiplier is 0.4, which was used to make the training somewhat less aggressive. The initial weights of the ANN were randomly chosen between –0.1 and 1.0. To measure the closeness between an ANN's output and the desired output in the training stage, we used the mean squared error. We set the training process of the two ANNs to terminate if the error fell below 0.005; otherwise, we permitted up to 500 iterations. Training ANN 1 took approximately 2 minutes and training ANN 2 approximately 4–5 minutes, so the total training time of the recognizer was approximately 6–7 minutes.
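A minimal NumPy sketch of a network with the stated hyper-parameters (two sigmoid hidden layers of 21 and 11 neurons, bias neurons emitting 1, learning rate 0.7, momentum 0.0, MSE stopping below 0.005 or 500 epochs) is given below. The class name and data handling are illustrative, and the paper's weight multiplier of 0.4 is omitted; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Recognizer:
    def __init__(self, n_inputs, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        sizes = [n_inputs, 21, 11, 1]          # input, two hidden, output
        # one extra row per weight matrix for the bias neuron (emits 1);
        # initial weights drawn from (-0.1, 1.0) as stated in the text
        self.W = [rng.uniform(-0.1, 1.0, (a + 1, b))
                  for a, b in zip(sizes[:-1], sizes[1:])]

    def _forward(self, x):
        acts = [np.append(x, 1.0)]             # input plus bias activation
        for i, W in enumerate(self.W):
            a = sigmoid(acts[-1] @ W)
            if i < len(self.W) - 1:
                a = np.append(a, 1.0)          # bias for the next layer
            acts.append(a)
        return acts

    def predict(self, x):
        return self._forward(x)[-1][0]

    def train(self, X, y, lr=0.7, max_epochs=500, target_mse=0.005):
        for _ in range(max_epochs):
            sq_err = 0.0
            for x, t in zip(X, y):
                acts = self._forward(x)
                out = acts[-1]
                sq_err += float((out[0] - t) ** 2)
                delta = (out - t) * out * (1 - out)   # output-layer delta
                for i in range(len(self.W) - 1, -1, -1):
                    grad = np.outer(acts[i], delta)
                    if i > 0:                  # back-propagate before the
                        h = acts[i][:-1]       # update, dropping the bias
                        delta = (self.W[i][:-1] @ delta) * h * (1 - h)
                    self.W[i] -= lr * grad
            if sq_err / len(X) < target_mse:   # MSE stopping criterion
                break
```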

To test the robustness of the methods suggested in this study, we randomly chose 60% of the CAB-vectored images and 60% of the non-CAB-vectored images as training images. The remaining 40% of each group were used as testing images. For each training image, whether CAB-vectored or non-CAB-vectored, we chose 80% of the voxels in the first image slice at random to train ANN 1 or ANN 2. If a voxel had a positive signal, its feature vector was used to train ANN 1; if it had a negative signal, its feature vector was used to train ANN 2.
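The sampling scheme above can be sketched as follows; the function names and list-based data representation are illustrative assumptions.

```python
import random

# Sketch of the 60/40 image split and 80% first-slice voxel sampling.
def split_images(cab_images, non_cab_images, seed=0):
    rng = random.Random(seed)
    def split(images, frac=0.6):
        images = images[:]
        rng.shuffle(images)
        cut = int(len(images) * frac)
        return images[:cut], images[cut:]   # (training, testing)
    cab_train, cab_test = split(cab_images)
    non_train, non_test = split(non_cab_images)
    return cab_train + non_train, cab_test + non_test

def sample_training_voxels(first_slice_voxels, frac=0.8, seed=0):
    rng = random.Random(seed)
    k = int(len(first_slice_voxels) * frac)
    return rng.sample(first_slice_voxels, k)  # 80% of first-slice voxels
```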

Statistics for 100 testing experiments, using the feature-vector form shown in the last figure of the Method section, are provided in Fig.3. We set the threshold th to 0.5 in those experiments. We compared the results computed from the alternative feature-vector forms in Fig.4 to the result in Fig.3. All experiments, using the feature-vector forms in the last figure of the Method section and in Fig.4, showed around 83% overall accuracy. The overall accuracy of the form in the last figure of the Method section is slightly better than that of the forms in Fig.4.

We examined the recognition rates of each candidate CAB by the recognizer. First, the recognizer determined whether a candidate CAB is indeed a CAB, or if it is a non-CAB. Second, the result was compared to the recognition result by the trained human subjects. With each candidate CAB, we repeated the experiment 100 times. Finally, the recognition rate was calculated by the accuracy of classification during the repeated experiments. The recognition rates are summarized in Fig.3. 159 candidate CABs were decided as real CABs by trained human subjects and 163 candidate CABs were determined as non-CABs by the same experts. Out of 159 candidate CABs, 114 candidate CABs were correctly recognized as CABs with 100% accuracy (i.e., when tested 100 times, they were correctly recognized as a CAB each time they were tested). Out of 163 non-CAB candidates, 109 non-CAB candidates were accurately classified as non-CAB with 100% accuracy (i.e., when tested 100 times, they were recognized as a non-CAB each time they were tested). Similarly, 9 candidate CABs among the 159 candidate CABs were recognized as true CABs with 90%−99.99% accuracy (i.e., 90−99 times out of 100 tests, they were accurately recognized as CABs). 21 non-CAB candidates among the 163 non-CAB candidates were correctly recognized as non-CABs with 90%−99.99% accuracy (i.e., 90−99 times out of 100 tests, they were accurately recognized as non-CABs).

We also found candidate CABs that are real CABs, but with a low recognition rate of being classified as a CAB by the recognizer (Fig.5).

We chose candidate CABs that seemed distinctly different from each other for the purpose of illustration. We found that there are two cases. Fig.5 is a case where real CABs are mostly misclassified into the group of non-CABs. On the other hand, candidate CABs in Fig.5, which are not clearly identifiable as a CAB or non-CAB, were classified as CABs by visual detection but recognized as non-CABs by the recognizer. They are too ambiguous for even biologists and/or bio-physicists to decide clearly whether they are real CABs or not.

On the contrary, there are candidate CABs that are non-CABs, but with low recognition rates of being classified as a non-CAB by the recognizer (Fig.5). There are two cases as well. Fig.5 and Fig.5, except candidate CABs in the pink rectangle, demonstrated a case where they are real non-CABs but were mostly misclassified into the group of CABs. As in Fig.5, candidate CABs on the pink rectangle in Fig.5 are classified as non-CABs by visual detection but recognized as CABs by the recognizer.

The candidate CABs in the pink rectangles in Fig.5 imply that validation by visual detection inherently carries a risk of error. We re-experimented after moving the misclassified candidate CABs in Fig.5 into the group of non-CABs, and moving the misclassified candidate CABs in the pink rectangle in Fig.5 into the group of CABs. Fig.3 summarizes the results, which improved in comparison to those in Fig.3. The improved results indicate that our method increases recognition accuracy because it does not rely on validation by visual detection, which inherently carries a risk of error.

Further, we found that loss of actin information in the computation of the planar actin distribution was the main factor that decreased recognition rates of candidate CABs in both Fig.5 and outside the pink rectangle in Fig.5 and 5D. In this study we used a uniform cylinder radius for the computation of a candidate CAB's planar actin distribution over a fiber segment. However, since the surface of a fiber segment is not as even as that of a cylinder, we may miss some actin information. Future studies therefore need to develop a method of computing the planar actin distribution that takes into consideration the unevenness of the fiber surface, which would improve recognition accuracy.

3 DISCUSSION

An in-depth understanding of actin-based structures is crucial for solving practical problems in tissue engineering constructs that require the interaction of cells with materials. Further, pharmacological modulators of grip strength might thus become useful in regulating cell attachment to and detachment from fibrillar scaffolds, leading to versatile applications in tissue engineering. It is therefore essential to characterize and quantify CABs in cellular ensembles for advanced tissue engineering applications.

In research on curvilinear structures in F-actin imaging, it is appropriate to classify previous studies according to the architectural parameters they address: local orientation; filament length; and curvature distribution and local filament organization.

Regarding the "local orientation," Petroll et al. [7] analysed the spatial and temporal organization of stress fibers during the process of corneal wound healing. Using Fourier transform analysis, an orientation index (OI) was calculated to quantify the global fiber orientation. Thomason et al. [8] applied the fractal analysis technique to calculate a fractal dimension for a microtubule. Weichsel et al. [9] utilized image coherency as a possible quantitative measure of alterations in the actin cytoskeleton.

Regarding the “filament length”, Lichtenstein et al. [10] proposed enhancement of rod-like patterns in immunofluorescent images by correlating a series of rods at uniformly spaced rotation angles using fast Fourier transform. The enhanced image is the maximum of the correlation differences among all rod angles. Shariff et al. [11] described a parametric conditional model of microtubule distribution that generates a microtubule network in intact cells using a persistent random walk approach.

Regarding curvature distribution and local filament organization, Fleischer et al. [12] introduced a new method of fitting multi-layer geometrical random tessellation models to actin network structures to be analysed.

Li et al. [13–16] have conducted several studies on detecting actin filaments. In [13], the authors used dynamic programming to estimate actin filament body points from an image sequence in which the initial locations are known. In [14], they used spatiotemporal active-surface and active-contour models to find the extreme locations of the filaments in 2D time-lapse image sequences. In [15,16], Li et al. tracked actin filaments by jointly using a particle-filtering-based algorithm and a stretching open active contours based algorithm. They used a TIRFM image sequence from the research of Fujiwara et al. [17].

In addition, there are many 2D algorithms [18–20] for polymer network structure extraction. Researchers correlated an original image with a set of filters designed to detect line features. Although these could be generalized to 3D using a steerable filter, more sophisticated methods need to be developed for 3D network extraction [21]. To our knowledge, there is only one existing algorithm, from Wu et al. [22], that identifies a 3D collagen network. They developed a method to quantify collagen orientation, length, and diameter by tracing the skeleton of collagen fibers. However, Stein et al. [23] reported that Wu et al.'s algorithm fails to detect many fibers oriented perpendicular to the focal plane. Hence, they developed an advanced new algorithm to extract a 3D network architecture based on the algorithm of Wu et al.

Regarding "Concave Actin Bundles," the newly discovered F-actin bundles mediating cell-matrix interactions, no attempt has been made to recognize them except in our previous research [3,4]. The present study improved our previous method with an automatic recognition algorithm. To our knowledge, this is the first study of automatic recognition of this new biological finding, the CABs.

The algorithm which recognizes legitimate CABs among candidate CABs was developed here based on our previous research for detecting candidate CABs from a 3D fluorescence image stack (Fig.6). As briefly introduced before, the method consists of three steps:

(1) The first step is to segment fibers by defining an energy based on the tensor voting framework [24,25] and then by minimizing the energy through an adaptive min-cut-max-flow algorithm. The segmented fibers are then skeletonized so that the extremities of the skeleton of each fiber are used as seed points for tracking the fibers.

(2) The second step is to track the fibers by matching templates to the fibers. When a template matches with a segment of a fiber at a given location, the template parameters are computed by minimizing the difference between the contrast of the fiber and that of the template.

(3) The third step is to detect candidate CABs, which surround the scaffold fibers transversally, by examining a radial actin distribution around the nearby templates in focus. Further details of these steps are described in [3].

The detection of CABs with the previous algorithm was not precise enough, in that it sometimes recognized a non-CAB structure as a CAB structure. Thus, our previous research required a visualization tool for biologists and/or bio-physicists to validate whether detected CABs are actual CABs. Further, the previous research highlighted the need to connect the actin distributions of several neighboring templates, and to extract and connect the parts of a CAB that exist across those neighboring templates.

In order to address the aforementioned limitations of the previous study, we proposed an ANN as the kernel function of a recognizer that detects real CABs among candidate CABs without human intervention. The recognizer was trained at a voxel level on the 3D image created via 3-dimensional probability density estimation. It is operated by two ANNs: the first is trained and tested with feature vectors of voxels having a positive signal, and the second with feature vectors of voxels having a negative signal. The feature vector was designed both to cover a wide area of feature space around a given voxel and to have a small number of elements. The 3D probability density estimation on a planar actin distribution is crucial for rotation invariance, owing to the rotational symmetry of a cylindrical fiber.

Further, this process is useful for computing the angle of a CAB (although a full discussion is beyond the scope of this article). A brief description of the angle computation follows. Because a 3D PDF-valued image has a certain number of local maxima, we can extract those local maxima as nodes in a graph and connect them as edges. The graph created in this way represents the structure of a CAB. We compute a minimum spanning tree on the graph and then model the shortest path between every two connected nodes. For each shortest path, we map it onto a cylinder and then compute the angle between its two end nodes as seen from the medial axis of the cylinder. The largest such angle is the angle of the given CAB.
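A hedged sketch of this angle computation follows, simplified to measure the azimuthal angle subtended by each pair of MST-connected maxima; the shortest-path mapping step described above is omitted, and all names are illustrative.

```python
import math

# Local maxima of the 3D PDF image become graph nodes; Prim's algorithm
# builds a minimum spanning tree over the complete distance-weighted
# graph, and the largest azimuthal angle between tree-connected nodes
# (as seen from the cylinder's medial axis) is reported.
def cab_angle(maxima):
    """maxima: list of (theta, l) positions on the cylinder surface."""
    in_tree = {0}
    edges = []
    while len(in_tree) < len(maxima):
        i, j = min(((i, j) for i in in_tree
                    for j in range(len(maxima)) if j not in in_tree),
                   key=lambda e: math.dist(maxima[e[0]], maxima[e[1]]))
        edges.append((i, j))
        in_tree.add(j)
    # angle seen from the medial axis = difference of azimuthal coordinates,
    # wrapped to the shorter arc
    best = 0.0
    for i, j in edges:
        dtheta = abs(maxima[i][0] - maxima[j][0])
        best = max(best, min(dtheta, 2 * math.pi - dtheta))
    return best
```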

The performance of the recognizer shows notable recognition accuracy and addresses a shortcoming of previous methods: the need for human visual validation to recognize CABs among candidate CABs. Further, it helps to find and reduce errors resulting from human visual validation, which in turn provides biologists and biophysicists a more comprehensive understanding of CABs.

4 CONCLUSION

CABs are cytoskeletal structures that function in cell-matrix engagement with scaffolds prepared for tissue engineering. Whereas a typical stress fiber anchorage is oriented parallel to the length of the scaffold fiber, CABs orient themselves transversally to the length of the fiber, making them appear as if they were gripping it. Based on this observation, from a computational biology viewpoint we may define a CAB as a thin, ring-like structure attached to the surface of a cylindrical fiber. Mechanistic understanding of actin-based structures is crucial for solving practical problems in tissue engineering constructs that require the interaction of cells with materials. No attempt has been made to capture CABs except in our previous research [3]. The present study further improved on that work with the development of the first algorithm that recognizes CABs automatically, based on an ANN.

Another major improvement of the present study is enhanced accuracy. With our previous methods, a non-CAB structure was sometimes incorrectly identified as a CAB-structure. Similarly, candidate CABs that need further validation via human intervention, such as the ring-structure of thick actin distribution or the sheer ring structure not attached to a fiber, were sometimes incorrectly classified as CABs. The recognizer developed in the present study solves this problem as it distinguishes such candidate CABs from CABs.

Future studies may need to take into consideration the unevenness on the surface of the fiber to improve the performance of the recognizer. We plan to develop an improved recognition algorithm by utilizing a cylinder whose shape is tailored to the surface of a fiber with uneven surface shapes.

5 METHOD

The architecture of the recognizer and the input feature vector of the architecture are critical design problems. Thus, we will first describe the foundation for constructing the input feature vector and then that of the architecture.

The first step for the input feature vector is to transform the actin distribution over a cylindrical surface of a fiber into a planar distribution. Next, we estimate the probability density on every actin location over the planar distribution. Finally, we describe the formation of the input feature vector and the architecture of the recognizer.

5.1 Planarization of the actin distribution

Actin filaments and their aggregates exist over the cylindrical surface of a scaffold fiber. When these "actins" are distributed densely, as if gripping a fiber transversally, such an actin structure can be a candidate CAB. Because a candidate CAB is located by observing its radial actin distribution around the nearby fiber, it is not easy to perceive the candidate's overall actin distribution. Thus, we need a way to gain insight into the overall actin distribution of a candidate CAB. For this purpose, we first reformed a candidate CAB by aligning its fiber's templates in a straight line and then transformed the actin distribution over the cylindrical surface of the fiber segment into a distribution over a plane.

Fig.7 summarizes the method to planarize the actin distribution over a cylindrical fiber segment. We shoot rays from the medial axis of the fiber segment toward the surface of the fiber, perpendicularly to the fiber orientation. If the length of a fiber segment is l_ny as in Fig.7, we may shoot rays from positions l_1, l_2, ..., l_ny, obtained by dividing the length l_ny into n_y equal parts, toward the surface of the fiber over all perpendicular directions as in Fig.7, where we shoot rays in directions θ_1, θ_2, ..., θ_nx. Next, we choose slices from the surface (r_1) of the fiber up to a certain radius (r_nz). The volume of the planarized actin distribution image thus becomes n_x × n_y × n_z. The actin intensity at (θ_i, l_j, r_k) from a candidate CAB as in Fig.7, where 1 ≤ i ≤ n_x, 1 ≤ j ≤ n_y, and 1 ≤ k ≤ n_z, is placed at position (i, j, k) of the planarized actin distribution image. Thus, each slice can be flattened as in Fig.7, allowing creation of a 3D image stack (where each slice becomes one image in the stack). Fig.7 illustrates the creation of the 3D flattened actin distribution image stack.
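The ray-shooting transform above can be sketched as follows, under assumed conventions (fiber already straightened along the second array axis, nearest-neighbour sampling); this is an illustration, not the authors' implementation.

```python
import numpy as np

# Resample the volume around a straightened fiber on a (theta, l, r)
# grid, storing the result as an (nx, ny, nz) planar image.
def planarize(volume, axis_point, n_theta=64, n_len=32, n_r=8, r_max=8.0):
    """volume: 3D array; fiber assumed along axis 1, centred at axis_point."""
    cx, cz = axis_point                    # medial-axis position in (x, z)
    out = np.zeros((n_theta, n_len, n_r))
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    for i, theta in enumerate(thetas):     # ray directions theta_1..theta_nx
        for j in range(n_len):             # axial positions l_1..l_ny
            y = int(j * volume.shape[1] / n_len)
            for k in range(n_r):           # radial slices r_1..r_nz
                r = (k + 1) * r_max / n_r
                x = int(round(cx + r * np.cos(theta)))
                z = int(round(cz + r * np.sin(theta)))
                if 0 <= x < volume.shape[0] and 0 <= z < volume.shape[2]:
                    out[i, j, k] = volume[x, y, z]  # nearest-neighbour pick
    return out
```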

The total number of slices N (= r_nz) in Fig.7 was determined using the following rationale. We can imagine the actin as a liquid (the green) surrounded by a cylinder that is oriented horizontally. To determine the total number of slices, we orient the cylinder upright, causing our "liquid" actin to flow to the bottom, as depicted in the right-most picture of Fig.3. From here, we can gather two pieces of information: mass and height. Because each voxel of an actin structure has its own intensity, the actin gathered in the bottom of the enveloping cylinder has a certain mass, namely the summation of the voxel intensities of the actin. The gathered actin also has a certain height (h), the distance from the base to the top of the gathered actin, which is calculated by equating the amount of actin in the enveloping cylinder to π r_cylinder² h. These two pieces of information, mass and height, are used to compute the number of slices as in the following equation:

$$N = \left( C_1 + C_2\,\frac{\pi r_{cylinder}^{2} h}{\pi r_{cylinder}^{2} l_{n_y}} + C_3\,\frac{\sum_{pixels\ in\ green\ part} intensity}{\sum_{pixels\ in\ cylinder} maxIntensity} \right) r_{cylinder}, \tag{1}$$

where r_cylinder is the radius of the enveloping cylinder, maxIntensity is the highest intensity value a voxel can represent, and α_cylinder and β_cylinder are defined as:

$$\alpha_{cylinder} = \frac{\pi r_{cylinder}^{2} h}{\pi r_{cylinder}^{2} l_{n_y}}, \qquad \beta_{cylinder} = \frac{\sum_{pixels\ in\ green\ part} intensity}{\sum_{pixels\ in\ cylinder} maxIntensity}. \tag{2}$$

The values of C_1, C_2, and C_3 in Eq. (1) are determined by linear programming [26].

The objective function of the linear program is to maximize Eq. (3), so that the enveloping cylinder contains a sufficient amount of actin whose intensity is as strong as possible:

$$r_{cylinder}\, C_1 + (r_{cylinder}\, \alpha_{cylinder})\, C_2 + (r_{cylinder}\, \beta_{cylinder})\, C_3, \tag{3}$$

with constraints ① C_1 + C_2 + C_3 = 2, ② C_1 > C_2 > 2C_3 > 0, and ③ 0.2 > C_3 > 0.

The above three constraints were created based on the rationale that r_cylinder is the most important measure in deciding the number N. So, in Eq. (3), C_1 is determined in a way that retains as much r_cylinder information as possible, by making C_1 close to 1. Since C_2 and C_3 are considered less important than C_1, each of their values is assumed to be smaller than C_1. In addition, we consider α_cylinder (the portion of actin in the cylinder) more important than β_cylinder, since we want to contain more actin inside the cylinder, and therefore set C_2 to be greater than 2C_3.
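With the strict inequalities relaxed by a small margin eps (our assumption, not the authors'), the coefficient selection can be sketched with SciPy's linear-programming routine:

```python
from scipy.optimize import linprog

# Solve for (C1, C2, C3): maximize Eq. (3) subject to the three
# constraints; linprog minimizes, so the objective is negated.
def solve_coefficients(r_cyl, alpha, beta, eps=1e-3):
    c = [-r_cyl, -r_cyl * alpha, -r_cyl * beta]   # negated objective
    A_eq, b_eq = [[1, 1, 1]], [2]                 # C1 + C2 + C3 = 2
    A_ub = [[-1, 1, 0],                           # C2 - C1 <= -eps  (C1 > C2)
            [0, -1, 2]]                           # 2*C3 - C2 <= -eps (C2 > 2*C3)
    b_ub = [-eps, -eps]
    bounds = [(None, None), (None, None), (eps, 0.2 - eps)]  # 0 < C3 < 0.2
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x  # (C1, C2, C3)
```

Because the objective weights C_1 most heavily, the solver pushes C_1 toward its feasible maximum while C_2 and C_3 sit near their lower margins.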

5.2 3D probability density estimation

For each voxel p of the 3D flattened actin distribution image stack, we locate a sphere of radius r_sphere centred at p. The radius r_sphere is determined by finding the k-nearest (KNN) actin voxels from p and calculating their average distance. After repeated experiments, we chose the value of k that yielded the most satisfactory results. We then look at the actin voxels inside the sphere and fit them to the best-fitting 3D Gaussian function by considering their distribution. Since the fitted 3D Gaussian function is determined by a mean and a covariance, the mean μ was set to the selected point p, and the covariance Σ was calculated using a maximum likelihood (ML) fitting method [27] as in Eq. (4):

$$\Sigma = \frac{1}{n}\sum_{i=1}^{n}(X_{i}-\mu)(X_{i}-\mu)^{T},$$

where n is the number of actin voxels inside the sphere, $X_{i}=(x_{i}, y_{i}, z_{i})$, and $\mu=(\mu_{x}, \mu_{y}, \mu_{z})$.
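A minimal sketch of this step, assuming actin voxels are given as (x, y, z) tuples: the KNN-based radius and the ML covariance of Eq. (4) can be computed directly, without external libraries.

```python
import math

def knn_radius(p, actin_voxels, k):
    """r_sphere: average distance from p to its k nearest actin voxels.
    The value of k is chosen empirically, as in the text."""
    dists = sorted(math.dist(p, q) for q in actin_voxels if q != p)
    return sum(dists[:k]) / k

def ml_covariance(points, mu):
    """Eq. (4): Sigma = (1/n) * sum_i (X_i - mu)(X_i - mu)^T,
    returned as a 3x3 nested list."""
    n = len(points)
    sigma = [[0.0] * 3 for _ in range(3)]
    for x in points:
        d = [x[j] - mu[j] for j in range(3)]
        for r in range(3):
            for c in range(3):
                sigma[r][c] += d[r] * d[c] / n
    return sigma

# toy data: actin voxels placed symmetrically about p = (0, 0, 0)
pts = [(1, 0, 0), (-1, 0, 0), (0, 2, 0), (0, -2, 0), (0, 0, 3), (0, 0, -3)]
cov = ml_covariance(pts, (0.0, 0.0, 0.0))
```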

Next, we renegotiate the height (a) of the fitted Gaussian bell curve:

$$f(X) = a \exp\!\left(-\frac{1}{2}(X-\mu)^{T}\Sigma^{-1}(X-\mu)\right).$$

Four cases need consideration when we renegotiate the height. First, if the density inside the sphere is high and the intensity is strong, a becomes much higher. Second, if the density is high but the intensity is weak, a becomes higher, though not as high as in the first case. Third, if the density is low and the intensity is strong, a becomes slightly higher. Fourth, if the density is low and the intensity is weak, a increases slightly less than in the third case. Considering these four cases, the renegotiated height is expressed in Eq. (6):

$$a\left(1 + C_{1}\,\alpha_{sphere} + C_{2}\,\beta_{sphere}\right), \quad \text{where } C_{1} > C_{2},$$

where

$$\alpha_{sphere} = \frac{\text{number of actin voxels inside the sphere}}{\text{number of voxels inside the sphere}}, \qquad \beta_{sphere} = \frac{\sum_{\text{actin voxels inside the sphere}} \text{intensity}}{\sum_{\text{voxels inside the sphere}} \text{maxIntensity}}.$$

The values of C1 and C2 are determined with a linear programming approach, based on the same rationale discussed for Eq. (3).
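The height renegotiation of Eq. (6) can be sketched as follows. The weights c1 > c2 stand in for the linear-programming-determined values, and the intensities are hypothetical (8-bit scale); the four example calls reproduce the four cases discussed above.

```python
def renegotiated_height(a, actin_intensities, n_voxels, max_intensity,
                        c1=0.7, c2=0.3):
    """Eq. (6): scale the Gaussian peak a by local actin density (alpha)
    and relative intensity (beta). c1 > c2 reflects that density outweighs
    intensity; the specific weight values here are hypothetical."""
    alpha = len(actin_intensities) / n_voxels                     # alpha_sphere
    beta = sum(actin_intensities) / (n_voxels * max_intensity)    # beta_sphere
    return a * (1 + c1 * alpha + c2 * beta)

# the four cases from the text, with hypothetical intensities (max 255)
dense_strong  = renegotiated_height(1.0, [250] * 8, 10, 255)
dense_weak    = renegotiated_height(1.0, [50] * 8, 10, 255)
sparse_strong = renegotiated_height(1.0, [250] * 2, 10, 255)
sparse_weak   = renegotiated_height(1.0, [50] * 2, 10, 255)
```

With c1 > c2, the resulting heights order exactly as the four cases describe: dense/strong > dense/weak > sparse/strong > sparse/weak.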

The far left and far right edges of the 3D flattened actin distribution image stack are adjacent to each other, because the 3D image came from the actin distribution over the cylindrical fiber. Thus, when we calculate a 3D Gaussian probability density function f(X) (= f(x, y, z)) over the 3D flattened actin distribution image, if x < 0 we calculate f(x + 2l, y, z); likewise, if x > 2l, we calculate f(x − 2l, y, z), as in Fig.8, where l is half the circumference of a section of the cylindrical fiber. This treatment is required for the rotation-invariant property of a cylinder. In sum, for each actin voxel p, we fit a 3D Gaussian function and calculate its probability density value on every voxel of the 3D flattened actin distribution image stack, so that each voxel accumulates PDF values from every fitted Gaussian. Then, for each voxel, we add up all PDF values stored at that voxel and normalize them. Thus, the 3D flattened actin distribution image stack is transformed into the 3D probability-density-valued (PDF) image.
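The periodic boundary treatment can be sketched as a small coordinate wrap, assuming l is given in voxels:

```python
def wrap_x(x, l):
    """Periodic wrap along x: the left and right edges of the flattened
    image are adjacent (circumference 2l), so f(x, y, z) is evaluated at
    f(x + 2l, y, z) when x < 0 and at f(x - 2l, y, z) when x > 2l."""
    if x < 0:
        return x + 2 * l
    if x > 2 * l:
        return x - 2 * l
    return x
```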

5.3 Rotation invariance

When we transform the cylindrical actin distribution into a 3D planar actin distribution, several cylindrical actin distributions can be mapped onto the same 3D planar actin distribution because of the rotational symmetry of a cylinder about its medial axis. To account for this rotation-invariant property of a cylinder, we find the highest PDF value in the 3D PDF-valued image and shift it onto the plane that passes through the middle of the x-axis of the 3D PDF-valued image.
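Under the assumption that the stack is indexed [z][y][x], this centering step can be sketched as a cyclic shift along x:

```python
def center_on_peak(stack):
    """Cyclically shift a flattened stack (indexed [z][y][x]) along x so
    that the voxel with the highest PDF value lands on the middle x-plane,
    exploiting the rotational symmetry of the cylinder."""
    width = len(stack[0][0])
    # locate the x index of the global maximum PDF value
    peak_x = max(
        ((x, stack[z][y][x])
         for z in range(len(stack))
         for y in range(len(stack[0]))
         for x in range(width)),
        key=lambda t: t[1],
    )[0]
    shift = width // 2 - peak_x
    # new[x] = old[(x - shift) mod width], a cyclic (wrap-around) shift
    return [[[row[(x - shift) % width] for x in range(width)]
             for row in plane] for plane in stack]

# toy 1x1x4 stack: the peak at x = 0 moves to the middle plane x = 2
shifted = center_on_peak([[[5, 0, 0, 0]]])
```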

5.4 The architecture of the CABs recognizer

The 3D PDF-valued image is used to form a feature vector, which serves as the input for the recognizer of CABs. After thresholding the 3D PDF-valued image and binarizing it, the resulting 3D image is down-sampled, both to create a compact feature vector and to let the feature vector cover a wide area of the possible feature space. An example of the 3D PDF-valued image and its thresholded and binarized counterpart is shown in Fig.9.

The 3D PDF-valued image is thresholded using the automatic thresholding method described in [28]. The down-sampling of the thresholded and binarized 3D image is executed along all three axes: the horizontal axis, the vertical axis, and the depth axis.
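Otsu's automatic threshold [28] operates on the gray-level histogram; a self-contained sketch (assuming integer-valued intensities after scaling the PDF values) is:

```python
def otsu_threshold(values, bins=256):
    """Otsu's method [28]: pick the threshold that maximizes the
    between-class variance of the gray-level histogram.
    `values` are integers in [0, bins)."""
    hist = [0] * bins
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w0 += hist[t]            # weight of class 0 (levels 0..t)
        if w0 == 0:
            continue
        w1 = total - w0          # weight of class 1 (levels t+1..)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# hypothetical bimodal intensities: background at 10, actin at 200
vals = [10] * 50 + [200] * 50
t = otsu_threshold(vals)
binary = [1 if v > t else 0 for v in vals]   # binarized image
```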

Fig.10 shows the formation of elements of a feature vector on the down-sampled 3D image stack when a voxel of interest, (c), is located at the center in the first image slice of the down-sampled 3D image.

Because the most important features used to decide a CAB lie on the first image slice, we consider more features the closer an image slice is to the front of the down-sampled 3D image. We consider all voxels on both the vertical line and the horizontal line passing through the voxel (c). Moreover, we consider all voxels on the boundary of each square whose boundary passes through the voxel $a_{n}$ $\left(= \sum_{i=1}^{n-1}(i+1)+1,\ \text{where } n \ge 1\right)$ and whose center is the voxel (c), where squares 1 to n lie in the first image slice, squares 1 to n−1 in the second image slice, and so on.

We renamed the down-sampled 3D image obtained after thresholding and binarizing the 3D PDF-valued image as a 3D vectored image. Some of the voxels in the 3D vectored image have positive signals representing the presence of actin; the others have negative signals. Note that a positive signal means that its voxel has the numerical value “1”; however, a negative signal does not mean that its voxel has the numerical value “0”.

There are two kinds of 3D vectored images: a CAB-vectored image and a non-CAB-vectored image. CAB-vectored images came from actual CABs. Non-CAB-vectored images came from non-CABs. For the recognizer of CABs among candidate CABs, we employed two multilayer perceptron (MLP) ANNs. The activation function on each processing element of the employed ANN is given by the sigmoid activation function [29], which outputs a number between 0 (for low input values) and 1 (for high input values).
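A minimal sketch of such a processing element and a one-hidden-layer forward pass follows; all weights in the example call are hypothetical.

```python
import math

def sigmoid(x):
    """Sigmoid activation [29]: maps any real input to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(x, hidden_w, hidden_b, out_w, out_b):
    """Forward pass of a one-hidden-layer perceptron with sigmoid units,
    the kind of processing element used in ANN 1 and ANN 2."""
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(hidden_w, hidden_b)]
    return sigmoid(sum(w * hi for w, hi in zip(out_w, h)) + out_b)

# hypothetical 2-input, 2-hidden-unit network
y = mlp_forward([1.0, 0.0], [[2.0, -1.0], [0.5, 0.5]], [0.0, 0.0],
                [1.0, -1.0], 0.0)
```

Whatever the weights, the sigmoid output unit guarantees a value strictly between 0 and 1, which is what the “1”/“0” training targets below exploit.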

Fig.10 shows the architecture of the recognizer that identifies CABs among candidate CABs. In the training stage, a voxel with a positive signal, either in the CAB-vectored training image or in the non-CAB-vectored training image, becomes a voxel of interest (c) (Fig.10). The voxel (c) and its features, as in Fig.10, form a feature vector, which is used as a training input vector for ANN 1. Similarly, a voxel with a negative signal in either training image becomes a voxel of interest (c), and the feature vector around it is used as a training input vector for ANN 2. When we train the ANNs, if the feature vector is formed from a CAB-vectored training image, we associate it with an output of “1”; conversely, if the feature vector is from a non-CAB-vectored training image, we associate it with an output of “0”.

In Fig.10, we proposed two ANNs, each designed to be used under different conditions depending on the signal of the voxel of interest. The voxel of interest (c) in a feature vector may have either a positive or a negative signal, and a vectored image may come from either a CAB or a non-CAB. Let us define P(CAB | c = positive) as the probability that the vectored image comes from a CAB given that the voxel of interest has a positive signal. ANN 1 refers to an ANN adjusted to increase both P(CAB | c = positive) and P(non-CAB | c = positive). Likewise, ANN 2 refers to an ANN adjusted to increase both P(CAB | c = negative) and P(non-CAB | c = negative). Thus, each ANN produces more reliable outputs according to the type of signal of the voxel of interest.

In the testing stage, for each voxel in an unknown 3D vectored image, if the voxel has a positive signal, its feature vector is tested on ANN 1 and then ANN 1 produces an output value v1. If the voxel has a negative signal, its feature vector is tested on ANN 2 and then ANN 2 produces an output value v2.

The last stage of the recognizer decides whether the unknown 3D vectored image comes from CABs or from non-CABs. For each voxel in the first slice image of the unknown 3D vectored image, we tested its feature vector, and the result was written in a decision table (Fig.10). If the voxel has a positive signal and its resulting value v1 on ANN 1 is above a predetermined threshold (th), the voxel’s corresponding pixel in the decision table is marked as “O” . If the resulting value v1 is below the threshold (th), the voxel’s corresponding pixel in the decision table is marked as “X”. Similarly, if the voxel has a negative signal and its resulting value v2 on ANN 2 is above the threshold (th), the voxel’s corresponding pixel in the decision table is marked as “O”. If the resulting value v2 is below the threshold (th), the voxel’s corresponding pixel in the decision table is marked as “X”. Then, using this classification, the number of “O” and the number of “X” in the decision table are counted. Finally, if the number of “O” is greater than that of “X”, we decided that the tested unknown 3D vectored image came from the group of CABs. Otherwise, we decided that it came from the group of non-CABs.
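The final voting step can be sketched as a simple majority count over the decision table, assuming the ANN outputs for the first slice have been collected into a list:

```python
def classify(first_slice_outputs, th=0.5):
    """Final decision stage: each tested voxel's ANN output (v1 or v2) is
    marked 'O' if above the threshold th, else 'X'; a majority of 'O' marks
    classifies the image as coming from a CAB. The value of th here is a
    hypothetical placeholder for the predetermined threshold."""
    marks = ['O' if v > th else 'X' for v in first_slice_outputs]
    return 'CAB' if marks.count('O') > marks.count('X') else 'non-CAB'

# hypothetical ANN outputs for the first slice of an unknown vectored image
label = classify([0.9, 0.8, 0.2])
```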

References

[1]

Karp,J. M. (2007). Development and therapeutic applications of advanced biomaterials. Curr. Opin. Biotechnol., 18: 454–459

[2]

Langer,R. Tirrell,D. (2004). Designing materials for biology and medicine. Nature, 428: 487–492

[3]

Park,D. Y., Jones,D., Moldovan,N. I., Machiraju,R. (2013). Robust detection and visualization of cytoskeletal structures in fibrillar scaffolds from 3-dimensional confocal images. In: IEEE Symposium on Biological Data Visualization, Atlanta, GA, pp. 25–32

[4]

Jones,D., Park,D., Anghelina,M., Pécot,T., Machiraju,R., Xue,R., Lannutti,J. J., Thomas,J., Cole,S. L., Moldovan,L. (2015). Actin grips: circular actin-rich cytoskeletal structures that mediate the wrapping of polymeric microfibers by endothelial cells. Biomaterials, 52: 395–406

[5]

Boykov,Y., Veksler,O. (2001). Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell., 23: 1222–1239

[6]

Friman,O., Hindennach,M., Kühnel,C., Peitgen,H. (2010). Multiple hypothesis template tracking of small 3D vessel structures. Med. Image Anal., 14: 160–171

[7]

Petroll,W. M., Cavanagh,H. D., Barry,P., Andrews,P. Jester,J. (1993). Quantitative analysis of stress fiber orientation during corneal wound contraction. J. Cell Sci., 104: 353–363

[8]

Thomason,D. B., Anderson,O. (1996). Fractal analysis of cytoskeleton rearrangement in cardiac muscle during head-down tilt. J. Appl. Physiol., 81: 1522–1527

[9]

Weichsel,J., Herold,N., Lehmann,M. J., Kräusslich,H., Schwarz,U. (2010). A quantitative measure for alterations in the actin cytoskeleton investigated with automated high-throughput microscopy. Cytometry A, 77: 52–63

[10]

Lichtenstein,N., Geiger,B. (2003). Quantitative analysis of cytoskeletal organization by digital fluorescent microscopy. Cytometry A, 54: 8–18

[11]

Shariff,A., Murphy,R. F. Rohde,G. (2010). A generative model of microtubule distributions, and indirect estimation of its parameters from fluorescence microscopy images. Cytometry A, 77: 457–466

[12]

Fleischer,F., Ananthakrishnan,R., Eckel,S., Schmidt,H., Kas,J., Svitkina,T., Schmidt,V. (2010). Actin network architecture and elasticity in lamellipodia of melanoma cells. New J. Phys., 9: 1–14

[13]

Li,H., Shen,T., Vavylonis,D. (2011). Actin filament segmentation using dynamic programming. In: International Conference on Information Processing in Medical Imaging (IPMI), pp. 411–423

[14]

Li,H., Shen,T., Vavylonis,D. (2010). Actin filament segmentation using spatiotemporal active-surface and active-contour models. In: International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 86–94

[15]

Li,H., Shen,T., Vavylonis,D. (2009). Actin filament tracking based on particle filters and stretching open active contour models. In: International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 673–681

[16]

Li,H., Shen,T., Smith,M. B., Fujiwara,I., Vavylonis,D. (2009). Automated actin filament segmentation, tracking, and tip elongation measurements based on open active contour models. In: IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), pp. 1302–1305

[17]

Fujiwara,I., Vavylonis,D. Pollard,T. (2007). Polymerization kinetics of ADP- and ADP-Pi-actin determined by fluorescence microscopy. Proc. Natl. Acad. Sci. USA, 104: 8827–8832

[18]

Can,A., Shen,H., Turner,J. N., Tanenbaum,H. L. (1999). Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms. IEEE Trans. Inf. Technol. Biomed., 3: 125–138

[19]

Tupin,F., Maitre,H., Mangin,J. F., Nicolas,J. M. (1998). Detection of linear features in SAR images: application to road network extraction. IEEE Trans. Geosci. Remote Sens., 36: 434–453

[20]

Stoica,R., Descombes,X. (2004). A Gibbs point process for road extraction from remotely sensed images. Int. J. Comput. Vis., 57: 121–136

[21]

Aguet,F., Jacob,M. (2005). Three-dimensional feature detection using optimal steerable filters. In: IEEE International Conference on Image Processing, pp. 1158–1161

[22]

Wu,J., Rajwa,B., Filmer,D. L., Hoffmann,C. M., Yuan,B., Chiang,C., Sturgis,J. Robinson,J. (2003). Automated quantification and reconstruction of collagen matrix from 3D confocal datasets. J. Microsc., 210: 158–165

[23]

Stein,A., Vader,D., Jawerth,L., Weitz,D., Sander,L. (2009). An algorithm for extracting the network geometry of three-dimensional collagen gels. J. Microsc., 232: 463–475

[24]

Guy,G. (1997). Inference of surfaces, 3D curves, and junctions from sparse, noisy 3D data. IEEE Trans. Pattern Anal. Mach. Intell., 19: 1265–1277

[25]

Medioni,G., Lee,M. S., Tang,C. (2000). A Computational Framework for Segmentation and Grouping. New York: Elsevier Science Inc.

[26]

Nemhauser,G. L., Wolsey,L. (1988). Integer and Combinatorial Optimization. New York: John Wiley & Sons

[27]

King,G. (1998). Unifying Political Methodology: The Likelihood Theory of Statistical Inference. New York: Cambridge University Press

[28]

Otsu,N. (1979). A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern., 9: 62–66

[29]

Mitchell,T. (1997). Machine Learning. New York: WCB–McGraw–Hill

RIGHTS & PERMISSIONS

The Author(s). Published by Higher Education Press.
