1. Key laboratory of Environment Change and Resources Use in Beibu Gulf (Ministry of Education), Nanning Normal University, Nanning 530001, China
2. School of Civil Engineering and Architecture, Guangxi University, Nanning 530004, China
lyily1112@126.com
History
Received: 13 Aug 2022
Accepted: 18 May 2023
Online First Date: 19 Dec 2023
Issue Date: 15 Jul 2024
Abstract
Sedimentation is a key process affecting wetland sustainability and carbon burial flux. In the context of sea level rise, climate change, and human activities, a deeper understanding of the sedimentary dynamics of wetlands is critical for predicting landscape evolution and changes in carbon burial flux. In this study, based on field hydrological observations in a mangrove system in the Nanliu River estuary, we found that the net flux of suspended sediment to the mangrove is 39−72 kg/m in tidal cycles during which a Turbidity Maximum Zone (TMZ) forms in the surface layer, but only 9−18 kg/m in tidal cycles without a TMZ. The higher net flux of suspended sediment to the mangrove in tidal cycles with a surface-layer TMZ is attributed to high suspended sediment concentration (SSC) during the rising tide and intense flocculation in the mangrove. The significant discrepancy in sedimentation rate among mangrove patches can also be explained by the probability of a TMZ forming in the surface layer of the estuary. In the future, rapid sea level rise may change the TMZ pattern in the estuary, resulting in non-negligible variation in wetland sedimentation rates. Consequently, the fragility of estuarine wetlands may be misjudged if it is assessed only from present-day sedimentation rates.
Tao LIU, Ying LIU, Baoqing HU. The important role of Turbidity Maximum Zone in sedimentary dynamic of estuarine mangrove swamp. Front. Earth Sci., 2024, 18(1): 219‒226. https://doi.org/10.1007/s11707-022-1083-1
Introduction
A multi-model ensemble does not actually provide us with a reliable way to understand uncertainty, but instead represents a kind of sensitivity analysis, which is not inherently related to model uncertainty. There exists no consistent (i.e., asymptotic or bounded) theoretical relationship between sensitivity and uncertainty, and this distinction has important implications for scientific inference. In particular, if the scientific goal is to build the best hypothesis-driven model possible given (imperfect) experimental data, then any form of hypothesis testing fundamentally requires separating model error from data error.
Further, inferences over a multi-model ensemble only converge if the truth can be sufficiently well approximated as a mixture of the models in the ensemble. In general, however, we cannot know whether a true model is a member or mixture of the models in a particular ensemble. The consequence is that inferences over multi-model ensembles admit unbounded errors under conditions that are not evaluable. Ensemble-based methods are therefore fundamentally unreliable for either uncertainty quantification or hypothesis testing. On the other hand, the argument that we present here is that information theory makes it possible to derive a consistent (i.e., bounded) method for partitioning of data error from model error, and this leads to a reliable hypothesis test.
Throughout this discussion, we will distinguish between two different things:
• Uncertainty decomposition separates the contribution to total uncertainty from different sources of error in a modeling chain (e.g., model inputs, model parameters, model structure, measurement error, etc.).
• Uncertainty propagation provides estimates of uncertainty in forecasts or predictions.
The former is required for hypothesis testing, and the latter for decision-support. Our proposal is that information theory solves the uncertainty decomposition problem, and therefore leads to a reliable hypothesis test. However, information theory does not apparently provide a way to reliably propagate uncertainty into future predictions. To address the latter, we suggest that instead of constructing hydrology models as solutions to sets of partial differential equations representing various conservation laws, we might construct hydrology models as Bayesian networks constrained by conservation symmetries.
Uncertainty
It is impossible to define uncertainty in a meaningful way without reference to an underlying system of logic. This is because uncertainty is an epistemological concept in the sense that any meaningful understanding of uncertainty requires a theory of knowledge. Uncertainty resulting from scientific endeavors can only be understood in the context of a well-defined philosophy of science. For example, under a philosophy of science based in probability theory or probabilistic logic, uncertainty might be understood as the difference between the (unknown) truth-value of some proposition and our state of belief about that truth-value. Uncertainty, in this case, is due wholly to the fact that we must rely on non-deductive reasoning in situations where information is limited.
There have been many attempts to define uncertainty in the hydrological sciences (e.g., Montanari, 2007; Renard et al., 2010; Beven, 2016). None of these attempts – that we are aware of – achieves logical self-consistency. However, even if this minimum standard were met, it is widely understood that no such attempt in any branch of science has yielded a consistent estimator of a measurable quantity (Nearing et al., 2016b). This is a problem because, at least since Popper (1959), it has been understood that hypothesis testing must account for uncertainty. This paper outlines a procedure for hypothesis testing that does not require quantifying uncertainty, but is coherent and reliable (i.e., bounded) in the presence of arbitrary (unknown and unknowable) uncertainty.
In the context of any particular science experiment, there are two fundamental sources of uncertainty: (i) the fact that our hypotheses may not be perfect descriptions of dynamic systems, and (ii) any imprecision and incompleteness inherent in experimental data. The latter is simply the fact that any set of experimental control (input) data will never fully determine any set of experimental response (output) data, even given a hypothetically “perfect” model. While this indeterminacy may be partially due to real, ontic randomness that manifests at scale (Albrecht and Phillips, 2014), it is, more generally, just a matter of limited information in experimental inputs 1).
1) Even if ontic randomness does exist in macro-scale systems like watersheds, this still manifests as limited information in experimental inputs.
We will use the terms epistemic and aleatory to refer to, respectively, hypothetical (imperfect system description) and experimental (data imprecision and incompleteness) components of uncertainty. The intuition we encourage is that epistemic uncertainty is uncertainty about scientific hypotheses, and aleatory uncertainty is related to incomplete information in experimental input/output data. This differs somewhat from distinguishing between ontic (as aleatory) vs. epistemic randomness, which is not a meaningful distinction given our current inability to solve the Schrödinger equation at watershed scales. Our perspective also differs from distinguishing between aleatory vs. epistemic uncertainty such that the former is variability that can be treated probabilistically while the latter cannot (e.g., Beven, 2016); this is not a valid distinction because probability theory is fundamentally an epistemic (i.e., logical) calculus (Van Horn, 2003).
Given that the job of a scientist is – at least ostensibly – to build models that are as close to perfect as possible, separating these two sources of uncertainty (epistemic vs. aleatory) is fundamental to the project of hypothesis testing, and therefore of science in general. As an example, take the simplest experiment we might imagine (illustrated in Fig. 1), consisting of a model M that takes one control variable u to determine one response variable y. The variable z_u represents data about the system control while z_y represents data about system response, and these data may not correspond exactly to the phenomenological 2) variables u and y. There is, in principle, some “perfect” (but unknown) probability distribution p*(z_y | z_u) that represents the distribution over actual experimental data that is implied by the real dynamic system, including all measurement devices. This distribution represents aleatory uncertainty, as defined above, in the context of the particular experiment, and completely describes everything that it is possible to know about the relationship between measured components of the system inputs and response given available data (one could think of p*(z_y | z_u) as a perfect model).
2) Something is said to be phenomenological if it is observable in principle. Theoretical laws are not phenomenological because only their effects on existent objects can be observed; however, properties of existent objects – things like porosity, bulk density, precipitation, and streamflow – are phenomenological, even if it is impossible to measure these things exactly in practice.
Fig. 1 Diagram of a simple experiment with a single input and single output. Even in the simplest case, we require at least one process hypothesis M_y, and at least two measurement models M_zu and M_zy. Aleatory uncertainty is defined as the (unknown) distribution p*(z_y | z_u), which would only be knowable if we had access to both a perfect process model and also perfect measurement models. A full model is the conjunction M = {M_y, M_zu, M_zy}.
Our objective is to represent the system with a hypothesis-driven, usually physically-based, model in a way that is, in some sense, explanatory (i.e., reductionist). It is important to remember that because the relationships between the phenomenological variables (u and y) and their corresponding data (z_u and z_y) are determined by measurement devices, which are just physical systems like any other, it is necessary to model measurement devices as part of the whole dynamic system that we are testing. This is a consequence of confirmation holism (Stanford, 2016) – it is impossible to separate a measurement model from the model of the dynamical system that we want to learn about during any hypothesis test. This means that all hypothetical models must be probabilistic to account for aleatory uncertainty, a fact that is well-known and widely discussed in the hydrology literature (e.g., Weijs et al., 2010; Beven, 2016).
The model itself, however, is subject to epistemic uncertainty in all of its component process representations. This epistemic uncertainty about individual process representations, including all measurement devices, manifests as uncertainty in all phenomenological components simulated by the model. This means that not only must each individual hypothetical model be probabilistic to account for aleatory uncertainty, but also that we must have probability distributions over families of these probabilistic models.
The simple example in Fig. 1 is apparently completely generalized by making u, y, z_u, and z_y vectors instead of scalars, so that the concepts and philosophy outlined above apply to more detailed modeling problems in the environmental sciences. For example, the list of phenomenological variables might include any types of parameters, boundary conditions, prognostic states, or diagnostic outputs, and we may have experimental data about any of these things.
Ensembles can’t measure epistemic uncertainty
The most common strategy for treating epistemic uncertainty about the structure and functioning of a dynamical system is to place a probability distribution over some family of hypothetical model components: e.g., p(M^i), where the superscript i indexes a family E = {M^1, ..., M^N} of competing or complementary models. We can estimate a predictive distribution over a finite family of models by Monte Carlo (Metropolis, 1987) integration:

p(z_y | z_u; E) ≈ (1/N) Σ_{i=1}^{N} p(z_y | z_u, M^i), with M^i ~ p(M)    (1)
Predictions from more general model classes must be integrated differently, of course, but the principle is the same. The notation p(z_y | z_u; E) is equivalent to p(z_y | z_u, M^1, ..., M^N). Technically, E is itself a model; however, calling it that might cause confusion with objects like M^i, so we will call M^i a model, E an ensemble, and p(z_y | z_u; E) the ensemble prediction. p(z_y | z_u; E) is exactly the typical multi-model ensemble – we have, thus far, not deviated in any way from standard practice. Sometimes multi-model ensembles are generated simply by choosing different parameter values (e.g., Beven and Freer, 2001) and sometimes by choosing different biogeophysical process representations (e.g., Clark et al., 2015).
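As a concrete illustration of Eq. (1), the sketch below (a hypothetical model family, a synthetic control value, and NumPy assumed; none of it from the original text) builds the ensemble prediction by averaging the predictive densities of a few competing models.

import numpy as np

# Hypothetical competing models M^i: each maps control data z_u to a predictive
# mean for z_y, with an assumed Gaussian error standard deviation (illustration only).
member_means = [lambda z_u: 0.8 * z_u,
                lambda z_u: 0.6 * z_u + 1.0,
                lambda z_u: np.sqrt(np.abs(z_u))]
member_sigmas = [0.5, 0.7, 0.4]

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def ensemble_density(z_y, z_u):
    """Eq. (1): equally weighted average of the member predictive densities."""
    densities = [gaussian_pdf(z_y, f(z_u), s)
                 for f, s in zip(member_means, member_sigmas)]
    return np.mean(densities, axis=0)

# Ensemble predictive density p(z_y | z_u; E) on a grid, for one control value.
z_u_obs = 2.0
z_y_grid = np.linspace(-2.0, 6.0, 200)
p_ensemble = ensemble_density(z_y_grid, z_u_obs)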
We might wish to be more explicitly reductionist in our development of Eq. (1). For example, we might wish to explicitly recognize different components of each model – like different biogeophysical process representations (M_j^i is the ith hypothesis about the jth biogeophysical process), different measurement models (e.g., M_zu^i, M_θ^i, and M_zy^i are models of the measurements of boundary conditions, parameters, and diagnostic outputs, respectively), etc. Accordingly, the appropriate expansion of Eq. (1) is over phenomenological variables using a chain rule like:

p(z_y | z_u; E) ≈ (1/N) Σ_{i=1}^{N} ∫∫∫ p(z_y | y, M_zy^i) p(y | u, θ, M_j^i) p(θ | M_θ^i) p(u | z_u, M_zu^i) du dθ dy    (2)
The aspects of an ensemble that are actually testable, at least in principle, are its predictions – i.e., the probability distribution over experimental data that results from integrations like Eq. (1) or Eq. (2). Notice that these predictions are determined completely by the information that we put into the ensemble – information that comes in the form of hypotheses like the various M^i. So, p(z_y | z_u; E) does not yet represent anything like real uncertainty, because at this point our ensemble may contain models that conflict with reality, and it certainly does not contain all possible models that do not conflict with what we currently know about reality. Specifically, we know with effective certainty that none of the M^i are actually correct, and without collecting and conditioning on experimental data, we cannot know the (inexact) relationship between families like E and reality. We cannot even reliably guess, without empirical testing, whether p(z_y | z_u; E) is substantively biased, let alone anything about its predictive accuracy or precision.
Related to prediction and forecasting, it may seem intuitive that if our ensemble contains a set of models that are plausible representations of the real system, then the resulting p(z_y | z_u; E) will represent a set of plausible outcomes for data z_y. While this may be true, the problem is that this range of plausible outcomes is not a plausible range of outcomes, and any quantification over p(z_y | z_u; E) may be significantly biased relative to either the true frequencies or the true likelihoods of real outcomes. It is, at least in principle, potentially dangerous to confuse an ensemble of plausible models with a plausible representation of uncertainty (Taleb, 2010).
Ensembles like E also do not help with reliable scientific inference. The problem here is that unless we are willing to believe that at least one of the competing models from our family of models is actually “true” in a strict sense, then any scientific inference that we make over E is at least potentially inconsistent. This problem is well-known and well-understood, and is discussed in accessible detail by Gelman and Shalizi (2013). Specifically, the problem is that if the true model is not a mixture of the models in E, then any application of Bayes’ theorem to E is not guaranteed to yield posteriors that cluster in any neighborhood around the best models in terms of future predictions. Importantly, inference over E under misspecification can even result in posteriors that end up predicting with skill worse than the worst model in the class, and sometimes with skill worse than chance (Grünwald and Langford, 2007).
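To make “inference over E” concrete, here is a minimal sketch of the standard Bayesian update of model weights over a deliberately misspecified family (all models, names, and data are synthetic and hypothetical); the point of the argument above is that nothing in this machinery guarantees the resulting posterior behaves like real uncertainty when the truth is not a mixture of the members.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth" chosen to lie outside the ensemble (a misspecified family).
z_u = rng.uniform(0.0, 5.0, size=200)
z_y = np.sin(z_u) + 0.3 * rng.standard_normal(200)

# Competing (wrong) hypotheses M^i: linear maps with an assumed Gaussian error model.
members = [(0.3, 0.5), (0.0, 0.5), (-0.3, 0.5)]  # (slope, error standard deviation)

def log_likelihood(slope, sigma):
    resid = z_y - slope * z_u
    return np.sum(-0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi)))

# Bayes' theorem with a uniform prior over the family: posterior model weights.
log_post = np.array([log_likelihood(a, s) for a, s in members])
weights = np.exp(log_post - log_post.max())
weights /= weights.sum()
# The weights concentrate on the least-wrong member; they say nothing about how far
# the whole family is from the system that actually generated the data.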
In hydrology, the primary danger is that incorrect phenomenological distributions (or measurement distributions) like p(u | z_u, M_zu^i) can result not only in incorrect, but actually in contradictory, inferences over ensembles of process representations like M_j^i. Examples of this effect were given by Beven et al. (2008), and discussed by Clark et al. (2011): “A major unresolved challenge for the ensemble method to work is to ensure that the ensemble includes at least one hypothesis that approximates “reality” within the range of data uncertainty.” The problem is that we can never know whether such a model is available without first being able to separate aleatory from epistemic uncertainties. However, under Eq. (1), any inferences over any particular component of the modeling chain (e.g., the measurement distributions that together represent aleatory uncertainty) are at least potentially inconsistent under misspecification of the distribution over any other component of the modeling chain (e.g., the biogeophysical process hypotheses, M_j^i, that we actually want to test). Thus we are left with a circular problem: we cannot reliably test process models unless we have accurate data models, and we cannot test data models unless we have accurate process models. This is exactly a consequence of the aforementioned confirmation holism problem. To reiterate, hypothesis testing necessarily relies on separating epistemic from aleatory uncertainty.
It is worth pointing out that certain practical implications of this problem have been discussed widely in the hydrology literature. Any application of Bayesian methods for model discrimination requires that a likelihood function be specified a priori to relate model predictions to actual observations. At least in the case of environmental models, this choice is necessarily ad hoc (Beven et al., 2008; Beven, 2016), and Nearing et al. (2016b) point out that this problem of assigning likelihood functions is exactly the problem of separating aleatory from epistemic uncertainty.
So, what we need is a way to reliably (i) separate aleatory from epistemic uncertainty, and (ii) project our epistemic uncertainty onto forecasts and predictions. Information theory provides a solution to the first problem, and we propose that it might offer a solution to the second problem as well.
Information theory can be used to decompose uncertainty
Before we continue, let’s remember that there are currently two dominant and fundamentally different approaches to scientific inference: Bayesian methods (i.e., ensemble-based methods) and falsification-based methods (i.e., statistical hypothesis testing). Gelman and Shalizi (2013) provide an excellent and accessible treatment of this distinction. For our purpose, the issue is that falsification-based hypothesis testing avoids assigning probabilities to models, and thus avoids the inconsistency problems discussed above, but still requires that the models assign probabilities to predictions, and thus requires specification of a prior.
Gong et al. (2013) laid the groundwork for a falsification-based information-theoretic approach that decomposes aleatory from epistemic uncertainty related to any individual model. Since this approach doesn’t rely on an ensemble there is no chance for misspecification, degeneracy, or the resulting inconsistency. More generally, the approach does not rely on assigning any a priori distributions at all – it relies completely on empirical probability distributions, and therefore avoids the problem of requiring a priori distinction between any aspects of the structure of aleatory and/or epistemic uncertainty. To state this another way, by avoiding any direct use of Bayes’ theorem it is not necessary to specify either a prior or a likelihood function, both of which cause errors in inference when misspecified.
This approach is as follows. First, we measure the information contained in experimental control data z_u about experimental response data z_y; call this A_obs. This measure is equal to 1 if the experimental control data completely explains the observed variability in experimental response data, and is equal to 0 if the experimental response data is independent of the experimental control data. A model M then makes predictions – i.e., y_M = M(z_u) – that also contain some fractional information about experimental response data – call this A_M. As long as the measure used for quantifying information is self-equitable, and the probability distribution over y_M is independent of z_y conditional on z_u (i.e., the model acts only on experimental control data and not on experimental response data), then the data processing inequality (Kinney and Atwal, 2014) guarantees that the information content of the model predictions is always bounded above by the actual information content of the data: A_M ≤ A_obs. This is always true because y_M = M(z_u) necessarily results in the Markov relationship z_y → z_u → y_M.
This inequality allows us to ask whether a given model provides as much information about the relationship between experimental control data and experimental response data as is contained in the data themselves. If the answer to this question is “no” – i.e., A_M < A_obs – then we know that model M could be improved given only the information that is currently available from experimental data that we actually have in-hand.
More specifically, if the measure A is the ratio of Shannon’s (1948) mutual information to entropy, then we can quantitatively separate epistemic uncertainty from aleatory uncertainty in the context of available experimental data. Details were given by Gong et al. (2013), but the basic idea is that if mutual information is notated as:

I(z_u; z_y) = Σ_{z_u, z_y} p(z_u, z_y) log[ p(z_u, z_y) / (p(z_u) p(z_y)) ]    (3)

and entropy as:

H(z_y) = −Σ_{z_y} p(z_y) log[ p(z_y) ]    (4)

then

A_obs = I(z_u; z_y) / H(z_y)    (5)

is the fraction of observed entropy in z_y that is explained by observed variability in z_u. The joint distribution p(z_u, z_y) is estimated empirically. In other words, A_obs quantifies the fraction of variability in experimental response data that is explainable due to variability in experimental input data. A similar procedure is used to calculate the information content of the model predictions:

A_M = I(y_M; z_y) / H(z_y)    (6)
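The quantities in Eqs. (3)–(6) can be computed with simple histogram estimators. The sketch below (synthetic data and a deliberately information-losing model; NumPy assumed; the estimators and names are ours, not from Gong et al. (2013)) computes empirical entropy, mutual information, and the ratios A_obs and A_M.

import numpy as np

def entropy(x, bins=20):
    """Empirical Shannon entropy (cf. Eq. (4)) from a histogram, in nats."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(x, y, bins=20):
    """Empirical mutual information (cf. Eq. (3)) from a 2-D histogram, in nats."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

rng = np.random.default_rng(1)
z_u = rng.uniform(0.0, 5.0, size=5000)               # experimental control data
z_y = np.sin(z_u) + 0.3 * rng.standard_normal(5000)  # experimental response data
y_M = np.where(z_u > 2.5, 1.0, 0.0)                  # a deliberately lossy model prediction

A_obs = mutual_information(z_u, z_y) / entropy(z_y)  # Eq. (5)
A_M = mutual_information(y_M, z_y) / entropy(z_y)    # Eq. (6)
# Data processing inequality: A_M <= A_obs, because y_M is a function of z_u alone.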
Ideally, our empirical estimate of A_obs would be an accurate representation of its true value, but of course this is impossible in practice. The challenge is coming up with an empirical estimator of A_obs that is as close to the true value as possible, and although we can’t estimate A_obs directly, we can bound it conservatively. Take any mapping y_g = g(z_u); then, by another application of the data processing inequality, we have A_g ≤ A_obs, such that the information missing from model M (notated A_miss) is bounded by the estimator A_g − A_M as follows:

A_miss = A_obs − A_M ≥ A_g − A_M    (7)
If g has known convergence properties over large function classes (e.g., Hornik, 1991), then in principle A_g → A_obs, and as long as the metric A itself admits a convergent estimator, we have a bounded and convergent estimate of the information content of experimental data. Regressions like g could be any convergent basis expansion; examples might include neural networks, Gaussian processes (Rasmussen and Williams, 2006), polynomial chaos expansions, and many others. The only strict requirement is that g must be data-driven. It is also essential that the input data to g be the same experimental control data z_u that is used as input into model M.
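Continuing the sketch above (variable names carry over from the previous block; scikit-learn is assumed only as an example of an off-the-shelf data-driven regression standing in for g), the bound in Eq. (7) and the resulting test look like this. In practice, when z_u is multivariate, A_obs cannot be estimated directly and the benchmark A_g is what stands in for it; out-of-sample predictions should be used so that overfitting does not inflate A_g.

from sklearn.neighbors import KNeighborsRegressor

# Data-driven benchmark g(z_u): any convergent regression of z_y on z_u would do here.
g = KNeighborsRegressor(n_neighbors=25).fit(z_u.reshape(-1, 1), z_y)
y_g = g.predict(z_u.reshape(-1, 1))  # in practice, predict on held-out data

A_g = mutual_information(y_g, z_y) / entropy(z_y)  # lower bound on A_obs (data processing inequality)
missing_info = A_g - A_M                           # conservative estimate of A_miss in Eq. (7)

if A_M < A_g:
    print(f"Model can be improved with existing data: A_M = {A_M:.2f} < A_g = {A_g:.2f}")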
The fact that this approach always underestimates epistemic uncertainty (i.e., Eq. (7)) is essential. If our purpose were uncertainty propagation, then we might prefer over-estimating uncertainty; however, when the purpose is uncertainty decomposition for hypothesis testing, it is essential that epistemic uncertainty is under-estimated. The reason is the asymmetry between falsification and verification due to the modus tollens. In this case, the proposition we are evaluating for our hypothesis test is that model M extracts all of the information about z_y that is available in z_u (i.e., A_M = A_obs). If there existed a finite probability that A_g > A_obs, then we would have a finite probability of incorrectly rejecting this proposition even when it is true.
To summarize, information theory provides a reliable way of bounding epistemic uncertainty in a completely context-free environment, meaning that we are not simply comparing the performance of two competing hypotheses about how the system behaves. We bound the information contained in experimental data using a purely empirical method, so that we minimize conflation of epistemic with aleatory uncertainties, and any conflation that does occur is reliably in one direction – we will always under-estimate epistemic uncertainty. This approach facilitates a bounded hypothesis test whereby we know that any model for which A_M < A_g has the potential to be improved without collecting any new data. Further, the test is coherent with probability theory, works in the presence of data uncertainty, and does not require subjective choice of a likelihood function. For demonstrations of this theory, the reader is directed to Nearing and Gupta (2015), which applied this theory and method to conceptual runoff models, and Nearing et al. (2016a), which applied the theory to four distributed land surface models.
Information theory for uncertainty propagation
The underlying problem with ensemble-based methods is that we must explicitly formulate each of the hypothetical models in E. Of course, it is impossible to formulate a complete family of plausible models (estimating a range of plausible outcomes is easy, but estimating a plausible range of outcomes is impossible). Instead, we suggest that a better approach would be to develop models that use hypotheses about individual process relationships to constrain model behavior.
It seems very strange to formulate a hypothetical model that we know is wrong, and to then assume that the error in predictions made using that wrong model will remain stationary up to some probability distribution. This goes for process models like M_j^i and also for measurement models like M_zy^i. Similarly, it seems strange to require that models admit solution before they can be run at all. Currently, if we want to run any existing watershed model, we have to provide values for several unknown (and sometimes unknowable) parameters and boundary conditions. We then have to go back and prescribe uncertainty related to all of these components. If we prescribe that uncertainty using inverse methods, then we are fundamentally using a multi-model approach. If we assign that uncertainty a priori, then we are relying on a priori information.
So, consider instead that we would like to be able to run a hydrology model by starting with no prior information at all regarding the nature of the relationships among relevant variables (except, perhaps, for the fact that the system we are simulating is conservative), and to then impose constraints on its behavior based on available information or assumptions we may wish to make. In such a model, the mathematics would fundamentally act in such a way as to constrain probability distributions, instead of acting to solve sets of differential equations and then imposing uncertainties on top of those solutions. Following this perspective, we might approach hydrologic modeling from a “maximum entropy” perspective in which we first impose a uniform or non-informative distribution over possible outcomes (e.g., a uniform distribution over the real number line representing streamflow), after which the model equations act in such a way that they use hypotheses about process-level system behavior to constrain this distribution.
One approach might be as follows. A model is constructed as a Bayesian network, where each node in the network represents a different simulated variable at a particular location in time and space. A complete simulation of a complex and spatially distributed biogeophysical system is therefore represented by a joint probability distribution over a large Bayesian network. Conservation equations would then be imposed on this type of network as symmetry constraints in either time or space. Any other hypotheses about system behavior (e.g., that soil moisture exerts a threshold control on stomatal resistance) may be implemented by imposing constraints on the full network such that outcomes conflicting with imposed constraints are assigned zero probability.
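Purely to make the idea concrete (this is a toy sketch of our own construction, not a hydrologic model and not an implementation proposed here or elsewhere), the block below starts from a maximum-entropy (uniform) joint distribution over a few coarsely discretized water-balance variables, imposes the conservation symmetry P = ET + Q + dS as a hard constraint, and then treats an observation as just another constraint before reading off a predictive marginal.

import itertools
import numpy as np

# Coarsely discretized variables (arbitrary units): precipitation P, evapotranspiration ET,
# runoff Q, and storage change dS, each taking integer values 0..4.
levels = range(5)
outcomes = np.array(list(itertools.product(levels, repeat=4)))  # columns: P, ET, Q, dS
P, ET, Q, dS = outcomes.T

# Maximum-entropy starting point: uniform probability over every joint outcome.
prob = np.ones(len(outcomes)) / len(outcomes)

# Conservation symmetry as a constraint: outcomes violating P = ET + Q + dS get zero probability.
prob = np.where(P == ET + Q + dS, prob, 0.0)
prob /= prob.sum()

# Conditioning on an observation (say, P = 3) is just another constraint.
prob_obs = np.where(P == 3, prob, 0.0)
prob_obs /= prob_obs.sum()

# Predictive marginal over runoff Q implied by the constraints and the observation.
q_marginal = np.array([prob_obs[Q == q].sum() for q in levels])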
We do not, at this moment in time, provide an example of such an implementation, in part because we are suggesting a completely different way of thinking about modeling hydrologic systems, and because there are many aspects of how this approach can be implemented that remain to be worked out. At this time, we are effectively suggesting that the community reconsider the first principles of complex models. Instead of solving sets of differential equations that do not admit solution unless they are fully parameterized – often requiring parameter values that are unobtainable in practice – we propose to instead treat conservation laws as symmetry constraints on maximum entropy distributions. This would allow for real quantitative understanding about the implications of our scientific hypotheses on future events, without necessitating any assumptions about statistical stationarity of model error distributions. It would also allow simulations under any available information – in principle if system properties (either relationships or values of physical parameters) are known, then these could be used to impose constraints on model behavior, but if they are not known then the model would not require them to complete a simulation.
Summary
It has become popular to employ various types of model ensembles to characterize and quantify uncertainty in model predictions, with multi-model ensembles considered as a way of assessing epistemic uncertainty due to imperfect system representation. However, there exists no consistency theorem, boundedness theorem, or convergence theorem that relates any measure over any feasible model ensemble or ensemble prediction with anything that could plausibly be called “uncertainty”.
Because we cannot, in general, ever know whether truth can be sufficiently well approximated as a mixture of the models in a given ensemble, a) Bayesian assessment will almost unavoidably suffer from inconsistency and result in incorrect (even contradictory) inferences, and b) there is no guarantee that the Bayesian posterior will (in any systematic way) be related to anything like real uncertainty.
The solution to assessing epistemic and aleatory components of uncertainty within any modeling framework is, instead, to adopt an information theoretic approach. This makes it possible to quantify the information provided by a system hypothesis in a reliable (bounded) way. This approach suffers neither from the typical problems related to assigning likelihood functions, nor from Bayesian degeneracy and inconsistency, nor from the problem of ad hoc statistical rejection criteria.
Of course, we recognize that conceptualizing a model as “uncertainty restricted via constraints” as opposed to the conventional “prediction plus uncertainty” might seem like a difficult shift to make, and the difference between these two approaches does seem to be very fundamental. Whereas the latter (conventional) approach requires two sets of assumptions (model assumptions + error-structure assumptions), the former only requires one set of assumptions (model assumptions). In the conventional approach, the mathematics of the model acts to make a prediction, to which we add uncertainty. The point is that we really want the mathematics of the model to act in such a way as to rule out areas of prediction space that are not feasible given our modeling hypotheses, theories, assumptions, and available data. To do this, we need a way of imposing conservation symmetries into mathematical descriptions of complex systems.
More broadly, in a complementary paper (Nearing et al., 2016b) we discuss the fact that uncertainty quantification is itself an incoherent activity and that it would be much easier to address science, engineering, policy, and communication problems from an information-centric perspective. Briefly, the idea is that we should communicate and act on the best available scientific information, and that this information may come in the form of probability distributions. So, while this current paper treats uncertainty as a central object in the scientific endeavor, we do believe that this is not an optimal perspective. Instead, our goal with this current paper is to help bridge the gap between current practice and the more fundamental change in perspective advocated by Nearing et al. (2016b).
The more practical point is that we really can’t do model-based experiments in a reliable way without using information theory. What we discuss here is one aspect of that argument – that the standard methods fail to deal reliably with the underlying problem, which is the separation of epistemic from aleatory uncertainties. Instead, multi-model assessments – as provided by intercomparison projects (e.g., Eyring et al., 2016) and multi-hypothesis modeling systems (e.g., Clark et al., 2015) – have their value in providing guidance on which process-level theories and equations are better suited as foundations for avenues of exploration about how to better represent different types of systems.
Finally, it is important to understand that there currently does not exist any reliable way to propagate epistemic uncertainty into future predictions. To this end, we encourage the community to focus on methods that use hypotheses and data to constrain the range of possible future outcomes, rather than to propose (necessarily degenerate) families of explicit modeling alternatives. Contrary to Beven (2006), mass and energy balance models expressed as differential equations might not be the “Holy Grail” of hydrologic modeling.
We are grateful for the help of Mr. Yancheng Tao, Mr. Lianghua Pan and Dr. Xiao Chen in the field investigations. This study was financed by the National Natural Science Foundation of China (Grant No. 41930537) and the Major Project of Guangxi Science and Technology (No. AA23023016).
Competing interests
The authors declare that they have no competing interests.
RIGHTS & PERMISSIONS
2023 Higher Education Press
Map Approval Number: GS京(2024)1246号