NUMERICAL MODELS | Parameter Estimation


Parameter Estimation
A. Aksoy, University of Miami, Miami, FL, USA and NOAA Hurricane Research Division, Miami, FL, USA
© 2015 Elsevier Ltd. All rights reserved.

Synopsis

The state of the art of parameter estimation in atmospheric sciences is discussed. It is common in numerical weather prediction models to find parameterizations to represent subgrid-scale atmospheric processes such as radiation, cloud microphysics, convection, and turbulence. According to a widely accepted opinion, errors in such parameterizations are major contributors to model error. Parameter estimation aims to reduce such errors by utilizing atmospheric observations within the data assimilation framework. Although early work on parameter estimation mostly utilized variational techniques, with the advent and progress of ensemble-based data assimilation systems, an increasing number of parameter estimation studies based on ensemble-based techniques have recently emerged. The goal here is to highlight the state of the art in parameter estimation through the lens of these most recent atmospheric science publications.

Introduction

Parameter estimation in the field of atmospheric sciences refers to the determination of the best values of certain parameters in a numerical model through data assimilation or other similar techniques. The practice is therefore intimately tied to addressing model deficiencies due to inaccurate parameters. The approach is sometimes also referred to as the inverse modeling problem, although, from the viewpoint of parameter estimation, the distinction is mostly semantic. In some publications, one also encounters the alternative term parameter identification.

Parameters in numerical models can be part of processes that are either explicitly resolved or parameterized at the subgrid scale. In the former case, parameters are part of the dynamical core of a model and are directly related to the physical changes in momentum and heat. Some examples of such parameters are the angular speed of the rotation of the Earth, the gravitational acceleration, the gas constant for dry air, and the specific heat of dry air at constant pressure. Such parameters are generally associated with universal physical processes, and their values are known with high accuracy. In the latter case, subgrid-scale parameterizations can lead to large numerical model errors. This is due to two main reasons: (1) limited understanding and observations of the processes lead to large uncertainties in the parameter values that quantify these processes; and (2) crude representation of the parameterized processes within one grid volume (or spectral truncation wavelength) leads to parameters that represent processes of multiple spatial and temporal scales. In numerical models, the most common subgrid-scale processes that are parameterized include turbulence in the planetary boundary layer (PBL), moist convection, phase changes of water (microphysical processes), and radiative transfer between the Earth's surface, atmosphere, and space.
Each of these has a class of parameterizations with multiple proposed schemes (algorithms). These parameters are not known with high accuracy and must be estimated. One can therefore envision two main purposes for parameter estimation. On the one hand, detailed and targeted observations and advanced data assimilation techniques can be used to gain a better understanding of the parameterized physical processes themselves. This would aid in the tuning/calibration of numerical models during their development. On the other hand, once a numerical model becomes operational, one can perform a similar procedure by augmenting the model state with the parameters to be estimated. Although both approaches are procedurally similar, the former is more concerned with a better understanding of the physical processes themselves, whereas the latter aims to improve numerical forecasts through data assimilation by acknowledging that forecast errors are due to uncertainties in both the initial conditions and the model. The early literature of parameter estimation in atmospheric sciences generally focused on the use of variational data assimilation schemes. More recently, with the proliferation and success of ensemble-based data assimilation systems in providing high-quality analyses at a wide range of atmospheric scales, an increasing number of studies based on these techniques have emerged. The goal here is to summarize the state of the art in parameter estimation through the lens of these most recent studies.

Encyclopedia of Atmospheric Sciences 2nd Edition, Volume 4

Parameter Estimation Methodology

The general methodology of parameter estimation follows closely that of state-only data assimilation; further reading in variational and ensemble-based data assimilation techniques is therefore strongly recommended. The first step in parameter estimation generally involves forming an augmented state vector that consists of both the state (control) variables and the parameters to be estimated. Since, by definition, there is no dynamical feedback from the model state to the parameters, the traditional algorithms of data assimilation need to be supplemented by specific measures so that optimal solutions for the parameters can be obtained. In variational data assimilation, explicit penalty terms for the parameters are included in the cost function. Furthermore, in four-dimensional data assimilation, the adjoint model equation is augmented by an explicit term that involves the parameters. In ensemble-based data assimilation, appropriate parameter perturbations are introduced to obtain one-way covariance information between parameters and the observed model state. Additionally, measures are taken to maintain sufficient parameter spread throughout data assimilation cycles.

Figure 1 illustrates the general methodology of parameter estimation in a simplified schematic. Figure 1(a) shows how a first guess that is sensitive to parameter perturbations is generated. First, the numerical model is advanced using perturbed parameters; in ensemble-based techniques, this is achieved by advancing each ensemble member using a perturbed parameter value. Here, it is also assumed that there exists initial-condition uncertainty, as is standard in all data assimilation applications (hence the distribution in the 'initial state' balloon). In variational techniques, a 'first guess' parameter value is used to obtain the first guess for the model state. At this stage, observation operators are also applied to the first-guess model state to obtain the first guess in observation space. In the second stage (Figure 1(b)), regression is carried out between the first guess and the parameter to be updated, to obtain a linear statistical relationship between the (observed) model state and the parameters. In ensemble-based techniques, this involves the computation of sample covariances between observations and parameters; in variational techniques, adjoint models are used. Finally, the update is performed (Figure 1(c)) to obtain a new parameter value informed by the latest available observations. In ensemble-based techniques, this is done by projecting observation-minus-first-guess differences onto the parameter space through the regression relationship; in variational techniques, a cost function minimization procedure that accounts for parameter variability is applied. It is also common to repeat this workflow in subsequent analysis cycles, where model advances take into account the updated parameter values.
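The three-stage ensemble workflow just described can be sketched in a few lines. The following is a minimal, illustrative example with a synthetic scalar parameter and a hypothetical linear observation response; all names and values are invented, and the update shown is a simple stochastic ensemble Kalman filter step rather than any particular operational system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens = 100

# Stage (a): advance each member with a perturbed parameter. Here the
# "model" is a hypothetical linear response y = 2*alpha plus noise.
alpha_prior = rng.normal(1.0, 0.3, n_ens)                      # perturbed parameter ensemble
y_forecast = 2.0 * alpha_prior + rng.normal(0.0, 0.1, n_ens)   # first guess in observation space

# Stage (b): regression between the observed first guess and the parameter,
# via sample (co)variances across the ensemble.
cov_y_alpha = np.cov(y_forecast, alpha_prior)[0, 1]
var_y = np.var(y_forecast, ddof=1)

# Stage (c): project observation-minus-first-guess differences onto the
# parameter through a Kalman-type gain. Perturbed observations keep the
# analysis ensemble from collapsing deterministically.
obs_err_std = 0.1
y_obs = 3.0                                 # observation implied by a true alpha of 1.5
gain = cov_y_alpha / (var_y + obs_err_std**2)
y_obs_perturbed = y_obs + rng.normal(0.0, obs_err_std, n_ens)
alpha_analysis = alpha_prior + gain * (y_obs_perturbed - y_forecast)
```

With these synthetic numbers, the analysis mean moves from the prior value near 1.0 toward the true value of 1.5, while the parameter spread shrinks, which is precisely why explicit spread-maintenance measures become necessary over repeated cycles.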
Parameter estimation is thus largely an extension of data assimilation in terms of its methodology. At the same time, it is unique in that there is only a one-way dynamical interaction between parameters and the model state. As a result, there is an increased possibility of a low signal-to-noise ratio between parameters and observations in the presence of potentially strong dynamical interactions among the model state variables themselves. In the next section, issues related to the difficulty of extracting useful information from observations to successfully update parameters are discussed in more detail.

Figure 1 Illustration of the typical workflow of parameter estimation (using the ensemble method as an example). (a) Forward model. (b) Regression. (c) Update. Symbols α and Y represent the parameter to be estimated and the observation variable, respectively.

Parameter Identifiability

Parameter estimation is, in essence, the problem of inverse mapping from model space to the space of parameters. In the atmospheric science literature, the term parameter identifiability has been used to denote how easy it is to find unique solutions of the inverse problem for unknown parameters from available observations of the model state. An argument is made that nonuniqueness and instability of the identified processes may contribute to the ill-posedness of the problem. A more categorical explanation of parameter identifiability argues that three factors contribute to it (see Figure 2 for a schematic illustration):

a. Observability: Changes produced by parameter variations do not necessarily project sufficiently strongly onto model/observation space. Strictly speaking, this may be due to insufficient model sensitivity to the parameters, to a model-observation mapping that is incompatible with them, or to both; the end result, realistic measurement errors leading to unrealistically large changes in estimated parameter values, is the same. Lack of observability is illustrated in Figure 2(a) by the relatively flat slope of the regression line between the observed model state and the parameter. In such a scenario, small observation differences are projected onto large parameter differences and may result in unphysical or unrealistic parameter values in the update.

b. Simplicity: Sometimes, even with sufficient model sensitivity, the mapping between parameter space and model space can be such that a simple one-to-one relationship does not exist between the model output and the parameter to be estimated, generally because of nonlinearity in how the model responds to the expected parameter variations. In such situations, a unique solution to the inverse problem may not exist, and the optimal parameter value may never be found. Lack of simplicity is illustrated in Figure 2(b) by the quadratic relationship between the observed model state and the parameter. If the parameter space is not carefully constrained (i.e., if the regression is not limited to only one side of the quadratic relationship), the probability that the same observation differences are projected onto multiple distinct parameter differences is increased, which can result in unrealistic parameter updates.

c. Distinguishability: In other situations, various parameters may have similar aggregate effects on the model state. Therefore, observations related to the involved model output may lead to adjustments in the wrong set of parameters. Lack of distinguishability is illustrated in Figure 2(c), where very similar statistical relationships exist between the observed model state and two different types of parameters. In such a situation, a given model response may be interpreted as being the result of perturbations in the wrong parameter, causing certain parameters to be updated toward unrealistic values, whereas those that should be updated remain mostly unchanged.

Figure 2 Illustration of the types of parameter identifiability and how they can negatively impact successful parameter estimation. (a) Observability. (b) Simplicity. (c) Distinguishability. The thick curves in the panels represent the statistical regression relationship between parameters and observed state. Thin dashed lines represent selected parameter–observation pairs on the regression curves.

Successful parameter estimation is intimately linked to whether and how each of these factors is addressed. Observability implies that careful analysis must be carried out to ensure that the model exhibits sufficient sensitivity to the parameters to be estimated; at this stage, it is also imperative to choose variables that best reflect the sensitivity between model space and parameter space. Next, the proposed parameter space must be carefully examined to avoid nonsimple observation–parameter relationships. Finally, when multiple parameters are estimated, cross correlations among the parameters must be carefully identified so that parameters that induce similar effects on the observed fields are eliminated. Only when all of these conditions are met can sufficient parameter identifiability be achieved and successful parameter estimation be performed.
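Two of these identifiability factors can be made concrete with a toy ensemble diagnostic. In the hypothetical sketch below, a linearly observable parameter yields a strong ensemble correlation with the observed state, while a parameter entering quadratically (a nonsimple mapping) shows almost no linear correlation despite strong sensitivity; all responses are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens = 200

alpha = rng.normal(0.0, 1.0, n_ens)     # parameter perturbations across the ensemble
noise = rng.normal(0.0, 0.5, n_ens)     # non-parameter (e.g., initial condition) noise

# Observable case: the observed state responds linearly to the parameter,
# so the linear regression underlying the update sees a strong signal.
y_linear = 3.0 * alpha + noise
corr_linear = np.corrcoef(y_linear, alpha)[0, 1]

# Nonsimple case: a quadratic response. The model is clearly sensitive to
# alpha, but the sample correlation is near zero because both signs of
# alpha map onto the same observed values.
y_quadratic = 3.0 * alpha**2 + noise
corr_quadratic = np.corrcoef(y_quadratic, alpha)[0, 1]
```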
Some suggestions to improve identifiability are to (1) change the observing system (i.e., address observability); (2) modify the model to eliminate the source of nonuniqueness (i.e., address simplicity); or (3) modify the construction of the inverse method altogether, which can be achieved by introducing new information to distinguish and avoid the behavior in the modeled processes that results in the nonuniqueness in the first place. When parameter variations are considered in the context of uncertain initial and boundary conditions, the sensitivity of the model state to parameters can decrease substantially, leading to heightened identifiability issues. This can be investigated through the magnitude and linearity of the signal in the ensemble spread due to parameter perturbations. If a lack of identifiability is detected, scaling parameter perturbations to increase the resulting ensemble spread could be helpful, but the positive effects would be unavoidably constrained if the linearity in ensemble spread is limited. Multiple, simultaneously uncertain parameters are also argued to negatively impact identifiability.

Earlier studies investigated parameter identifiability mostly in the context of model sensitivity. Some studies approached identifiability with both a response function and correlations between model output and uncertain parameters. A response function can be constructed to represent the mean-square distance between the model solution and observations, averaged over all observation points; normalizing by the observation variance then provides a metric that allows for the comparison of model responses to various parameter perturbations. Computing the response function in observation space also addresses the observability aspect of identifiability. Another metric to investigate sensitivity is root-mean-squared correlation, which represents the magnitude of the spatially averaged absolute sample correlation between the model state and parameters. The root-mean-squared correlation metric can also be extended to observation space to account for the observability aspect.
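The two sensitivity metrics described above can be written down compactly. The sketch below is a hypothetical implementation that assumes an ensemble of model values at observation points stored as a (members x points) array; the normalization and averaging choices follow the verbal description rather than any specific published code:

```python
import numpy as np

def response_function(model_at_obs, obs, obs_var):
    # Mean-square distance between the model solution and observations,
    # averaged over all observation points and normalized by the
    # observation variance so that responses are comparable.
    return np.mean((model_at_obs - obs) ** 2 / obs_var)

def rms_correlation(state_ens, param_ens):
    # Root-mean-squared (spatially averaged absolute) sample correlation
    # between the model state at each point and the parameter ensemble.
    # state_ens: (n_ens, n_points); param_ens: (n_ens,).
    corrs = np.array([np.corrcoef(state_ens[:, i], param_ens)[0, 1]
                      for i in range(state_ens.shape[1])])
    return np.sqrt(np.mean(corrs ** 2))

# Synthetic check: a parameter that the state responds to strongly scores
# a much higher rms correlation than one with no influence at all.
rng = np.random.default_rng(2)
n_ens, n_points = 80, 10
param = rng.normal(0.0, 1.0, n_ens)
state_sensitive = np.outer(param, np.ones(n_points)) + rng.normal(0.0, 0.3, (n_ens, n_points))
state_insensitive = rng.normal(0.0, 1.0, (n_ens, n_points))
```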

Other Challenges for Parameter Estimation

Besides the difficulties arising from identifiability issues, other challenges also exist for parameter estimation. One important problem arises from the fact that parameters in a numerical model are not 'dynamical' by definition, by which it is meant that there is no feedback from the evolving model state back to the parameters. In ensemble-based data assimilation, this creates the immediate problem of filter divergence: when the ensemble spread becomes exceedingly small, the first guess is given increasing weight by the data assimilation scheme, so that new observations have less and less influence on the analysis. Without dynamical feedback, parameter spread is destined to become small enough to lead to filter divergence unless explicit measures are taken to maintain the desired level of ensemble spread in the parameters to be estimated. A common such measure is to 'inflate' analysis variance when it drops below a threshold level.

The other aspect of the nondynamical nature of model parameters is that they are generally specified 'globally,' i.e., they are assigned the same values over the entire computational domain of the model. In real-data applications with large, three-dimensional observational datasets, this results in the overdetermination of parameters and manifests itself in the form of excessive noise in updated parameters (updated parameters that appear to follow a random walk). To counter this issue, one method, 'spatial updating,' updates the parameters first horizontally as two-dimensional arrays using covariance localization and then uses spatial averaging to obtain the updated global parameter values. Another approach is a data selection technique, which uses only the subset of observations that exhibit the largest ensemble correlations with the parameters.

When simultaneous state and parameter estimation is carried out, it has been shown in some studies that state estimation errors dominate the uncertainties of the model state during the early stages of data assimilation, so that the covariance information between the model state and parameters becomes unreliable. To remedy the situation, some studies used a delayed approach in which parameter estimation begins only after a number of data assimilation cycles, when the model state is likely to be constrained by observations and the covariances between the model state and parameters are likely to be reliable. Other studies chose not to update the state at all, instead focusing on updating the parameters, and approached the filter divergence and overdetermination issues by assimilating observations individually at their exact times. The final global parameter value is then obtained by averaging over time and ensemble members.
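The spread-maintenance measure mentioned above, conditional inflation of the parameter ensemble when its variance collapses, can be sketched as follows; the threshold value and the ensemble itself are invented for illustration:

```python
import numpy as np

def inflate_parameter_spread(param_ens, min_std):
    # Conditional covariance inflation: if the parameter ensemble spread
    # has collapsed below a threshold, rescale deviations about the mean
    # back to that threshold, leaving the ensemble mean unchanged.
    mean = param_ens.mean()
    std = param_ens.std(ddof=1)
    if std >= min_std:
        return param_ens
    return mean + (param_ens - mean) * (min_std / std)

rng = np.random.default_rng(3)
collapsed = rng.normal(0.5, 0.01, 50)   # spread far below the desired level
restored = inflate_parameter_spread(collapsed, min_std=0.1)
wide = rng.normal(0.5, 0.2, 50)         # healthy spread, left untouched
```

A fixed lower bound of this kind keeps the filter responsive to new observations over repeated cycles without perturbing the ensemble mean.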
Other challenges to successful parameter estimation pertain to the lack of knowledge of how best to sample parameter space. In most situations, suitable values of parameters are barely known, and there is very little knowledge, if any, of the bounds and the nature of the uncertainty that surrounds them. Although some studies have demonstrated that parameter uncertainty can take many shapes and forms, including non-Gaussian, multimodal distributions, such distributional assumptions are difficult to make in more realistic situations. This has generally led to the assumption of prior parameter distributions with limited information content, usually in the form of bounded uniform distributions or Gaussian distributions. However, since parameter values are generally expected to remain bounded and positive definite, Gaussian prior probability density functions (PDFs) are not always suitable choices. Approaches in the literature to address these situations have varied; methods used include logarithmic transformations of parameters to mimic lognormal PDFs, the assumption of beta distributions, which are naturally bounded, and trigonometric transformations for bounded parameters.

As the number of simultaneously estimated parameters increases, a further issue arises: effectively sampling the multidimensional parameter space with an ensemble of limited size. Though most studies sample parameters independently from their respective assumed prior PDFs, this does not necessarily guarantee that the joint parameter space is effectively sampled. One method to increase the effectiveness of sampling, the Markov chain Monte Carlo algorithm, revisits high-probability regions of the parameter space in an iterative manner. Although this technique reduces the computational burden of effective sampling by a few orders of magnitude, for sufficiently many parameters it may still be computationally infeasible with complex numerical models. As an alternative, a Latin hypercube sampling strategy has been suggested, which works in a normalized bounded space but provides independent parameter distributions with even sampling. These samples can then be transformed into any other PDF if nonuniform distributions are to be assumed.

Another challenge for parameter estimation arises from the nature of the complex and highly nonlinear relationship between individual parameters and the aggregate model response to them. In many situations, the same parameters simultaneously impact various processes, and the interactions among the processes make isolating individual impacts difficult. One proposed approach assigns multiplicative weights to the processes that contribute to the total outcome of a parameterization scheme and introduces uncertainties to these weights. The advantage of this approach is that it is a natural way of looking at model error, in the sense that the individual processes themselves are better understood physically and therefore lend themselves suitably to a subjective assessment of the misrepresentation of the parameterized aspects of the model. Furthermore, the additive nature of the processes themselves may prove advantageous in obtaining a linear model response to perturbations of the multiplicative weights.
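Latin hypercube sampling in the normalized unit cube, followed by a transformation to assumed parameter bounds, can be sketched as below; the stratify-then-shuffle construction is the standard textbook form, and the bounds are purely illustrative:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    # Latin hypercube sample in the unit hypercube: each parameter's range
    # is divided into n_samples equal strata, exactly one sample falls in
    # each stratum, and the strata are shuffled independently per parameter.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(4)
samples = latin_hypercube(20, 3, rng)

# Samples in [0, 1) can then be transformed to any assumed prior, e.g. a
# bounded parameter range [a, b], or via an inverse CDF to a nonuniform PDF.
a, b = 0.5, 2.0
bounded = a + (b - a) * samples
```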

Future Advances

Research in parameter estimation is likely to advance along several avenues. First of all, studies that systematically investigate all dimensions of parameter identifiability are needed to obtain a complete picture of the challenges and limitations facing parameter estimation. Comparisons between parameter-related model sensitivity and initial condition- or boundary condition-related model sensitivity should be made carefully, taking into account the timescales at which these various sources of sensitivity act upon the atmospheric state. For a fair comparison, it should also be noted that model sensitivity to initial or boundary conditions is the result of perturbations that are usually obtained from continuously cycling data assimilation systems that do not account for parameter uncertainty, whereas, for practical reasons, parameter perturbations can only be introduced to the most recent initial- and boundary-condition ensembles.

There is also a need for a unified vision of 'correlation localization' that places the updating of state variables and parameters on common ground. In state-only data assimilation, it is natural to visualize the influence of observations on the model state in geographical space. For global model parameters, no such natural localization space exists. Although some studies have proposed ad hoc techniques to manage this dichotomy, a systematic approach that puts the localizations for the model state and the model parameters on equal footing has yet to emerge.

Finally, a fundamental shift may be prudent: from focusing on individual parameters, whose effects on the model state are usually obscure because of the many dependent nonlinear processes, toward focusing on the actual processes that contribute to the particular subgrid-scale parameterizations. The contributions from these individual processes can be controlled by assigning multiplicative weights to their final output, with the weights themselves being estimated. This philosophy also enables parameter estimation to become a more holistic approach to countering model error. After all, it may be difficult to justify parameter estimation as a legitimate means of treating model error when the estimated parameters are usually empirical and not well observed. When the focus shifts to the processes themselves, the uncertainty can be expressed in the natural space of known physical processes, and the estimation may then effectively inform on the relative importance of individual processes under observed atmospheric conditions.
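The process-weight philosophy can be illustrated with a toy additive parameterization; the tendency values below are invented, and the point is only that the total output is linear in the multiplicative weights being estimated:

```python
import numpy as np

# Hypothetical per-process tendencies (e.g., condensation, evaporation,
# autoconversion) contributing additively to a parameterization's output.
process_tendencies = np.array([1.2, -0.4, 0.3])

def total_tendency(weights, tendencies=process_tendencies):
    # Total parameterized tendency with multiplicative process weights;
    # weights of 1.0 everywhere recover the unperturbed scheme.
    return np.dot(weights, tendencies)

baseline = total_tendency(np.ones(3))
# Perturbing one weight changes the output proportionally, which is the
# linearity advantage noted in the text.
perturbed = total_tendency(np.array([1.1, 1.0, 1.0]))
```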


The state of the art of parameter estimation in atmospheric sciences has been reviewed. Parameter identifiability, defined as the ease of finding unique solutions of the inverse problem for unknown parameters from available observations of the model state, comprises the three dimensions of observability, simplicity, and distinguishability. The major challenges in parameter estimation discussed here are the nondynamical nature of model parameters, the difficulty of updating globally specified parameters using spatially and temporally varying observations, reduced identifiability in the presence of initial and boundary condition uncertainty, the lack of knowledge of the probabilistic nature of parameter uncertainty, and the effective sampling of multidimensional parameter space. Some research directions for the near future have been suggested.

Nielsen-Gammon et al. (2010) discuss in detail the three dimensions of parameter identifiability. All three factors are evaluated there for various parameters of a PBL parameterization scheme: observability is deduced from the magnitude of ensemble standard deviations of model fields when individual parameters are perturbed, simplicity from the nature of scatter plots between model fields and individual parameters, and distinguishability from the overall level of correlations between model fields and individual parameters. Posselt and Vukicevic (2010), using a one-dimensional model of convection and cloud microphysics, employed the Markov chain Monte Carlo approach to effectively sample the high-probability regions of the parameter space spanned by multiple parameters. In that way, they were able to obtain joint parameter–state PDFs, which, in some cases, exposed relationships that were highly nonlinear and multimodal, resulting in parameter identifiability issues. Hacker et al. (2011) investigated parameter identifiability in the presence of uncertain initial and boundary conditions. The limitations to identifiability from simultaneously uncertain parameters are studied by Nielsen-Gammon et al. (2010), Posselt and Vukicevic (2010), Tong and Xue (2008b), and Aksoy et al. (2006a). To investigate sensitivity, Tong and Xue (2008a) suggested a response function and correlations between model output and uncertain parameters, whereas Aksoy et al. (2006a) used the root-mean-squared correlation metric. Aksoy et al. (2006a) introduced the method of inflating parameter variance when it drops below a threshold level (see also Tong and Xue, 2008b; Jung et al., 2010; Zhang et al., 2012). The two countermeasures for the overdetermination problem when estimating global parameters, the spatial updating method and the correlation-dependent data selection technique, were introduced by Aksoy et al. (2006a) and Tong and Xue (2008b), respectively. Jung et al. (2010) and Zhang et al. (2012) used the delayed parameter estimation approach to improve the sensitivity of the model state to multiple, simultaneously uncertain parameters. Godinez et al. (2012) chose to update only the parameters and assimilated observations individually at their exact times. To obtain prior distributions for bounded parameters, Tong and Xue (2008b) introduced logarithmic transformations, Hacker et al. (2011) applied beta distributions, and Nielsen-Gammon et al. (2010) used trigonometric transformations. The two methods to effectively sample the multiparameter space, the Markov chain Monte Carlo algorithm and the Latin hypercube sampling strategy, were suggested by Posselt and Vukicevic (2010) and Hacker et al. (2011), respectively. van Lier-Walqui et al. (2014) proposed the process approach, in which the weights of process outcomes are estimated rather than the empirical parameters themselves.




The author is grateful for the valuable inputs obtained from Drs Robert Rogers and Sim Aberson of NOAA Hurricane Research Division.

See also: Boundary Layer (Atmospheric) and Air Pollution: Modeling and Parameterization. Data Assimilation and Predictability: Data Assimilation; Ensemble Prediction; Ensemble-Based Data Assimilation. Numerical Models: Methods; Model Physics Parameterization; Parameterization of Physical Processes: Clouds; Parameterization of Physical Processes: Turbulence and Mixing; Regional Prediction Models.

Further Reading

See Navon (1998) for a comprehensive review of parameter identifiability and early literature on variational parameter estimation. Zhu and Navon (1999) provide some further technical details on the variational approach to parameter estimation. Good examples of recent studies on ensemble-based parameter estimation are Aksoy et al. (2006a,b), Tong and Xue (2008a,b), Hu et al. (2010), Jung et al. (2010), Godinez et al. (2012), Posselt and Bishop (2012), and van Lier-Walqui et al. (2014).

Aksoy, A., Zhang, F., Nielsen-Gammon, J.W., 2006a. Ensemble-based simultaneous state and parameter estimation with MM5. Geophysical Research Letters 33, L12801.
Aksoy, A., Zhang, F., Nielsen-Gammon, J.W., 2006b. Ensemble-based simultaneous state and parameter estimation in a two-dimensional sea-breeze model. Monthly Weather Review 134, 2951–2970.
Godinez, H.C., Reisner, J.M., Fierro, A.O., Guimond, S.R., Kao, J., 2012. Determining key model parameters of rapidly intensifying Hurricane Guillermo (1997) using the ensemble Kalman filter. Journal of the Atmospheric Sciences 69, 3147–3171.
Hacker, J.P., Snyder, C., Ha, S.-Y., Pocernich, M., 2011. Linear and non-linear response to parameter variations in a mesoscale model. Tellus 63A, 429–444.
Hu, X.-M., Zhang, F., Nielsen-Gammon, J.W., 2010. Ensemble-based simultaneous state and parameter estimation for treatment of mesoscale model error: a real-data study. Geophysical Research Letters 37, L08802.
Jung, Y., Xue, M., Zhang, G., 2010. Simultaneous estimation of microphysical parameters and the atmospheric state using simulated polarimetric radar data and an ensemble Kalman filter in the presence of an observation operator error. Monthly Weather Review 136, 1649–1668.
Nielsen-Gammon, J.W., Hu, X., Zhang, F., Pleim, J.E., 2010. Evaluation of planetary boundary layer scheme sensitivities for the purpose of parameter estimation. Monthly Weather Review 138, 3400–3417.
Posselt, D.J., Bishop, C.H., 2012. Nonlinear parameter estimation: comparison of an ensemble Kalman smoother with a Markov chain Monte Carlo algorithm. Monthly Weather Review 140, 1957–1974.
Posselt, D.J., Vukicevic, T., 2010. Robust characterization of model physics uncertainty for simulations of deep moist convection. Monthly Weather Review 138, 1513–1535.



Tong, M., Xue, M., 2008a. Simultaneous estimation of microphysical parameters and atmospheric state with simulated radar data and ensemble square root Kalman filter. Part I: sensitivity analysis and parameter identifiability. Monthly Weather Review 136, 1630–1648.
Tong, M., Xue, M., 2008b. Simultaneous estimation of microphysical parameters and atmospheric state with simulated radar data and ensemble square root Kalman filter. Part II: parameter estimation experiments. Monthly Weather Review 136, 1649–1668.

van Lier-Walqui, M., Vukicevic, T., Posselt, D.J., 2014. Linearization of microphysical parameterization uncertainty using multiplicative process perturbation parameters. Monthly Weather Review 142, 401–413.
Zhang, S., Liu, Z., Rosati, A., Delworth, T., 2012. A study of enhancive parameter correction with coupled data assimilation for climate estimation and prediction using a simple coupled model. Tellus 64A, 10963.