Method mfh/getFitErrors


  getFitErrors calculates the Fisher matrix approximation of the errors
  on the fit parameters.
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
  CALL:     cov = getFitErrors(func,pl)
 
  INPUTS:
          - func. The function handle. If you use the 'jacobian'
          algorithm, func must be the fit function. If you use the
          'hessian' algorithm, func must be the cost function.
 
  PARAMETERS:
          - pars. The set of parameters (pest object).
          - DerivStep. The set of derivative steps (double vector).
          - mse. Mean squared error (Chi^2) (double number).
          - algo. Algorithm used to calculate the covariance matrix. Can
          be 'jacobian' or 'hessian'.
          In case of the 'jacobian' algorithm, func must be the fit
          function (model).
          In case of the 'hessian' algorithm, func must be the cost
          function.
          For the difference between fit function and cost function see
          the remarks below.
          - dy. Errors on the y measurements. If you input dy the
          function does not compensate for the MSE.
 
  OUTPUTS:
          - out. A pest object with the error and covariance fields
          filled in.
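
  A minimal usage sketch (all names are placeholders: f is an mfh object
  holding the fit function, p a pest object from a previous fit, and dy
  the known data errors; adapt them to your own fit):

    pl  = plist('PARS', p, ...                 % fitted parameter values
                'DERIVSTEP', [1e-6; 1e-6], ... % one step per parameter
                'ALGO', 'jacobian', ...        % func is the fit function
                'DY', dy);                     % known errors: no MSE scaling
    out = getFitErrors(f, pl);                 % pest with errors/covariance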
 
  IMPORTANT REMARKS
 
  GENERAL REMARKS ON THE HESSIAN MATRIX AND ERROR CALCULATION.
 
  Care must be taken in defining the cost function from which we
  calculate the errors. The Hessian matrix calculation aims to provide
  the curvature of the cost function around its minimum (equivalently,
  of the log-likelihood around its maximum). The formalism is perfectly
  consistent under the Gaussian assumption, for which the inverse of the
  expected covariance (the Fisher information) is:

  F(i,j) = -E(d^2 lLike / (dxi dxj))   i,j = 1,...,n

  where E is the expectation value, lLike is the log-likelihood, and xi
  and xj are parameters, with i and j running over the parameter
  indices.
  In the Gaussian assumption:
 
  lLike = -(1/2)*sum_k(((yk - f(x1,...,xn))/sk)^2) + const.
 
  Here yk are the data, xi the parameters, sk the standard error on the
  datum yk, and f(x1,...,xn) the fit function (model). As is customary
  in least-squares fits, we minimize -2*lLike, i.e.
 
  SE(x1,...,xn) = sum_k(((yk - f(x1,...,xn))/sk)^2)
 
  Assuming SE is the cost function, we have:
  
  H(i,j) = d^2 SE / (dxi dxj)   i,j = 1,...,n
 
  where H is the Hessian matrix. Following the definition of F as the
  inverse of the expected covariance we have:
 
  C = 2*inv(H)
 
  Where C is the expected covariance matrix of the fit parameters.
  It is important to follow the definition given above in order to
  obtain a proper error. If a cost function different from SE is used,
  the result should be adapted accordingly.
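
  As a sanity check of the relation C = 2*inv(H), the following sketch
  (plain MATLAB, all names illustrative) builds the Hessian of SE for a
  straight-line model with known data errors and compares 2*inv(H) with
  the textbook weighted least-squares covariance:

    x = (1:100)';             % abscissa
    s = 0.5*ones(size(x));    % known standard errors sk on the data
    J = [x, ones(size(x))];   % Jacobian of f(a,b) = a*x + b w.r.t. [a b]
    W = diag(1./s.^2);        % weight matrix

    H = 2*(J'*W*J);           % Hessian of SE (exact for a linear model)
    C = 2*inv(H);             % expected covariance of the fit parameters

    Cls = inv(J'*W*J);        % standard weighted least-squares covariance
    max(abs(C(:) - Cls(:)))   % ~0 up to round-off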
 
  DIFFERENCE BETWEEN FIT FUNCTION AND COST FUNCTION
 
  The fit function is the fit model, while the cost function is the
  function minimized (or maximized) in the fit. As an example, in
  least-squares fits under the Gaussian assumption we have:
 
  cost function = sum_k(((yk - f(x1,...,xn))/sk)^2)
 
  fit function = f(x1,...,xn)
 
  Where yk are the data samples and xi are the fit parameters with
  i=1,...,n.
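
  For instance, for a straight-line model the two objects could be coded
  as anonymous functions as follows (names illustrative; x, y and s are
  data vectors already in the workspace):

    fitFcn  = @(p) p(1)*x + p(2);                 % fit function, f(x1,...,xn)
    costFcn = @(p) sum(((y - fitFcn(p))./s).^2);  % cost function, SE(x1,...,xn)

  With the 'jacobian' algorithm you pass the fit function to
  getFitErrors; with the 'hessian' algorithm you pass the cost function.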
 
  COST FUNCTION AND MEAN SQUARED ERROR
 
  The mean squared error is the average of the squares of the fit
  residuals, with the average corrected for the number of fit
  parameters. Following the definitions introduced above, the cost
  function or squared error is:
 
  SE(x1,...,xn) = sum_k(((yk - f(x1,...,xn))/sk)^2)
 
  While the MSE is:
 
  MSE = SE/(K-n)
 
  Where K is the number of data points and n is the number of
  parameters. The MSE is used in the error calculation to compensate for
  unknown data errors sk: in that case an average data error can be
  estimated with the MSE. This provides the correct estimate of the data
  error only if it is constant; in general it just provides a
  coefficient for error adjustment.
  In case you know the data errors sk it is mandatory to:
  1) Define the cost function including that information, i.e.
     SE(x1,...,xn) = sum_k(((yk - f(x1,...,xn))/sk)^2)
  2) Input the sk vector in the input field dy. In that case the
     function will not compensate for the MSE.
 
  If you use a cost function with no error information, i.e.

  CF(x1,...,xn) = sum_k((yk - f(x1,...,xn))^2)

  then the errors are calculated assuming sk = 1 for each k. In that
  case it is good practice at least to try to compensate for the MSE.
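
  A sketch of that compensation when the data errors are unknown (all
  names illustrative; C0 stands for the covariance obtained with sk = 1):

    K   = numel(y);             % number of data points
    n   = numel(pBest);         % number of fit parameters
    res = y - fitFcn(pBest);    % residuals at the best-fit parameters
    mse = sum(res.^2)/(K - n);  % MSE with sk = 1
    C   = mse * C0;             % covariance rescaled by the MSE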
 
  COST FUNCTION DEFINITION
 
  Never use an averaged cost function if you want to calculate the
  errors with the Hessian option. An example of a non-averaged cost
  function is the squared error defined above:
 
  SE(x1,...,xn) = sum_k(((yk - f(x1,...,xn))/sk)^2)
 
  Its averaged version is the MSE:
 
  MSE = SE/(K-n)
 
  In order to have a meaningful error estimate with the Hessian option,
  the input 'func' must be the non-averaged one, i.e. SE.
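
  The reason is that the Hessian scales linearly with the cost function,

  H(MSE) = H(SE)/(K-n)   =>   2*inv(H(MSE)) = (K-n)*2*inv(H(SE))

  so feeding the averaged cost function to the 'hessian' algorithm would
  inflate the covariance by a factor (K-n), and the errors by sqrt(K-n).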
 
  Parameters Description
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Method Details
  Access          public
  Defining Class  mfh
  Sealed          0
  Static          0

Parameter Description

Set: Default (no description)

  Key        Default Value  Options                 Description
  PARS       []             none                    The set of parameter values. A pest object.
  DERIVSTEP  []             none                    The set of derivative steps. A NumParams x 1 array.
  MSE        1              none                    Fit mean squared error.
  ALGO       'jacobian'     'jacobian', 'hessian'   Algorithm used to calculate the covariance matrix.
  DY         []             none                    Errors on y measurements. If you input dy the function does not compensate for the MSE.

Example

plist('PARS', [], 'DERIVSTEP', [], 'MSE', 1, 'ALGO', 'jacobian', 'DY', [])


Some information about the method mfh/getFitErrors is listed below:
  Class name               mfh
  Method name              getFitErrors
  Category                 Signal Processing
  Package name             ltpda
  VCS Version              967b0eec0dece803a81af8ef54ad2f8c784b20b2
  Min input args           1
  Max input args           -1
  Min output args          1
  Max output args          -1
  Can be used as modifier  1
  Supported numeric types  {'double'}




©LTP Team