Re-orthonormalization

Split J and GradJ calculation
"da_calculate_j" routine is left unchanged: it still calculates the cost function "J" AND its gradient "GradJ". A new routine "da_calculate_gradj" is introduced, which only calculates "GradJ". This makes the code much simpler and avoids redundancies of code.

Calculation of Cost Function
The cost function is derived directly from the gradient by exploiting the fully linear (exactly quadratic) character of the inner-loop problem. The result is identical to the former full cost-function calculation, but it brings a significant speed-up when the "CALCULATE_CG_COST_FN" namelist parameter is activated. For more information, see Tshimanga et al. (QJRMS, 2008).
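
To sketch why this works (a minimal Python example under assumed notation, not the WRFDA code): for a quadratic cost J(v) = 1/2 v'Av - b'v with gradient g(v) = Av - b, the identity J(v) = 1/2 v'(g(v) + g(0)) recovers the cost from gradients the minimizer already computes, so no separate cost evaluation is needed:

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50.0 * np.eye(50)   # symmetric positive definite, playing the
                                      # role of B^-1 + H'R^-1 H in the inner loop
    b = rng.standard_normal(50)

    def grad(v):
        return A @ v - b              # g(v) = Av - b

    def j_direct(v):
        return 0.5 * v @ A @ v - b @ v

    def j_from_grad(v):
        # Since g(0) = -b, 0.5 * v'(g(v) + g(0)) = 0.5 * v'Av - b'v = J(v).
        return 0.5 * v @ (grad(v) + grad(np.zeros_like(v)))

    v = rng.standard_normal(50)
    assert np.isclose(j_direct(v), j_from_grad(v))   # equal up to round-off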

Orthonormalization of gradient
If the "ORTHONORM_GRADIENT" namelist parameter is activated, the gradient vectors are stored during the Conjugate Gradient for each iteration and used to re-orthogonalize the new gradient. This requires extra storage of large vectors (each one being the size of the control variable) but results in a better convergence of the Conjugate Gradient after around 20 iterations.

Modified routines
var/da/da_minimisation/da_minimise_cg.inc
var/da/da_minimisation/da_calculate_gradj.inc
