[ot-users] Evaluations vs. iterations - FORM

regis lebrun regis_anne.lebrun_dutfoy at yahoo.fr
Wed Mar 15 15:00:08 CET 2017

Here is the reason for the final evaluation:
+ The search for the design point should be done using an inequality constraint:
u_opt = argmin |u|^2 with f(u) >= s
+ In fact we assume that the constraint is active at the optimum, so the problem is replaced by:
u_opt = argmin |u|^2 with f(u) = s
+ The resulting approximation of the probability of the event is then Phi(-|u_opt|) if the origin of the standard space is NOT in the event (in which case both optimization problems give the same result), or Phi(|u_opt|) if the origin of the standard space is in the event (in which case the problems differ: the solution of the first one is 0, and only the second one gives useful information for the approximation)
-> so we have to check whether the origin of the standard space is in the event or not, and this test is done AFTER the optimization step. If you have activated the cache mechanism of your function, this evaluation has no overhead most of the time, since the most usual starting point is precisely the origin of the standard space. Note that this point corresponds to the point whose coordinates are the medians of the marginal distributions, which also equal the means for symmetric marginal distributions, for example.
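The origin test and the sign of the FORM approximation can be sketched in a few lines of plain Python (this is not OpenTURNS code; the limit-state function f, the threshold s and the design point are made-up placeholders), including a toy cache that shows why the final evaluation at the origin is usually free:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical limit-state function in the standard space; the event is {f(u) >= s}.
def f(u):
    return u[0] + u[1]

s = 2.0
cache = {}  # toy stand-in for the function cache mentioned above

def f_cached(u):
    key = tuple(u)
    if key not in cache:
        cache[key] = f(u)
    return cache[key]

origin = (0.0, 0.0)
f_cached(origin)      # usual starting point: evaluated once, then cached
u_opt = (1.0, 1.0)    # pretend design point returned by the solver

# Final check AFTER optimization: is the origin inside the event?
origin_in_event = f_cached(origin) >= s   # cache hit, no new evaluation
beta = math.sqrt(sum(ui**2 for ui in u_opt))
p_form = phi(beta) if origin_in_event else phi(-beta)
```

Here the origin is not in the event (f(0, 0) = 0 < 2), so the usual approximation Phi(-|u_opt|) applies, and the cache still holds a single entry after the final check.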

If you are using the COBYLA solver, the error control and stopping criterion are delegated to the solver. The error history you get afterwards is rebuilt, as a post-processing step, from the optimization result and the history mechanism attached to the objective and constraint functions. There is no guarantee at all that COBYLA used exactly the same values to decide when to stop.

Another technical detail: the evaluations driven by COBYLA do not all serve the same purpose. Some of them are used to build a linear approximation that substitutes for a gradient, and the convergence check is not performed on these values; the others are specifically there to monitor convergence. This can explain the behavior you observe.

Note that if you use other solvers (AbdoRackwitz, or those from NLopt), the convergence management will behave differently, but there will ALWAYS be a final evaluation at the origin of the standard space.
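To see why evaluations outnumber iterations, here is a toy stand-in (plain Python, not OpenTURNS): a gradient-based solver that approximates the gradient by forward finite differences spends dim + 1 evaluations per iteration, and one extra evaluation is spent at the origin at the end, so the evaluation count always exceeds the iteration count:

```python
calls = 0  # counts every evaluation of the objective

def g(u):
    global calls
    calls += 1
    return sum(ui**2 for ui in u)

dim = 2
iterations = 10
u = [3.0, 4.0]
h = 1e-6
for _ in range(iterations):
    g0 = g(u)                            # 1 evaluation at the current point
    grad = []
    for i in range(dim):
        up = list(u)
        up[i] += h
        grad.append((g(up) - g0) / h)    # dim forward-difference evaluations
    u = [ui - 0.1 * gi for ui, gi in zip(u, grad)]

g([0.0] * dim)  # final evaluation at the origin of the standard space
# total evaluations = iterations * (dim + 1) + 1, larger than the iteration count
```

With 10 iterations in dimension 2 this makes 31 evaluations, which is the kind of mismatch between the iteration count and the evaluation count asked about below.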



From: Anita Laera <anita.laera87 at gmail.com>
To: Julien Schueller <schueller at phimeca.com>
Cc: users <users at openturns.org>
Sent: Wednesday, 15 March 2017, 13:04
Subject: Re: [ot-users] Evaluations vs. iterations - FORM

Hi Julien,
thank you. I understand the reason, especially in the case of a gradient-based optimization algorithm.
I am now using Cobyla and I would like to understand why I reach convergence at iteration 65 (at least counting the number of rows in the error history), which corresponds to the result of evaluation 65, but after that I have 7 more evaluations, and the last one is a calculation with all mean parameters (i.e. my starting point).
Thank you!

2017-03-13 17:15 GMT+01:00 Julien Schueller <schueller at phimeca.com>:

>Short answer: the evaluations per iteration depend on the solver used and the dimension.
>See http://trac.openturns.org/ticket/351
>From: "Anita Laera" <anita.laera87 at gmail.com>
>>To: "users" <users at openturns.org>
>>Sent: Monday, 13 March 2017, 16:41:14
>>Subject: [ot-users] Evaluations vs. iterations - FORM
>Hi all, 
>>I am performing a FORM analysis and I set the maximum number of iterations to 100. On many occasions I see that the number of evaluations performed is larger than 100.
>>My questions are two:
>>1- What does the number of evaluations represent, and how does it differ from the number of iterations?
>>2- I can set the maximum number of iterations as a convergence criterion, but is there a way to control the number of evaluations?
>>Thank you for your time!
>>_______________________________________________
>>OpenTURNS users mailing list
>>users at openturns.org
>>http://openturns.org/mailman/listinfo/users
>Julien Schueller
>Phimeca Engineering
