NLOPT_MINIMIZE(3)          NLopt programming manual          NLOPT_MINIMIZE(3)

NAME

       nlopt_minimize - Minimize a multivariate nonlinear function


SYNOPSIS

       #include <nlopt.h>

       nlopt_result nlopt_minimize(nlopt_algorithm algorithm,
                                   int n,
                                   nlopt_func f,
                                   void* f_data,
                                   const double* lb,
                                   const double* ub,
                                   double* x,
                                   double* minf,
                                   double minf_max,
                                   double ftol_rel,
                                   double ftol_abs,
                                   double xtol_rel,
                                   const double* xtol_abs,
                                   int maxeval,
                                   double maxtime);

       You should link the resulting program with the linker flags
       -lnlopt -lm on Unix.


DESCRIPTION

       nlopt_minimize() attempts to minimize a nonlinear function f of
       n design variables using the specified algorithm.  The minimum
       function value found is returned in minf, with the
       corresponding design variable values returned in the array x of
       length n.  The input values in x should be a starting guess for
       the optimum.  The inputs lb and ub are arrays of length n
       containing lower and upper bounds, respectively, on the design
       variables x.  The other parameters specify stopping criteria
       (tolerances, the maximum number of function evaluations,
       etcetera) and other information as described in more detail
       below.  The return value is an integer code indicating success
       (positive) or failure (negative), as described below.

       By changing the parameter algorithm among several predefined
       constants described below, one can switch easily between a
       variety of minimization algorithms.  Some of these algorithms
       require the gradient (derivatives) of the function to be
       supplied via f, and other algorithms do not require
       derivatives.  Some of the algorithms attempt to find a global
       minimum within the given bounds, and others find only a local
       minimum.

       The nlopt_minimize function is a wrapper around several
       free/open-source minimization packages, as well as some new
       implementations of published optimization algorithms.  You
       could, of course, compile and call these packages separately,
       and in some cases this will provide greater flexibility than is
       available via the nlopt_minimize interface.  However, depending
       upon the specific function being minimized, the different
       algorithms will vary in effectiveness.  The intent of
       nlopt_minimize is to allow you to quickly switch between
       algorithms in order to experiment with them for your problem,
       by providing a simple unified interface to these subroutines.

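       As an illustration of the calling convention, here is a minimal
       sketch (the objective myfunc, of the form described in the next
       section, is an illustrative name and not part of NLopt):

             double myfunc(int n, const double* x,
                           double* grad, void* f_data);  /* see below */

             double lb[2] = { -1.0, -1.0 };  /* lower bounds */
             double ub[2] = {  1.0,  1.0 };  /* upper bounds */
             double x[2]  = {  0.5,  0.5 };  /* starting guess */
             double minf;                    /* minimum value, on output */
             nlopt_result ret = nlopt_minimize(NLOPT_LN_SBPLX, 2,
                 myfunc, NULL, lb, ub, x, &minf,
                 -HUGE_VAL,   /* minf_max: disabled */
                 0.0, 0.0,    /* ftol_rel, ftol_abs: disabled */
                 1e-4, NULL,  /* xtol_rel active, xtol_abs: disabled */
                 0, 0.0);     /* maxeval, maxtime: disabled */
             if (ret < 0) { /* ... handle failure (see RETURN VALUE) ... */ }
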

OBJECTIVE FUNCTION

       nlopt_minimize() minimizes an objective function f of the form:

             double f(int n,
                      const double* x,
                      double* grad,
                      void* f_data);

       The return value should be the value of the function at the
       point x, where x points to an array of length n of the design
       variables.  The dimension n is identical to the one passed to
       nlopt_minimize().

       In addition, if the argument grad is not NULL, then grad points
       to an array of length n which should (upon return) be set to
       the gradient of the function with respect to the design
       variables at x.  That is, grad[i] should upon return contain
       the partial derivative df/dx[i], for 0 <= i < n, if grad is
       non-NULL.  Not all of the optimization algorithms (below) use
       the gradient information: for algorithms listed as
       "derivative-free," the grad argument will always be NULL and
       need never be computed.  (For algorithms that do use gradient
       information, however, grad may still be NULL for some calls.)

       The f_data argument is the same as the one passed to
       nlopt_minimize(), and may be used to pass any additional data
       through to the function.  (That is, it may be a pointer to some
       caller-defined data structure/type containing information your
       function needs, which you convert from void* by a typecast.)

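       For example, here is a sketch of an objective computing the sum
       of a[i]*x[i]*x[i], with the coefficients a passed via f_data
       (the names myfunc and my_data are illustrative, not part of
       NLopt):

             typedef struct { double* a; } my_data;  /* caller-defined */

             double myfunc(int n, const double* x,
                           double* grad, void* f_data)
             {
                 my_data* d = (my_data*) f_data;  /* typecast from void* */
                 double val = 0.0;
                 int i;
                 for (i = 0; i < n; ++i) {
                     val += d->a[i] * x[i] * x[i];
                     if (grad)  /* set the gradient only when requested */
                         grad[i] = 2.0 * d->a[i] * x[i];
                 }
                 return val;
             }
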

CONSTRAINTS

       Most of the algorithms in NLopt are designed for minimization
       of functions with simple bound constraints on the inputs.  That
       is, the input vectors x[i] are constrained to lie in a
       hyperrectangle lb[i] <= x[i] <= ub[i] for 0 <= i < n, where lb
       and ub are the two arrays passed to nlopt_minimize().

       However, a few of the algorithms support partially or totally
       unconstrained optimization, as noted below, where a (totally or
       partially) unconstrained design variable is indicated by a
       lower bound equal to -Inf and/or an upper bound equal to +Inf.
       Here, Inf is the IEEE-754 floating-point infinity, which (in
       ANSI C99) is represented by the macro INFINITY in math.h.
       Alternatively, for older C versions you may also use the macro
       HUGE_VAL (also in math.h).

       With some of the algorithms, especially those that do not
       require derivative information, a simple (but not especially
       efficient) way to implement arbitrary nonlinear constraints is
       to return Inf (see above) whenever the constraints are violated
       by a given input x.  More generally, there are various ways to
       implement constraints by adding "penalty terms" to your
       objective function, which are described in the optimization
       literature.  A much more efficient way to specify nonlinear
       constraints is provided by the nlopt_minimize_constrained()
       function (described in its own manual page).

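       For example, here is a sketch that leaves x[1] unbounded above
       and crudely imposes the nonlinear constraint x[0]*x[0] +
       x[1]*x[1] <= 1 for a derivative-free algorithm (myfunc is again
       an illustrative name; for efficiency, prefer
       nlopt_minimize_constrained()):

             #include <math.h>  /* HUGE_VAL */

             double lb[2] = { 0.0, 0.0 };
             double ub[2] = { 1.0, HUGE_VAL };  /* x[1] unbounded above */

             double myfunc(int n, const double* x,
                           double* grad, void* f_data)
             {
                 if (x[0]*x[0] + x[1]*x[1] > 1.0)
                     return HUGE_VAL;  /* constraint violated */
                 return x[0] - x[1];   /* the actual objective */
             }
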

ALGORITHMS

       The algorithm parameter specifies the optimization algorithm
       (for more detail on these, see the README files in the
       source-code subdirectories), and can take on any of the
       following constant values.  Constants with _G{N,D}_ in their
       names refer to global optimization methods, whereas _L{N,D}_
       refers to local optimization methods (that try to find a local
       minimum starting from the starting guess x).  Constants with
       _{G,L}N_ refer to non-gradient (derivative-free) algorithms
       that do not require the objective function to supply a
       gradient, whereas _{G,L}D_ refers to derivative-based
       algorithms that require the objective function to supply a
       gradient.  (Especially for local optimization, derivative-based
       algorithms are generally superior to derivative-free ones: the
       gradient is good to have if you can compute it cheaply, e.g.
       via an adjoint method.)

       NLOPT_GN_DIRECT_L
              Perform a global (G) derivative-free (N) optimization
              using the DIRECT-L search algorithm by Jones et al. as
              modified by Gablonsky et al. to be more weighted towards
              local search.  Does not support unconstrained
              optimization.  There are also several other variants of
              the DIRECT algorithm that are supported:
              NLOPT_GN_DIRECT, which is the original DIRECT algorithm;
              NLOPT_GN_DIRECT_L_RAND, a slightly randomized version of
              DIRECT-L that may be better in high-dimensional search
              spaces; NLOPT_GN_DIRECT_NOSCAL,
              NLOPT_GN_DIRECT_L_NOSCAL, and
              NLOPT_GN_DIRECT_L_RAND_NOSCAL, which are versions of
              DIRECT where the dimensions are not rescaled to a unit
              hypercube (which means that dimensions with larger
              bounds are given more weight).

       NLOPT_GN_ORIG_DIRECT_L
              A global (G) derivative-free optimization using the
              DIRECT-L algorithm as above, along with
              NLOPT_GN_ORIG_DIRECT which is the original DIRECT
              algorithm.  Unlike NLOPT_GN_DIRECT_L above, these two
              algorithms refer to code based on the original Fortran
              code of Gablonsky et al., which has some hard-coded
              limitations on the number of subdivisions etc. and does
              not support all of the NLopt stopping criteria, but on
              the other hand supports arbitrary nonlinear constraints
              as described above.

       NLOPT_GD_STOGO
              Global (G) optimization using the StoGO algorithm by
              Madsen et al.  StoGO exploits gradient information (D)
              (which must be supplied by the objective) for its local
              searches, and performs the global search by a
              branch-and-bound technique.  Only bound-constrained
              optimization is supported.  There is also another
              variant of this algorithm, NLOPT_GD_STOGO_RAND, which is
              a randomized version of the StoGO search scheme.  The
              StoGO algorithms are only available if NLopt is compiled
              with C++ enabled, and should be linked via -lnlopt_cxx
              (via a C++ compiler, in order to link the C++ standard
              libraries).

       NLOPT_LN_NELDERMEAD
              Perform a local (L) derivative-free (N) optimization,
              starting at x, using the Nelder-Mead simplex algorithm,
              modified to support bound constraints.  Nelder-Mead,
              while popular, is known to occasionally fail to converge
              for some objective functions, so it should be used with
              caution.  Anecdotal evidence, on the other hand,
              suggests that it works fairly well for discontinuous
              objectives.  See also NLOPT_LN_SBPLX below.

       NLOPT_LN_SBPLX
              Perform a local (L) derivative-free (N) optimization,
              starting at x, using an algorithm based on the Subplex
              algorithm of Rowan et al., which is an improved variant
              of Nelder-Mead (above).  Our implementation does not use
              Rowan's original code, and has some minor modifications
              such as explicit support for bound constraints.  (Like
              Nelder-Mead, Subplex often works well in practice, even
              for discontinuous objectives, but there is no rigorous
              guarantee that it will converge.)  Nonlinear constraints
              can be crudely supported by returning +Inf when the
              constraints are violated, as explained above.

       NLOPT_LN_PRAXIS
              Local (L) derivative-free (N) optimization using the
              principal-axis method, based on code by Richard Brent.
              Designed for unconstrained optimization, although bound
              constraints are supported too (via the inefficient
              method of returning +Inf when the constraints are
              violated).

       NLOPT_LD_LBFGS
              Local (L) gradient-based (D) optimization using the
              limited-memory BFGS (L-BFGS) algorithm.  (The objective
              function must supply the gradient.)  Unconstrained
              optimization is supported in addition to simple bound
              constraints (see above).  Based on an implementation by
              Luksan et al.

       NLOPT_LD_VAR2
              Local (L) gradient-based (D) optimization using a
              shifted limited-memory variable-metric method based on
              code by Luksan et al., supporting both unconstrained and
              bound-constrained optimization.  NLOPT_LD_VAR2 uses a
              rank-2 method, while NLOPT_LD_VAR1 is another variant
              using a rank-1 method.

       NLOPT_LD_TNEWTON_PRECOND_RESTART
              Local (L) gradient-based (D) optimization using an
              LBFGS-preconditioned truncated Newton method with
              steepest-descent restarting, based on code by Luksan et
              al., supporting both unconstrained and bound-constrained
              optimization.  There are several other variants of this
              algorithm: NLOPT_LD_TNEWTON_PRECOND (same without
              restarting), NLOPT_LD_TNEWTON_RESTART (same without
              preconditioning), and NLOPT_LD_TNEWTON (same without
              restarting or preconditioning).

       NLOPT_GN_CRS2_LM
              Global (G) derivative-free (N) optimization using the
              controlled random search (CRS2) algorithm of Price, with
              the "local mutation" (LM) modification suggested by
              Kaelo and Ali.

       NLOPT_GD_MLSL_LDS, NLOPT_GN_MLSL_LDS
              Global (G) derivative-based (D) or derivative-free (N)
              optimization using the multi-level single-linkage (MLSL)
              algorithm with a low-discrepancy sequence (LDS).  This
              algorithm executes a quasi-random (LDS) sequence of
              local searches, with a clustering heuristic to avoid
              multiple local searches for the same local minimum.  The
              local search uses the derivative/nonderivative algorithm
              set by nlopt_set_local_search_algorithm (currently
              defaulting to NLOPT_LD_MMA and NLOPT_LN_COBYLA for
              derivative/nonderivative searches, respectively).  There
              are also two other variants, NLOPT_GD_MLSL and
              NLOPT_GN_MLSL, which use pseudo-random numbers (instead
              of an LDS) as in the original MLSL algorithm.

       NLOPT_LD_MMA
              Local (L) gradient-based (D) optimization using the
              method of moving asymptotes (MMA), or rather a refined
              version of the algorithm as published by Svanberg
              (2002).  (NLopt uses an independent
              free-software/open-source implementation of Svanberg's
              algorithm.)  The NLOPT_LD_MMA algorithm supports both
              bound-constrained and unconstrained optimization, and
              also supports an arbitrary number (m) of nonlinear
              constraints via the nlopt_minimize_constrained()
              function.

       NLOPT_LN_COBYLA
              Local (L) derivative-free (N) optimization using the
              COBYLA algorithm of Powell (Constrained Optimization BY
              Linear Approximations).  The NLOPT_LN_COBYLA algorithm
              supports both bound-constrained and unconstrained
              optimization, and also supports an arbitrary number (m)
              of nonlinear constraints via the
              nlopt_minimize_constrained() function.

       NLOPT_LN_NEWUOA_BOUND
              Local (L) derivative-free (N) optimization using a
              variant of the NEWUOA algorithm of Powell, based on
              successive quadratic approximations of the objective
              function.  We have modified the algorithm to support
              bound constraints.  The original NEWUOA algorithm is
              also available, as NLOPT_LN_NEWUOA, but this algorithm
              ignores the bound constraints lb and ub, and so it
              should only be used for unconstrained problems.

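       Because the algorithm is simply the first argument to
       nlopt_minimize(), experimenting with several of the constants
       above can be a short loop, as in this sketch (myfunc, lb, and
       ub as in the earlier examples; printf requires stdio.h):

             nlopt_algorithm algs[] = { NLOPT_LN_NELDERMEAD,
                                        NLOPT_LN_SBPLX,
                                        NLOPT_GN_DIRECT_L };
             int i;
             for (i = 0; i < 3; ++i) {
                 double x[2] = { 0.5, 0.5 };  /* reset starting guess */
                 double minf;
                 nlopt_result ret = nlopt_minimize(algs[i], 2, myfunc,
                     NULL, lb, ub, x, &minf, -HUGE_VAL, 0.0, 0.0,
                     1e-4, NULL, 1000, 0.0);
                 if (ret > 0)  /* positive return values indicate success */
                     printf("algorithm #%d: minf = %g\n", i, minf);
             }
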

STOPPING CRITERIA

       Multiple stopping criteria for the optimization are supported,
       as specified by the following arguments to nlopt_minimize().
       The optimization halts whenever any one of these criteria is
       satisfied.  In some cases, the precise interpretation of the
       stopping criterion depends on the optimization algorithm above
       (although we have tried to make them as consistent as
       reasonably possible), and some algorithms do not support all of
       the stopping criteria.

       minf_max
              Stop when a function value less than or equal to
              minf_max is found.  Set to -Inf or NaN (see constraints
              section above) to disable.

       ftol_rel
              Relative tolerance on function value: stop when an
              optimization step (or an estimate of the minimum)
              changes the function value by less than ftol_rel
              multiplied by the absolute value of the function value.
              (If there is any chance that your minimum function value
              is close to zero, you might want to set an absolute
              tolerance with ftol_abs as well.)  Disabled if
              non-positive.

       ftol_abs
              Absolute tolerance on function value: stop when an
              optimization step (or an estimate of the minimum)
              changes the function value by less than ftol_abs.
              Disabled if non-positive.

       xtol_rel
              Relative tolerance on design variables: stop when an
              optimization step (or an estimate of the minimum)
              changes every design variable by less than xtol_rel
              multiplied by the absolute value of the design variable.
              (If there is any chance that an optimal design variable
              is close to zero, you might want to set an absolute
              tolerance with xtol_abs as well.)  Disabled if
              non-positive.

       xtol_abs
              Pointer to an array of length n giving absolute
              tolerances on design variables: stop when an
              optimization step (or an estimate of the minimum)
              changes every design variable x[i] by less than
              xtol_abs[i].  Disabled if non-positive, or if xtol_abs
              is NULL.

       maxeval
              Stop when the number of function evaluations exceeds
              maxeval.  (This is not a strict maximum: the number of
              function evaluations may exceed maxeval slightly,
              depending upon the algorithm.)  Disabled if
              non-positive.

       maxtime
              Stop when the optimization time (in seconds) exceeds
              maxtime.  (This is not a strict maximum: the time may
              exceed maxtime slightly, depending upon the algorithm
              and on how slow your function evaluation is.)  Disabled
              if non-positive.

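       For instance, this sketch of a call stops only after 500
       function evaluations or 10 seconds, whichever comes first, with
       every other criterion disabled:

             nlopt_minimize(algorithm, n, f, f_data, lb, ub, x, &minf,
                            -HUGE_VAL,  /* minf_max: disabled */
                            0.0,        /* ftol_rel: disabled */
                            0.0,        /* ftol_abs: disabled */
                            0.0,        /* xtol_rel: disabled */
                            NULL,       /* xtol_abs: disabled */
                            500,        /* maxeval */
                            10.0);      /* maxtime, in seconds */
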

RETURN VALUE

       The value returned is one of the following enumerated constants.

   Successful termination (positive return values):
       NLOPT_SUCCESS
              Generic success return value.

       NLOPT_MINF_MAX_REACHED
              Optimization stopped because minf_max (above) was
              reached.

       NLOPT_FTOL_REACHED
              Optimization stopped because ftol_rel or ftol_abs
              (above) was reached.

       NLOPT_XTOL_REACHED
              Optimization stopped because xtol_rel or xtol_abs
              (above) was reached.

       NLOPT_MAXEVAL_REACHED
              Optimization stopped because maxeval (above) was
              reached.

       NLOPT_MAXTIME_REACHED
              Optimization stopped because maxtime (above) was
              reached.

   Error codes (negative return values):
       NLOPT_FAILURE
              Generic failure code.

       NLOPT_INVALID_ARGS
              Invalid arguments (e.g. lower bounds are bigger than
              upper bounds, an unknown algorithm was specified,
              etcetera).

       NLOPT_OUT_OF_MEMORY
              Ran out of memory.

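       A sketch of dispatching on the return value (ret and minf as in
       the earlier examples; printf and fprintf require stdio.h):

             if (ret > 0) {  /* all positive codes indicate success */
                 printf("minimum %g found\n", minf);
             } else if (ret == NLOPT_INVALID_ARGS) {
                 fprintf(stderr, "invalid arguments\n");
             } else {
                 fprintf(stderr, "optimization failed (code %d)\n",
                         (int) ret);
             }
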

PSEUDORANDOM NUMBERS

       For stochastic optimization algorithms, we use pseudorandom
       numbers generated by the Mersenne Twister algorithm, based on
       code from Makoto Matsumoto.  By default, the seed for the
       random numbers is generated from the system time, so that they
       will be different each time you run the program.  If you want
       to use deterministic random numbers, you can set the seed by
       calling:

                   void nlopt_srand(unsigned long seed);

       Some of the algorithms also support using low-discrepancy
       sequences (LDS), sometimes known as quasi-random numbers.
       NLopt uses the Sobol LDS, which is implemented for up to 1111
       dimensions.

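       For example, to make runs of a stochastic algorithm such as
       NLOPT_GN_CRS2_LM reproducible, you might seed once at program
       startup (the seed value is arbitrary):

             nlopt_srand(12345);
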

AUTHORS

       Written by Steven G. Johnson.

       Copyright (c) 2007 Massachusetts Institute of Technology.


SEE ALSO

       nlopt_minimize_constrained(3)


MIT                               2007-08-23                 NLOPT_MINIMIZE(3)