NLOPT_MINIMIZE_CONSTRAINED(3)  NLopt programming manual  NLOPT_MINIMIZE_CONSTRAINED(3)

NAME

       nlopt_minimize_constrained - Minimize a multivariate nonlinear function
       subject to nonlinear constraints

SYNOPSIS

       #include <nlopt.h>

       nlopt_result nlopt_minimize_constrained(nlopt_algorithm algorithm,
                                   int n,
                                   nlopt_func f,
                                   void* f_data,
                                   int m,
                                   nlopt_func fc,
                                   void* fc_data,
                                   ptrdiff_t fc_datum_size,
                                   const double* lb,
                                   const double* ub,
                                   double* x,
                                   double* minf,
                                   double minf_max,
                                   double ftol_rel,
                                   double ftol_abs,
                                   double xtol_rel,
                                   const double* xtol_abs,
                                   int maxeval,
                                   double maxtime);

       You should link the resulting program with the linker flags
       -lnlopt -lm on Unix.

DESCRIPTION

       nlopt_minimize_constrained() attempts to minimize a nonlinear  function
       f  of  n design variables, subject to m nonlinear constraints described
       by the function fc (see below), using  the  specified  algorithm.   The
       minimum  function value found is returned in minf, with the correspond-
       ing design variable values returned in the array x of  length  n.   The
       input  values  in  x  should  be a starting guess for the optimum.  The
       inputs lb and ub are arrays of length  n  containing  lower  and  upper
       bounds,  respectively, on the design variables x.  The other parameters
       specify stopping criteria (tolerances, the maximum number  of  function
       evaluations,  etcetera)  and  other  information  as  described in more
       detail below.  The return value is an integer code  indicating  success
       (positive) or failure (negative), as described below.

       By  changing the parameter algorithm among several predefined constants
       described below, one can switch easily between a variety  of  minimiza-
       tion algorithms.  Some of these algorithms require the gradient (deriv-
       atives) of the function to be supplied via f, and other  algorithms  do
       not  require  derivatives.   Some  of  the algorithms attempt to find a
       global minimum within the given bounds, and others find  only  a  local
       minimum.   Most  of the algorithms only handle the case where m is zero
       (no explicit nonlinear constraints); the only algorithms that currently
       support positive m are NLOPT_LD_MMA and NLOPT_LN_COBYLA.

       The  nlopt_minimize_constrained  function  is  a wrapper around several
       free/open-source minimization packages, as well as some new implementa-
       tions of published optimization algorithms.  You could, of course, com-
       pile and call these packages separately, and in some  cases  this  will
       provide  greater  flexibility  than  is  available  via the nlopt_mini-
       mize_constrained interface.  However, depending upon the specific func-
       tion  being minimized, the different algorithms will vary in effective-
       ness.  The intent of nlopt_minimize_constrained  is  to  allow  you  to
       quickly  switch between algorithms in order to experiment with them for
       your problem, by providing a simple unified interface to these  subrou-
       tines.

OBJECTIVE FUNCTION

       nlopt_minimize_constrained()  minimizes  an objective function f of the
       form:

             double f(int n,
                      const double* x,
                      double* grad,
                      void* f_data);

       The return value should be the value of the function at  the  point  x,
       where  x  points  to an array of length n of the design variables.  The
       dimension n is identical  to  the  one  passed  to  nlopt_minimize_con-
       strained().

       In  addition,  if the argument grad is not NULL, then grad points to an
       array of length n which should (upon return) be set to the gradient  of
       the  function  with  respect  to  the  design variables at x.  That is,
       grad[i] should upon return contain the partial derivative df/dx[i], for
       0  <=  i  <  n, if grad is non-NULL.  Not all of the optimization algo-
       rithms (below) use the gradient information: for algorithms  listed  as
       "derivative-free," the grad argument will always be NULL and need never
       be computed.  (For algorithms that do use  gradient  information,  how-
       ever, grad may still be NULL for some calls.)

       The  f_data  argument  is  the  same  as  the one passed to nlopt_mini-
       mize_constrained(), and may be used to pass any additional data through
       to  the function.  (That is, it may be a pointer to some caller-defined
       data structure/type containing information your function  needs,  which
       you convert from void* by a typecast.)
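
       As an illustration, a sketch of an objective of this form (the name
       myfunc and the quadratic objective sum of x[i]^2 are purely illustra-
       tive, not part of NLopt):

             double myfunc(int n, const double* x, double* grad, void* f_data)
             {
                 double sum = 0.0;
                 int i;
                 (void) f_data;              /* no extra data needed here */
                 for (i = 0; i < n; ++i) {
                     if (grad)               /* gradient requested? */
                         grad[i] = 2.0 * x[i];   /* df/dx[i] */
                     sum += x[i] * x[i];
                 }
                 return sum;
             }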

BOUND CONSTRAINTS

       Most  of the algorithms in NLopt are designed for minimization of func-
       tions with simple bound constraints on the inputs.  That is, the  input
       vectors  x[i] are constrained to lie in a hyperrectangle lb[i] <= x[i]
       <= ub[i] for 0 <= i < n, where lb and ub are the two arrays  passed  to
       nlopt_minimize_constrained().

       However,  a  few  of the algorithms support partially or totally uncon-
       strained optimization, as noted below, where a (totally  or  partially)
       unconstrained  design  variable  is indicated by a lower bound equal to
       -Inf and/or an upper bound equal to +Inf.  Here, Inf  is  the  IEEE-754
       floating-point  infinity,  which  (in  ANSI  C99) is represented by the
       macro INFINITY in math.h.  Alternatively, for older C versions you  may
       also use the macro HUGE_VAL (also in math.h).
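
       For example, to leave the first of two design variables totally uncon-
       strained while restricting the second to [0,1], one might pass (a
       sketch):

             double lb[2] = { -HUGE_VAL, 0.0 };  /* x[0]: no lower bound */
             double ub[2] = { +HUGE_VAL, 1.0 };  /* x[0]: no upper bound */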

       With  some  of the algorithms, especially those that do not require de-
       rivative information, a simple (but not especially  efficient)  way  to
       implement  arbitrary nonlinear constraints is to return Inf (see above)
       whenever the constraints are violated by a given input x.  More  gener-
       ally,  there  are  various  ways  to  implement  constraints  by adding
       "penalty terms" to your objective function, which are described in  the
       optimization  literature.  A much more efficient way to specify nonlin-
       ear constraints is described below, but is only supported  by  a  small
       subset of the algorithms.
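
       As a sketch of the +Inf approach (suitable for derivative-free algo-
       rithms, where grad is always NULL; the constraint x[0] + x[1] <= 1 and
       the objective here are hypothetical):

             double f_with_penalty(int n, const double* x, double* grad,
                                   void* f_data)
             {
                 (void) n; (void) grad; (void) f_data;
                 if (x[0] + x[1] > 1.0)    /* constraint violated? */
                     return HUGE_VAL;      /* reject this point */
                 return x[0]*x[0] + x[1]*x[1];  /* illustrative objective */
             }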

NONLINEAR CONSTRAINTS

       The  nlopt_minimize_constrained  function  also allows you to specify m
       nonlinear constraints via the function fc, where m is  any  nonnegative
       integer.   However,  nonzero  m  is  currently  only  supported  by the
       NLOPT_LD_MMA and NLOPT_LN_COBYLA algorithms below.

       In particular, the nonlinear constraints are of the form  fc(x)  <=  0,
       where  the  function  fc  is of the same form as the objective function
       described above:

             double fc(int n,
                       const double* x,
                       double* grad,
                       void* fc_datum);

       The return value should be the value of the constraint at the point  x,
       where  the  dimension  n  is identical to the one passed to nlopt_mini-
       mize_constrained().  As for the objective  function,  if  the  argument
       grad is not NULL, then grad points to an array of length n which should
       (upon return) be set to the gradient of the function with respect to x.
       (For any algorithm listed as "derivative-free" below, the grad argument
       will always be NULL and need never be computed.)

       The fc_datum argument is  derived  from  the  fc_data  argument  passed
       to nlopt_minimize_constrained(); it may be used to pass any  additional
       data through to the function, and serves to distinguish  between  dif-
       ferent constraints.

       In  particular,  the  constraint function fc will be called (at most) m
       times for each x, and the i-th constraint (0 <= i < m) will  be  passed
       an fc_datum argument equal to fc_data offset by i * fc_datum_size.  For
       example, suppose that you have a data  structure  of  type  "foo"  that
       describes  the data needed by each constraint, and you store the infor-
       mation for the constraints in an array "foo data[m]".   In  this  case,
       you  would  pass "data" as the fc_data parameter to nlopt_minimize_con-
       strained, and "sizeof(foo)" as the fc_datum_size parameter.  Then, your
       fc  function  would  be  called  m  times for each point, and be passed
       &data[0] through &data[m-1] in sequence.
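
       A sketch of this pattern, with a hypothetical structure foo holding
       the coefficients of linear constraints a*x[0] + b <= 0:

             typedef struct { double a, b; } foo;

             double fc(int n, const double* x, double* grad, void* fc_datum)
             {
                 const foo* d = (const foo*) fc_datum;
                 if (grad) {
                     int i;
                     grad[0] = d->a;        /* d(fc)/dx[0] */
                     for (i = 1; i < n; ++i)
                         grad[i] = 0.0;     /* fc depends only on x[0] */
                 }
                 return d->a * x[0] + d->b;
             }

             foo data[2] = { { 1.0, -2.0 }, { -1.0, 0.5 } };

       Passing 2 as m, fc as the constraint function, data as fc_data, and
       sizeof(foo) as fc_datum_size would then cause fc to be called with
       &data[0] and &data[1] for each point x.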

ALGORITHMS

       The algorithm parameter specifies the optimization algorithm (for  more
       detail  on  these,  see the README files in the source-code subdirecto-
       ries), and can take on any of the following constant values.  Constants
       with  _G{N,D}_  in  their  names  refer to global optimization methods,
       whereas _L{N,D}_ refers to local optimization methods (that try to find
       a  local  minimum  starting from the starting guess x).  Constants with
       _{G,L}N_ refer to non-gradient (derivative-free) algorithms that do not
       require  the  objective function to supply a gradient, whereas _{G,L}D_
       refers to derivative-based algorithms that require the objective  func-
       tion to supply a gradient.  (Especially for local optimization, deriva-
       tive-based algorithms are generally superior to  derivative-free  ones:
       the gradient is good to have if you can compute it cheaply, e.g. via an
       adjoint method.)

       NLOPT_GN_DIRECT_L
              Perform a global (G) derivative-free (N) optimization using  the
              DIRECT-L search algorithm by Jones et al. as modified by Gablon-
              sky et al. to be more weighted towards local search.   Does  not
              support  unconstrained  optimization.   There  are  also several
              other variants of  the  DIRECT  algorithm  that  are  supported:
              NLOPT_GN_DIRECT,   which   is  the  original  DIRECT  algorithm;
              NLOPT_GN_DIRECT_L_RAND, a slightly randomized version of DIRECT-
              L   that  may  be  better  in  high-dimensional  search  spaces;
              NLOPT_GN_DIRECT_NOSCAL,      NLOPT_GN_DIRECT_L_NOSCAL,       and
              NLOPT_GN_DIRECT_L_RAND_NOSCAL,  which  are  versions  of  DIRECT
              where the dimensions are not rescaled to a unit hypercube (which
              means that dimensions with larger bounds are given more weight).

       NLOPT_GN_ORIG_DIRECT_L
              A  global  (G)  derivative-free  optimization using the DIRECT-L
              algorithm as above, along with NLOPT_GN_ORIG_DIRECT which is the
              original  DIRECT  algorithm.   Unlike  NLOPT_GN_DIRECT_L  above,
              these two algorithms refer to code based on the original Fortran
              code  of Gablonsky et al., which has some hard-coded limitations
              on the number of subdivisions etc. and does not support  all  of
              the  NLopt  stopping  criteria,  but  on the other hand supports
              arbitrary nonlinear constraints as described above.

       NLOPT_GD_STOGO
              Global (G) optimization using the StoGO algorithm by  Madsen  et
              al.  StoGO exploits gradient information (D) (which must be sup-
              plied by the objective) for its local searches, and performs the
              global  search by a branch-and-bound technique.  Only bound-con-
              strained optimization is supported.  There is also another vari-
              ant  of  this algorithm, NLOPT_GD_STOGO_RAND, which is a random-
              ized version of the StoGO search scheme.  The  StoGO  algorithms
              are  only  available  if NLopt is compiled with C++ enabled, and
              should be linked via -lnlopt_cxx (via a C++ compiler,  in  order
              to link the C++ standard libraries).

       NLOPT_LN_NELDERMEAD
              Perform  a  local (L) derivative-free (N) optimization, starting
              at x, using the Nelder-Mead simplex algorithm, modified to  sup-
              port bound constraints.  Nelder-Mead, while popular, is known to
              occasionally fail to converge for some objective  functions,  so
              it  should  be  used  with  caution.  Anecdotal evidence, on the
              other hand, suggests that it works fairly well for discontinuous
              objectives.  See also NLOPT_LN_SBPLX below.

       NLOPT_LN_SBPLX
              Perform  a  local (L) derivative-free (N) optimization, starting
              at x, using an algorithm based on the Subplex algorithm of Rowan
              et  al.,  which  is  an improved variant of Nelder-Mead (above).
              Our implementation does not use Rowan's original code,  and  has
              some minor modifications such as explicit support for bound con-
              straints.  (Like Nelder-Mead, Subplex often works well in  prac-
              tice,  even for discontinuous objectives, but there is no rigor-
              ous guarantee that it will converge.)  Nonlinear constraints can
              be  crudely supported by returning +Inf when the constraints are
              violated, as explained above.

       NLOPT_LN_PRAXIS
              Local (L) derivative-free (N) optimization using the  principal-
              axis  method,  based  on  code  by  Richard Brent.  Designed for
              unconstrained optimization, although bound constraints are  sup-
              ported  too  (via  the inefficient method of returning +Inf when
              the constraints are violated).

       NLOPT_LD_LBFGS
              Local (L) gradient-based (D) optimization using the limited-mem-
              ory  BFGS (L-BFGS) algorithm.  (The objective function must sup-
              ply the gradient.)  Unconstrained optimization is  supported  in
              addition  to  simple bound constraints (see above).  Based on an
              implementation by Luksan et al.

       NLOPT_LD_VAR2
              Local (L) gradient-based (D) optimization using a  shifted  lim-
              ited-memory  variable-metric  method  based on code by Luksan et
              al., supporting both unconstrained and  bound-constrained  opti-
              mization.   NLOPT_LD_VAR2  uses  a  rank-2  method,  while
              NLOPT_LD_VAR1 is another variant using a rank-1 method.

       NLOPT_LD_TNEWTON_PRECOND_RESTART
              Local (L) gradient-based (D) optimization using an LBFGS-precon-
              ditioned  truncated Newton method with steepest-descent restart-
              ing, based on code by Luksan  et  al.,  supporting  both  uncon-
              strained  and bound-constrained optimization.  There are several
              other variants of this algorithm: NLOPT_LD_TNEWTON_PRECOND (same
              without restarting), NLOPT_LD_TNEWTON_RESTART (same without pre-
              conditioning), and NLOPT_LD_TNEWTON (same without restarting  or
              preconditioning).

       NLOPT_GN_CRS2_LM
              Global (G) derivative-free (N) optimization using the controlled
              random search (CRS2) algorithm of Price, with the  "local  muta-
              tion" (LM) modification suggested by Kaelo and Ali.

       NLOPT_GD_MLSL_LDS, NLOPT_GN_MLSL_LDS
              Global (G) derivative-based (D) or derivative-free (N) optimiza-
              tion using the multi-level single-linkage (MLSL) algorithm  with
              a  low-discrepancy  sequence  (LDS).   This algorithm executes a
              quasi-random (LDS) sequence of local searches, with a clustering
              heuristic  to  avoid  multiple local searches for the same local
              minimum.  The local  search  uses  the  derivative/nonderivative
              algorithm  set  by  nlopt_set_local_search_algorithm  (currently
              defaulting  to  NLOPT_LD_MMA  and  NLOPT_LN_COBYLA  for  deriva-
              tive/nonderivative  searches, respectively).  There are also two
              other  variants,  NLOPT_GD_MLSL  and  NLOPT_GN_MLSL,  which  use
              pseudo-random  numbers  (instead  of  an LDS) as in the original
              MLSL algorithm.

       NLOPT_LD_MMA
              Local (L) gradient-based (D) optimization using  the  method  of
              moving  asymptotes  (MMA),  or  rather  a refined version of the
              algorithm as published by Svanberg (2002).  (NLopt uses an inde-
              pendent  free-software/open-source  implementation of Svanberg's
              algorithm.)  The NLOPT_LD_MMA algorithm supports both bound-con-
              strained  and  unconstrained  optimization, and also supports an
              arbitrary number  (m)  of  nonlinear  constraints  as  described
              above.

       NLOPT_LN_COBYLA
              Local  (L)  derivative-free  (N)  optimization  using the COBYLA
              algorithm of Powell (Constrained Optimization BY Linear Approxi-
              mations).   The  NLOPT_LN_COBYLA  algorithm supports both bound-
              constrained and unconstrained optimization, and also supports an
              arbitrary  number  (m)  of  nonlinear  constraints  as described
              above.

       NLOPT_LN_NEWUOA_BOUND
              Local (L) derivative-free (N) optimization using  a  variant  of
              the NEWUOA algorithm of Powell, based on  successive  quadratic
              approximations of the objective function.  We  have  modified
              the algorithm to support bound constraints.  The original NEWUOA
              algorithm is also available, as NLOPT_LN_NEWUOA, but this  algo-
              rithm  ignores the bound constraints lb and ub, and so it should
              only be used for unconstrained problems.

STOPPING CRITERIA

       Multiple stopping criteria for the optimization are supported, as spec-
       ified  by the following arguments to nlopt_minimize_constrained().  The
       optimization halts whenever any one of these criteria is satisfied.  In
       some  cases,  the  precise  interpretation  of  the  stopping criterion
       depends on the optimization algorithm above (although we have tried  to
       make them as consistent as reasonably possible), and some algorithms do
       not support all of the stopping criteria.

       Important: you do not need to use all of  the  stopping  criteria!   In
       most cases, you only need one or two, and can set the remainder to val-
       ues where they do nothing (as described below).  A call sketch  illus-
       trating typical settings follows this list.

       minf_max
              Stop when a function value less than or  equal  to  minf_max  is
              found.   Set  to  -Inf or NaN (see constraints section above) to
              disable.

       ftol_rel
              Relative tolerance on function value: stop when an  optimization
              step  (or an estimate of the minimum) changes the function value
              by less than ftol_rel multiplied by the absolute  value  of  the
              function value.  (If there is any chance that your minimum func-
              tion value is close to zero, you might want to set  an  absolute
              tolerance with ftol_abs as well.)  Disabled if non-positive.

       ftol_abs
              Absolute  tolerance on function value: stop when an optimization
              step (or an estimate of the minimum) changes the function  value
              by less than ftol_abs.  Disabled if non-positive.

       xtol_rel
              Relative  tolerance  on design variables: stop when an optimiza-
              tion step (or an estimate of the minimum) changes  every  design
              variable  by less than xtol_rel multiplied by the absolute value
              of the design variable.  (If there is any chance that an optimal
              design variable is close to zero, you might want to set an abso-
              lute tolerance with xtol_abs as well.)   Disabled  if  non-posi-
              tive.

       xtol_abs
              Pointer  to  an  array of length n giving absolute tolerances on
              design variables: stop when an optimization step (or an estimate
              of  the minimum) changes every design variable x[i] by less than
              xtol_abs[i].  Disabled if non-positive, or if xtol_abs is NULL.

       maxeval
              Stop when the number of function  evaluations  exceeds  maxeval.
              (This  is  not  a strict maximum: the number of function evalua-
              tions may exceed maxeval  slightly,  depending  upon  the  algo-
              rithm.)  Disabled if non-positive.

       maxtime
              Stop  when  the  optimization time (in seconds) exceeds maxtime.
              (This is not a strict  maximum:  the  time  may  exceed  maxtime
              slightly,  depending  upon  the  algorithm  and on how slow your
              function evaluation is.)  Disabled if non-positive.
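
       As a sketch, the following call (reusing the illustrative myfunc, lb,
       and ub from the examples above) stops on a 1e-4 relative change in the
       function value or after 500 evaluations, with every other criterion
       disabled:

             double x[2] = { 0.5, 0.5 };    /* starting guess */
             double minf;
             nlopt_result ret = nlopt_minimize_constrained(
                 NLOPT_LN_COBYLA, 2, myfunc, NULL,
                 0, NULL, NULL, 0,          /* no nonlinear constraints */
                 lb, ub, x, &minf,
                 -HUGE_VAL,                 /* minf_max: disabled */
                 1e-4,                      /* ftol_rel */
                 0.0,                       /* ftol_abs: disabled */
                 0.0, NULL,                 /* xtol_rel, xtol_abs: disabled */
                 500,                       /* maxeval */
                 0.0);                      /* maxtime: disabled */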

RETURN VALUE

       The value returned is one of the following enumerated constants.

   Successful termination (positive return values):
       NLOPT_SUCCESS
              Generic success return value.

       NLOPT_MINF_MAX_REACHED
              Optimization stopped because minf_max (above) was reached.

       NLOPT_FTOL_REACHED
              Optimization stopped because ftol_rel or  ftol_abs  (above)  was
              reached.

       NLOPT_XTOL_REACHED
              Optimization  stopped  because  xtol_rel or xtol_abs (above) was
              reached.

       NLOPT_MAXEVAL_REACHED
              Optimization stopped because maxeval (above) was reached.

       NLOPT_MAXTIME_REACHED
              Optimization stopped because maxtime (above) was reached.

   Error codes (negative return values):
       NLOPT_FAILURE
              Generic failure code.

       NLOPT_INVALID_ARGS
              Invalid arguments (e.g.  lower  bounds  are  bigger  than  upper
              bounds, an unknown algorithm was specified, etcetera).

       NLOPT_OUT_OF_MEMORY
              Ran out of memory.
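
       All error codes are negative and all success codes positive, so a
       caller can test generically for failure.  Continuing the call sketch
       under STOPPING CRITERIA above (and assuming stdio.h is included):

             if (ret < 0)
                 fprintf(stderr, "nlopt failed: code %d\n", (int) ret);
             else
                 printf("minimum %g found (code %d)\n", minf, (int) ret);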

PSEUDORANDOM NUMBERS

       For  stochastic  optimization  algorithms,  we use pseudorandom numbers
       generated by the Mersenne Twister algorithm, based on code from  Makoto
       Matsumoto.   By  default,  the seed for the random numbers is generated
       from the system time, so that they will be different each time you  run
       the  program.  If you want to use deterministic random numbers, you can
       set the seed by calling:

                   void nlopt_srand(unsigned long seed);
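
       For example, to make runs of the stochastic algorithms reproducible
       from one execution to the next, one might call (the seed value here is
       arbitrary):

             nlopt_srand(12345);   /* same pseudorandom sequence every run */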

       Some of the algorithms also  support  using  low-discrepancy  sequences
       (LDS),  sometimes  known as quasi-random numbers.  NLopt uses the Sobol
       LDS, which is implemented for up to 1111 dimensions.

AUTHORS

       Written by Steven G. Johnson.

       Copyright (c) 2007-2014 Massachusetts Institute of Technology.

SEE ALSO

       nlopt_minimize(3)
MIT                               2007-08-23     NLOPT_MINIMIZE_CONSTRAINED(3)