NLOPT(3)                   NLopt programming manual                   NLOPT(3)



NAME
       nlopt - Nonlinear optimization library

SYNOPSIS
       #include <nlopt.h>

       nlopt_opt opt = nlopt_create(algorithm, n);
       nlopt_set_min_objective(opt, f, f_data);
       nlopt_set_ftol_rel(opt, tol);
       ...
       nlopt_optimize(opt, x, &opt_f);
       nlopt_destroy(opt);

       The "..." indicates any number of calls to NLopt functions, described
       below, that set parameters of the optimization, constraints, and
       stopping criteria.  Here, nlopt_set_ftol_rel is merely an example of
       a possible stopping criterion.  You should link the resulting program
       with the linker flags -lnlopt -lm on Unix.

DESCRIPTION
       NLopt is a library for nonlinear optimization.  It attempts to
       minimize (or maximize) a given nonlinear objective function f of n
       design variables, using the specified algorithm, possibly subject to
       linear or nonlinear constraints.  The optimum function value found is
       returned in opt_f (type double), with the corresponding design
       variable values returned in the (double) array x of length n.  The
       input values in x should be a starting guess for the optimum.

       The parameters of the optimization are controlled via the object opt
       of type nlopt_opt, which is created by the function nlopt_create and
       disposed of by nlopt_destroy.  By calling various functions in the
       NLopt library, one can specify stopping criteria (e.g., a relative
       tolerance on the objective function value is specified by
       nlopt_set_ftol_rel), upper and/or lower bounds on the design
       parameters x, and even arbitrary nonlinear inequality and equality
       constraints.

       By changing the parameter algorithm among several predefined
       constants described below, one can switch easily between a variety of
       minimization algorithms.  Some of these algorithms require the
       gradient (derivatives) of the function to be supplied via f, and
       others do not require derivatives.  Some of the algorithms attempt to
       find a global optimum within the given bounds, and others find only a
       local optimum.  Most of the algorithms only handle the case where
       there are no nonlinear constraints.  The NLopt library is a wrapper
       around several free/open-source minimization packages, as well as
       some new implementations of published optimization algorithms.  You
       could, of course, compile and call these packages separately, and in
       some cases this will provide greater flexibility than is available
       via NLopt.  However, depending upon the specific function being
       optimized, the different algorithms will vary in effectiveness.  The
       intent of NLopt is to allow you to quickly switch between algorithms
       in order to experiment with them for your problem, by providing a
       simple unified interface to these subroutines.
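
       As a concrete illustration, here is a minimal sketch of a complete
       program; the quadratic objective and the choice of
       NLOPT_LN_NELDERMEAD are arbitrary examples, not a recommendation:

           #include <stdio.h>
           #include <nlopt.h>

           /* example objective: f(x) = (x[0]-1)^2 + (x[1]+2)^2 */
           double myfunc(unsigned n, const double *x, double *grad, void *data)
           {
               if (grad) {  /* used only by derivative-based algorithms */
                   grad[0] = 2.0 * (x[0] - 1.0);
                   grad[1] = 2.0 * (x[1] + 2.0);
               }
               return (x[0]-1.0)*(x[0]-1.0) + (x[1]+2.0)*(x[1]+2.0);
           }

           int main(void)
           {
               double x[2] = {0.0, 0.0};  /* starting guess */
               double opt_f;
               nlopt_opt opt = nlopt_create(NLOPT_LN_NELDERMEAD, 2);
               nlopt_set_min_objective(opt, myfunc, NULL);
               nlopt_set_ftol_rel(opt, 1e-8);
               if (nlopt_optimize(opt, x, &opt_f) < 0)
                   printf("nlopt failed!\n");
               else
                   printf("minimum %g at (%g, %g)\n", opt_f, x[0], x[1]);
               nlopt_destroy(opt);
               return 0;
           }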

OBJECTIVE FUNCTION
       The objective function is specified by calling one of:

           nlopt_result nlopt_set_min_objective(nlopt_opt opt,
                                                nlopt_func f,
                                                void* f_data);
           nlopt_result nlopt_set_max_objective(nlopt_opt opt,
                                                nlopt_func f,
                                                void* f_data);

       depending on whether one wishes to minimize or maximize the objective
       function f, respectively.  The function f should be of the form:

           double f(unsigned n,
                    const double* x,
                    double* grad,
                    void* f_data);

       The return value should be the value of the function at the point x,
       where x points to an array of length n of the design variables.  The
       dimension n is identical to the one passed to nlopt_create.

       In addition, if the argument grad is not NULL, then grad points to an
       array of length n which should (upon return) be set to the gradient
       of the function with respect to the design variables at x.  That is,
       grad[i] should upon return contain the partial derivative df/dx[i],
       for 0 <= i < n, if grad is non-NULL.  Not all of the optimization
       algorithms (below) use the gradient information: for algorithms
       listed as "derivative-free," the grad argument will always be NULL
       and need never be computed.  (For algorithms that do use gradient
       information, however, grad may still be NULL for some calls.)

       The f_data argument is the same as the one passed to
       nlopt_set_min_objective or nlopt_set_max_objective, and may be used
       to pass any additional data through to the function.  (That is, it
       may be a pointer to some caller-defined data structure/type
       containing information your function needs, which you convert from
       void* by a typecast.)
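
       For example, a minimal sketch of this idiom (the my_params structure
       and its field values are hypothetical):

           typedef struct { double a, b; } my_params;

           double f(unsigned n, const double *x, double *grad, void *f_data)
           {
               my_params *p = (my_params *) f_data;  /* recover user data */
               if (grad)            /* only for derivative-based algorithms */
                   grad[0] = 2.0 * p->a * x[0];
               return p->a * x[0] * x[0] + p->b;     /* f(x) = a*x0^2 + b */
           }

           /* ...
              my_params params = {1.0, 2.0};
              nlopt_set_min_objective(opt, f, &params);
              ... */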

BOUND CONSTRAINTS
       Most of the algorithms in NLopt are designed for minimization of
       functions with simple bound constraints on the inputs.  That is, the
       input vectors x[i] are constrained to lie in a hyperrectangle lb[i]
       <= x[i] <= ub[i] for 0 <= i < n.  These bounds are specified by
       passing arrays lb and ub of length n to one or both of the functions:

           nlopt_result nlopt_set_lower_bounds(nlopt_opt opt,
                                               const double* lb);
           nlopt_result nlopt_set_upper_bounds(nlopt_opt opt,
                                               const double* ub);

       If a lower/upper bound is not set, the default is no bound
       (unconstrained, i.e. a bound of infinity); it is possible to have
       lower bounds but not upper bounds or vice versa.  Alternatively, the
       user can call one of the above functions and explicitly pass a lower
       bound of -HUGE_VAL and/or an upper bound of +HUGE_VAL for some design
       variables to make them have no lower/upper bound, respectively.
       (HUGE_VAL is the standard C constant for a floating-point infinity,
       found in the math.h header file.)

       Note, however, that some of the algorithms in NLopt, in particular
       most of the global-optimization algorithms, do not support
       unconstrained optimization and will return an error if you do not
       supply finite lower and upper bounds.

       For convenience, the following two functions are supplied in order
       to set the lower/upper bounds for all design variables to a single
       constant (so that you don't have to fill an array with a constant
       value):

           nlopt_result nlopt_set_lower_bounds1(nlopt_opt opt,
                                                double lb);
           nlopt_result nlopt_set_upper_bounds1(nlopt_opt opt,
                                                double ub);
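
       For instance, one plausible setup (a sketch, assuming n == 2) mixing
       per-variable and constant bounds:

           #include <math.h>  /* for HUGE_VAL */

           double lb[2] = { 0.0, -HUGE_VAL };  /* x[1] unbounded below */
           nlopt_set_lower_bounds(opt, lb);
           nlopt_set_upper_bounds1(opt, 10.0); /* x[i] <= 10 for all i */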

NONLINEAR CONSTRAINTS
       Several of the algorithms in NLopt (MMA, COBYLA, SLSQP, ISRES, and
       ORIG_DIRECT) also support arbitrary nonlinear inequality constraints,
       and some also allow nonlinear equality constraints (COBYLA, SLSQP,
       ISRES, and AUGLAG).  For these algorithms, you can specify as many
       nonlinear constraints as you wish by calling the following functions
       multiple times.

       In particular, a nonlinear inequality constraint of the form fc(x)
       <= 0, where the function fc is of the same form as the objective
       function described above, can be specified by calling:

           nlopt_result nlopt_add_inequality_constraint(nlopt_opt opt,
                                                        nlopt_func fc,
                                                        void* fc_data,
                                                        double tol);

       Just as for the objective function, fc_data is a pointer to
       arbitrary user data that will be passed through to the fc function
       whenever it is called.  The parameter tol is a tolerance that is
       used for the purpose of stopping criteria only: a point x is
       considered feasible for judging whether to stop the optimization if
       fc(x) <= tol.  A tolerance of zero means that NLopt will try not to
       consider any x feasible unless fc is strictly non-positive;
       generally, at least a small positive tolerance is advisable to
       reduce sensitivity to rounding errors.

       A nonlinear equality constraint of the form h(x) = 0, where the
       function h is of the same form as the objective function described
       above, can be specified by calling:

           nlopt_result nlopt_add_equality_constraint(nlopt_opt opt,
                                                      nlopt_func h,
                                                      void* h_data,
                                                      double tol);

       Just as for the objective function, h_data is a pointer to arbitrary
       user data that will be passed through to the h function whenever it
       is called.  The parameter tol is a tolerance that is used for the
       purpose of stopping criteria only: a point x is considered feasible
       for judging whether to stop the optimization if |h(x)| <= tol.  For
       equality constraints, a small positive tolerance is strongly advised
       in order to allow NLopt to converge even if the equality constraint
       is slightly nonzero.

       (For any algorithm listed as "derivative-free" below, the grad
       argument to fc or h will always be NULL and need never be computed.)

       To remove all of the inequality and/or equality constraints from a
       given problem opt, you can call the following functions:

           nlopt_result nlopt_remove_inequality_constraints(nlopt_opt opt);
           nlopt_result nlopt_remove_equality_constraints(nlopt_opt opt);
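
       As an illustration, here is a sketch of adding the hypothetical
       constraint x[0] + x[1] <= 1, rewritten in the required form as
       fc(x) = x[0] + x[1] - 1 <= 0:

           double fc(unsigned n, const double *x, double *grad, void *data)
           {
               if (grad) {          /* for derivative-based algorithms */
                   grad[0] = 1.0;
                   grad[1] = 1.0;
               }
               return x[0] + x[1] - 1.0;
           }

           /* ... nlopt_add_inequality_constraint(opt, fc, NULL, 1e-8); */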

ALGORITHMS
       The algorithm parameter specifies the optimization algorithm (for
       more detail on these, see the README files in the source-code
       subdirectories), and can take on any of the following constant
       values.

       Constants with _G{N,D}_ in their names refer to global optimization
       methods, whereas _L{N,D}_ refers to local optimization methods (that
       try to find a local optimum starting from the starting guess x).
       Constants with _{G,L}N_ refer to non-gradient (derivative-free)
       algorithms that do not require the objective function to supply a
       gradient, whereas _{G,L}D_ refers to derivative-based algorithms
       that require the objective function to supply a gradient.
       (Especially for local optimization, derivative-based algorithms are
       generally superior to derivative-free ones: the gradient is good to
       have if you can compute it cheaply, e.g. via an adjoint method.)

       The algorithm specified for a given problem opt is returned by the
       function:

           nlopt_algorithm nlopt_get_algorithm(nlopt_opt opt);

       The available algorithms are:

       NLOPT_GN_DIRECT_L
              Perform a global (G) derivative-free (N) optimization using
              the DIRECT-L search algorithm by Jones et al. as modified by
              Gablonsky et al. to be more weighted towards local search.
              Does not support unconstrained optimization.  Several other
              variants of the DIRECT algorithm are also supported:
              NLOPT_GN_DIRECT, which is the original DIRECT algorithm;
              NLOPT_GN_DIRECT_L_RAND, a slightly randomized version of
              DIRECT-L that may be better in high-dimensional search
              spaces; and NLOPT_GN_DIRECT_NOSCAL, NLOPT_GN_DIRECT_L_NOSCAL,
              and NLOPT_GN_DIRECT_L_RAND_NOSCAL, which are versions of
              DIRECT where the dimensions are not rescaled to a unit
              hypercube (which means that dimensions with larger bounds are
              given more weight).

       NLOPT_GN_ORIG_DIRECT_L
              A global (G) derivative-free optimization using the DIRECT-L
              algorithm as above, along with NLOPT_GN_ORIG_DIRECT, which is
              the original DIRECT algorithm.  Unlike NLOPT_GN_DIRECT_L
              above, these two algorithms refer to code based on the
              original Fortran code of Gablonsky et al., which has some
              hard-coded limitations on the number of subdivisions etc. and
              does not support all of the NLopt stopping criteria, but
              which on the other hand supports arbitrary nonlinear
              inequality constraints.

       NLOPT_GD_STOGO
              Global (G) optimization using the StoGO algorithm by Madsen
              et al.  StoGO exploits gradient information (D) (which must
              be supplied by the objective) for its local searches, and
              performs the global search by a branch-and-bound technique.
              Only bound-constrained optimization is supported.  There is
              also another variant of this algorithm, NLOPT_GD_STOGO_RAND,
              which is a randomized version of the StoGO search scheme.
              The StoGO algorithms are only available if NLopt is compiled
              with C++ code enabled, and should be linked via -lnlopt_cxx
              instead of -lnlopt (via a C++ compiler, in order to link the
              C++ standard libraries).

       NLOPT_LN_NELDERMEAD
              Perform a local (L) derivative-free (N) optimization,
              starting at x, using the Nelder-Mead simplex algorithm,
              modified to support bound constraints.  Nelder-Mead, while
              popular, is known to occasionally fail to converge for some
              objective functions, so it should be used with caution.
              Anecdotal evidence, on the other hand, suggests that it works
              fairly well for some cases that are hard to handle otherwise,
              e.g. noisy/discontinuous objectives.  See also NLOPT_LN_SBPLX
              below.

       NLOPT_LN_SBPLX
              Perform a local (L) derivative-free (N) optimization,
              starting at x, using an algorithm based on the Subplex
              algorithm of Rowan et al., which is an improved variant of
              Nelder-Mead (above).  Our implementation does not use Rowan's
              original code, and has some minor modifications such as
              explicit support for bound constraints.  (Like Nelder-Mead,
              Subplex often works well in practice, even for
              noisy/discontinuous objectives, but there is no rigorous
              guarantee that it will converge.)

       NLOPT_LN_PRAXIS
              Local (L) derivative-free (N) optimization using the
              principal-axis method, based on code by Richard Brent.
              Designed for unconstrained optimization, although bound
              constraints are supported too (via the inefficient method of
              returning +Inf when the constraints are violated).

       NLOPT_LD_LBFGS
              Local (L) gradient-based (D) optimization using the
              limited-memory BFGS (L-BFGS) algorithm.  (The objective
              function must supply the gradient.)  Unconstrained
              optimization is supported in addition to simple bound
              constraints (see above).  Based on an implementation by
              Luksan et al.

       NLOPT_LD_VAR2
              Local (L) gradient-based (D) optimization using a shifted
              limited-memory variable-metric method based on code by Luksan
              et al., supporting both unconstrained and bound-constrained
              optimization.  NLOPT_LD_VAR2 uses a rank-2 method, while
              NLOPT_LD_VAR1 is another variant using a rank-1 method.

       NLOPT_LD_TNEWTON_PRECOND_RESTART
              Local (L) gradient-based (D) optimization using an
              L-BFGS-preconditioned truncated Newton method with
              steepest-descent restarting, based on code by Luksan et al.,
              supporting both unconstrained and bound-constrained
              optimization.  There are several other variants of this
              algorithm: NLOPT_LD_TNEWTON_PRECOND (same without
              restarting), NLOPT_LD_TNEWTON_RESTART (same without
              preconditioning), and NLOPT_LD_TNEWTON (same without
              restarting or preconditioning).

       NLOPT_GN_CRS2_LM
              Global (G) derivative-free (N) optimization using the
              controlled random search (CRS2) algorithm of Price, with the
              "local mutation" (LM) modification suggested by Kaelo and
              Ali.

       NLOPT_GN_ISRES
              Global (G) derivative-free (N) optimization using a genetic
              algorithm (mutation and differential evolution), using a
              stochastic ranking to handle nonlinear inequality and
              equality constraints as suggested by Runarsson and Yao.

       NLOPT_G_MLSL_LDS, NLOPT_G_MLSL
              Global (G) optimization using the multi-level single-linkage
              (MLSL) algorithm with a low-discrepancy sequence (LDS) or
              pseudorandom numbers, respectively.  This algorithm executes
              a low-discrepancy or pseudorandom sequence of local searches,
              with a clustering heuristic to avoid multiple local searches
              for the same local optimum.  The local search algorithm,
              along with termination criteria/tolerances for the local
              searches, must be specified via nlopt_set_local_optimizer.
              (This subsidiary algorithm can be with or without
              derivatives, and determines whether the objective function
              needs gradients.)

       NLOPT_LD_MMA, NLOPT_LD_CCSAQ
              Local (L) gradient-based (D) optimization using the method of
              moving asymptotes (MMA), or rather a refined version of the
              algorithm as published by Svanberg (2002).  (NLopt uses an
              independent free-software/open-source implementation of
              Svanberg's algorithm.)  CCSAQ is a related algorithm from
              Svanberg's paper which uses a local quadratic approximation
              rather than the more complicated MMA model; the two usually
              have similar convergence rates.  The NLOPT_LD_MMA algorithm
              supports both bound-constrained and unconstrained
              optimization, and also supports an arbitrary number (m) of
              nonlinear inequality (not equality) constraints as described
              above.

       NLOPT_LD_SLSQP
              Local (L) gradient-based (D) optimization using sequential
              quadratic programming and BFGS updates, supporting arbitrary
              nonlinear inequality and equality constraints, based on the
              code by Dieter Kraft (1988) adapted for use by the SciPy
              project.  Note that this algorithm uses dense-matrix methods
              requiring O(n^2) storage and O(n^3) time, making it less
              practical for problems involving more than a few thousand
              parameters.

       NLOPT_LN_COBYLA
              Local (L) derivative-free (N) optimization using the COBYLA
              algorithm of Powell (Constrained Optimization BY Linear
              Approximations).  The NLOPT_LN_COBYLA algorithm supports both
              bound-constrained and unconstrained optimization, and also
              supports an arbitrary number (m) of nonlinear
              inequality/equality constraints as described above.

       NLOPT_LN_NEWUOA_BOUND
              Local (L) derivative-free (N) optimization using a variant of
              the NEWUOA algorithm of Powell, based on successive quadratic
              approximations of the objective function.  We have modified
              the algorithm to support bound constraints.  The original
              NEWUOA algorithm is also available, as NLOPT_LN_NEWUOA, but
              this algorithm ignores the bound constraints lb and ub, and
              so it should only be used for unconstrained problems.  Mostly
              superseded by BOBYQA.

       NLOPT_LN_BOBYQA
              Local (L) derivative-free (N) optimization using the BOBYQA
              algorithm of Powell, based on successive quadratic
              approximations of the objective function, supporting bound
              constraints.

       NLOPT_AUGLAG
              Optimize an objective with nonlinear inequality/equality
              constraints via an unconstrained (or bound-constrained)
              optimization algorithm, using a gradually increasing
              "augmented Lagrangian" penalty for violated constraints.
              Requires you to specify another optimization algorithm for
              optimizing the objective+penalty function, using
              nlopt_set_local_optimizer.  (This subsidiary algorithm can be
              global or local and with or without derivatives, but you must
              specify its own termination criteria.)  A variant,
              NLOPT_AUGLAG_EQ, only uses the penalty approach for equality
              constraints, while inequality constraints are handled
              directly by the subsidiary algorithm (restricting the choice
              of subsidiary algorithms to those that can handle inequality
              constraints).

STOPPING CRITERIA
       Multiple stopping criteria for the optimization are supported, as
       specified by the functions below, which modify a given optimization
       problem opt.  The optimization halts whenever any one of these
       criteria is satisfied.  In some cases, the precise interpretation of
       the stopping criterion depends on the optimization algorithm above
       (although we have tried to make them as consistent as reasonably
       possible), and some algorithms do not support all of the stopping
       criteria.

       Important: you do not need to use all of the stopping criteria!  In
       most cases, you only need one or two, and can omit the remainder
       (all criteria are disabled by default).

       nlopt_result nlopt_set_stopval(nlopt_opt opt,
                                      double stopval);

              Stop when an objective value of at least stopval is found:
              stop minimizing when a value <= stopval is found, or stop
              maximizing when a value >= stopval is found.  (Setting
              stopval to -HUGE_VAL for minimizing or +HUGE_VAL for
              maximizing disables this stopping criterion.)

       nlopt_result nlopt_set_ftol_rel(nlopt_opt opt,
                                       double tol);

              Set relative tolerance on function value: stop when an
              optimization step (or an estimate of the optimum) changes the
              function value by less than tol multiplied by the absolute
              value of the function value.  (If there is any chance that
              your optimum function value is close to zero, you might want
              to set an absolute tolerance with nlopt_set_ftol_abs as
              well.)  Criterion is disabled if tol is non-positive.

       nlopt_result nlopt_set_ftol_abs(nlopt_opt opt,
                                       double tol);

              Set absolute tolerance on function value: stop when an
              optimization step (or an estimate of the optimum) changes the
              function value by less than tol.  Criterion is disabled if
              tol is non-positive.

       nlopt_result nlopt_set_xtol_rel(nlopt_opt opt,
                                       double tol);

              Set relative tolerance on design variables: stop when an
              optimization step (or an estimate of the optimum) changes
              every design variable by less than tol multiplied by the
              absolute value of the design variable.  (If there is any
              chance that an optimal design variable is close to zero, you
              might want to set an absolute tolerance with
              nlopt_set_xtol_abs as well.)  Criterion is disabled if tol is
              non-positive.

       nlopt_result nlopt_set_xtol_abs(nlopt_opt opt,
                                       const double* tol);

              Set absolute tolerances on design variables.  tol is a
              pointer to an array of length n giving the tolerances: stop
              when an optimization step (or an estimate of the optimum)
              changes every design variable x[i] by less than tol[i].

              For convenience, the following function may be used to set
              the absolute tolerances in all n design variables to the same
              value:

                  nlopt_result nlopt_set_xtol_abs1(nlopt_opt opt,
                                                   double tol);

              Criterion is disabled if tol is non-positive.

       nlopt_result nlopt_set_maxeval(nlopt_opt opt,
                                      int maxeval);

              Stop when the number of function evaluations exceeds maxeval.
              (This is not a strict maximum: the number of function
              evaluations may exceed maxeval slightly, depending upon the
              algorithm.)  Criterion is disabled if maxeval is
              non-positive.

       nlopt_result nlopt_set_maxtime(nlopt_opt opt,
                                      double maxtime);

              Stop when the optimization time (in seconds) exceeds maxtime.
              (This is not a strict maximum: the time may exceed maxtime
              slightly, depending upon the algorithm and on how slow your
              function evaluation is.)  Criterion is disabled if maxtime is
              non-positive.
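
       For example, one plausible combination (the tolerance and limit
       values here are purely illustrative) is a relative function
       tolerance plus safety caps on evaluations and wall-clock time:

           nlopt_set_ftol_rel(opt, 1e-6);
           nlopt_set_maxeval(opt, 10000);
           nlopt_set_maxtime(opt, 60.0);  /* at most one minute */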

RETURN VALUE
       Most of the NLopt functions return an enumerated constant of type
       nlopt_result, which takes on one of the following values:

       Successful termination (positive return values):

       NLOPT_SUCCESS
              Generic success return value.

       NLOPT_STOPVAL_REACHED
              Optimization stopped because stopval (above) was reached.

       NLOPT_FTOL_REACHED
              Optimization stopped because ftol_rel or ftol_abs (above) was
              reached.

       NLOPT_XTOL_REACHED
              Optimization stopped because xtol_rel or xtol_abs (above) was
              reached.

       NLOPT_MAXEVAL_REACHED
              Optimization stopped because maxeval (above) was reached.

       NLOPT_MAXTIME_REACHED
              Optimization stopped because maxtime (above) was reached.

       Error codes (negative return values):

       NLOPT_FAILURE
              Generic failure code.

       NLOPT_INVALID_ARGS
              Invalid arguments (e.g. lower bounds are bigger than upper
              bounds, an unknown algorithm was specified, etcetera).

       NLOPT_OUT_OF_MEMORY
              Ran out of memory.

       NLOPT_ROUNDOFF_LIMITED
              Halted because roundoff errors limited progress.

       NLOPT_FORCED_STOP
              Halted because the user called nlopt_force_stop(opt) on the
              optimization's nlopt_opt object opt from the user's objective
              function.
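
       A typical pattern (a sketch; assumes <stdio.h> is included) is to
       treat negative codes as failures and positive codes as information
       about which criterion stopped the optimization:

           nlopt_result res = nlopt_optimize(opt, x, &opt_f);
           if (res < 0)
               fprintf(stderr, "nlopt failed with code %d\n", (int) res);
           else if (res == NLOPT_MAXEVAL_REACHED)
               fprintf(stderr, "stopped on maxeval; may not be converged\n");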

LOCAL OPTIMIZER
       Some of the algorithms, especially MLSL and AUGLAG, use a different
       optimization algorithm as a subroutine, typically for local
       optimization.  You can change the local search algorithm and its
       tolerances by calling:

           nlopt_result nlopt_set_local_optimizer(nlopt_opt opt,
                                                  const nlopt_opt local_opt);

       Here, local_opt is another nlopt_opt object whose parameters are
       used to determine the local search algorithm and stopping criteria.
       (The objective function, bounds, and nonlinear-constraint parameters
       of local_opt are ignored.)  The dimension n of local_opt must match
       that of opt.

       This function makes a copy of the local_opt object, so you can
       freely destroy your original local_opt afterwards.
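
       For instance, a sketch of MLSL with a BOBYQA local search (the
       specific algorithms and tolerance are illustrative choices, and n is
       the problem dimension):

           nlopt_opt opt = nlopt_create(NLOPT_G_MLSL_LDS, n);
           nlopt_opt local = nlopt_create(NLOPT_LN_BOBYQA, n);
           nlopt_set_xtol_rel(local, 1e-4);       /* local stopping criterion */
           nlopt_set_local_optimizer(opt, local); /* opt copies local */
           nlopt_destroy(local);                  /* safe: opt keeps its copy */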

INITIAL STEP SIZE
       For derivative-free local-optimization algorithms, the optimizer
       must somehow decide on some initial step size to perturb x by when
       it begins the optimization.  This step size should be big enough
       that the value of the objective changes significantly, but not too
       big if you want to find the local optimum nearest to x.  By default,
       NLopt chooses this initial step size heuristically from the bounds,
       tolerances, and other information, but this may not always be the
       best choice.

       You can modify the initial step size by calling:

           nlopt_result nlopt_set_initial_step(nlopt_opt opt,
                                               const double* dx);

       Here, dx is an array of length n containing the (nonzero) initial
       step size for each component of the design parameters x.  For
       convenience, if you want to set the step sizes in every direction to
       be the same value, you can instead call:

           nlopt_result nlopt_set_initial_step1(nlopt_opt opt,
                                                double dx);
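
       For example (a sketch, assuming n == 2; the step values are
       illustrative):

           double dx[2] = { 0.1, 1.0 };     /* per-component initial steps */
           nlopt_set_initial_step(opt, dx);
           /* or alternatively, one step size for all components:
              nlopt_set_initial_step1(opt, 0.5); */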

STOCHASTIC POPULATION
       Several of the stochastic search algorithms (e.g., CRS, MLSL, and
       ISRES) start by generating some initial "population" of random
       points x.  By default, this initial population size is chosen
       heuristically in some algorithm-specific way, but the initial
       population can be changed by calling:

           nlopt_result nlopt_set_population(nlopt_opt opt,
                                             unsigned pop);

       (A pop of zero implies that the heuristic default will be used.)
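
       For example (the value 200 is purely illustrative):

           nlopt_set_population(opt, 200);  /* pop = 0 restores the default */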

PSEUDORANDOM NUMBERS
       For stochastic optimization algorithms, we use pseudorandom numbers
       generated by the Mersenne Twister algorithm, based on code from
       Makoto Matsumoto.  By default, the seed for the random numbers is
       generated from the system time, so that they will be different each
       time you run the program.  If you want to use deterministic random
       numbers, you can set the seed by calling:

           void nlopt_srand(unsigned long seed);
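
       For example (any fixed seed works; 12345 is arbitrary):

           nlopt_srand(12345);  /* makes stochastic runs reproducible */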

       Some of the algorithms also support using low-discrepancy sequences
       (LDS), sometimes known as quasi-random numbers.  NLopt uses the
       Sobol LDS, which is implemented for up to 1111 dimensions.

AUTHORS
       Written by Steven G. Johnson.

       Copyright (c) 2007-2014 Massachusetts Institute of Technology.

SEE ALSO
       nlopt_minimize(3)



MIT                               2007-08-23                         NLOPT(3)