math::optimize(n)              Tcl Math Library              math::optimize(n)



______________________________________________________________________________


NAME

       math::optimize - Optimisation routines


SYNOPSIS

       package require Tcl  8.4

       package require math::optimize  ?1.0?

       ::math::optimize::minimum begin end func maxerr

       ::math::optimize::maximum begin end func maxerr

       ::math::optimize::min_bound_1d func begin end ?-relerror reltol?
       ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?

       ::math::optimize::min_unbound_1d func begin end ?-relerror reltol?
       ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?

       ::math::optimize::solveLinearProgram objective constraints

       ::math::optimize::linearProgramMaximum objective result

       ::math::optimize::nelderMead objective xVector ?-scale xScaleVector?
       ?-ftol epsilon? ?-maxiter count? ??-trace? flag?

_________________________________________________________________


DESCRIPTION

       This package implements several optimisation algorithms:

       ·      Minimize or maximize a function over a given interval

       ·      Solve a linear program (maximize a linear function subject to
              linear constraints)

       ·      Minimize a function of several variables given an initial
              guess for the location of the minimum.

       The package is fully implemented in Tcl. No particular attention has
       been paid to the accuracy of the calculations; instead, the
       algorithms are implemented in a straightforward manner.

       This document describes the procedures and explains their usage.


PROCEDURES

       This package defines the following public procedures:

       ::math::optimize::minimum begin end func maxerr
              Minimize the given (continuous) function by examining the
              values in the given interval. The procedure determines the
              values at both ends and in the centre of the interval and then
              constructs a new interval of 1/2 length that includes the
              minimum. No guarantee is made that the global minimum is
              found.

              The procedure returns the "x" value for which the function is
              minimal.

              This procedure has been deprecated - use min_bound_1d instead

              begin - Start of the interval

              end - End of the interval

              func - Name of the function to be minimized (a procedure
              taking one argument).

              maxerr - Maximum relative error (defaults to 1.0e-4)

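              For illustration, a minimal sketch of a call (the helper
              procedure parabola is a name invented for this example):

                 proc parabola { x } { expr {($x-1.0)*($x-1.0)} }
                 # the minimum of (x-1)^2 on [0,3] lies at x = 1
                 puts [::math::optimize::minimum 0.0 3.0 parabola 1.0e-4]
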
       ::math::optimize::maximum begin end func maxerr
              Maximize the given (continuous) function by examining the
              values in the given interval. The procedure determines the
              values at both ends and in the centre of the interval and then
              constructs a new interval of 1/2 length that includes the
              maximum. No guarantee is made that the global maximum is
              found.

              The procedure returns the "x" value for which the function is
              maximal.

              This procedure has been deprecated - use max_bound_1d instead

              begin - Start of the interval

              end - End of the interval

              func - Name of the function to be maximized (a procedure
              taking one argument).

              maxerr - Maximum relative error (defaults to 1.0e-4)

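              A similarly small sketch (sinfunc is again a name invented for
              this example; the maximum of sin(x) on [0,pi] lies at pi/2):

                 proc sinfunc { x } { expr {sin($x)} }
                 # expected result: an abscissa near 1.5708
                 puts [::math::optimize::maximum 0.0 3.1416 sinfunc 1.0e-4]
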
       ::math::optimize::min_bound_1d func begin end ?-relerror reltol?
       ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?
              Minimizes a function of one variable in the given interval.
              The procedure uses Brent's method of parabolic interpolation,
              protected by golden-section subdivisions if the interpolation
              is not converging. No guarantee is made that a global minimum
              is found. The function to evaluate, func, must be a single Tcl
              command; it will be evaluated with an abscissa appended as the
              last argument.

              begin and end are the two bounds of the interval in which the
              minimum is to be found. They need not be in increasing order.

              reltol, if specified, is the desired upper bound on the
              relative error of the result; default is 1.0e-7. The given
              value should never be smaller than the square root of the
              machine's floating point precision, or else convergence is not
              guaranteed. abstol, if specified, is the desired upper bound
              on the absolute error of the result; default is 1.0e-10.
              Caution must be used with small values of abstol to avoid
              overflow/underflow conditions; if the minimum is expected to
              lie near a small but non-zero abscissa, consider either
              shifting the function or changing its length scale.

              maxiter may be used to constrain the number of function
              evaluations to be performed; default is 100. If the command
              evaluates the function more than maxiter times, it returns an
              error to the caller.

              traceflag is a Boolean value. If true, it causes the command
              to print a message on the standard output giving the abscissa
              and ordinate at each function evaluation, together with an
              indication of what type of interpolation was chosen. Default
              is 0 (no trace).

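              A minimal sketch of a call (the helper procedure cosfunc is
              invented for this example; the exact format of the return
              value is not spelled out above, so it is simply printed):

                 proc cosfunc { x } { expr {cos($x)} }
                 # the minimum of cos(x) on [0,2*pi] lies at x = pi
                 puts [::math::optimize::min_bound_1d cosfunc 0.0 6.2832 \
                           -relerror 1.0e-6]
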
       ::math::optimize::min_unbound_1d func begin end ?-relerror reltol?
       ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?
              Minimizes a function of one variable over the entire real
              number line. The procedure uses parabolic extrapolation
              combined with golden-section dilatation to search for a region
              where a minimum exists, followed by Brent's method of
              parabolic interpolation, protected by golden-section
              subdivisions if the interpolation is not converging. No
              guarantee is made that a global minimum is found. The function
              to evaluate, func, must be a single Tcl command; it will be
              evaluated with an abscissa appended as the last argument.

              begin and end are two initial guesses at where the minimum may
              lie. begin is the starting point for the minimization, and the
              difference between end and begin is used as a hint at the
              characteristic length scale of the problem.

              reltol, if specified, is the desired upper bound on the
              relative error of the result; default is 1.0e-7. The given
              value should never be smaller than the square root of the
              machine's floating point precision, or else convergence is not
              guaranteed. abstol, if specified, is the desired upper bound
              on the absolute error of the result; default is 1.0e-10.
              Caution must be used with small values of abstol to avoid
              overflow/underflow conditions; if the minimum is expected to
              lie near a small but non-zero abscissa, consider either
              shifting the function or changing its length scale.

              maxiter may be used to constrain the number of function
              evaluations to be performed; default is 100. If the command
              evaluates the function more than maxiter times, it returns an
              error to the caller.

              traceflag is a Boolean value. If true, it causes the command
              to print a message on the standard output giving the abscissa
              and ordinate at each function evaluation, together with an
              indication of what type of interpolation was chosen. Default
              is 0 (no trace).

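              A minimal sketch of a call (the helper procedure quad is
              invented for this example):

                 proc quad { x } { expr {($x-3.0)*($x-3.0) + 1.0} }
                 # start near 0 with a characteristic scale of 1;
                 # the minimum lies at x = 3
                 puts [::math::optimize::min_unbound_1d quad 0.0 1.0]
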
       ::math::optimize::solveLinearProgram objective constraints
              Solve a linear program in standard form using a
              straightforward implementation of the Simplex algorithm. (In
              the explanation below the linear program has N constraints and
              M variables.)

              The procedure returns a list of M values, the values for which
              the objective function is maximal, or a single keyword if the
              linear program is infeasible or unbounded (either "unfeasible"
              or "unbounded").

              objective - The M coefficients of the objective function

              constraints - Matrix of coefficients plus maximum values that
              implement the linear constraints. It is expected to be a list
              of N lists of M+1 numbers each: M coefficients and the maximum
              value.

       ::math::optimize::linearProgramMaximum objective result
              Convenience function to return the maximum for the solution
              found by the solveLinearProgram procedure.

              objective - The M coefficients of the objective function

              result - The result as returned by solveLinearProgram

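              The following sketch shows how the two procedures can be
              combined; the problem data are invented for this illustration:

                 # maximize x + 2y subject to x + y <= 4 and y <= 3
                 set objective   { 1.0  2.0 }
                 set constraints { { 1.0  1.0  4.0 }
                                   { 0.0  1.0  3.0 } }
                 set solution [::math::optimize::solveLinearProgram \
                                  $objective $constraints]
                 if { $solution ne "unfeasible" && $solution ne "unbounded" } {
                     set max [::math::optimize::linearProgramMaximum \
                                  $objective $solution]
                     puts "Optimum at $solution with maximum $max"
                 }
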
       ::math::optimize::nelderMead objective xVector ?-scale xScaleVector?
       ?-ftol epsilon? ?-maxiter count? ??-trace? flag?
              Minimizes, in unconstrained fashion, a function of several
              variables over all of space. The function to evaluate,
              objective, must be a single Tcl command. To it will be
              appended as many elements as appear in the initial guess at
              the location of the minimum, passed in as a Tcl list, xVector.

              xScaleVector is an initial guess at the problem scale; the
              first function evaluations will be made by varying the
              co-ordinates in xVector by the amounts in xScaleVector. If
              xScaleVector is not supplied, the co-ordinates will be varied
              by a factor of 1.0001 (if the co-ordinate is non-zero) or by a
              constant 0.0001 (if the co-ordinate is zero).

              epsilon is the desired relative error in the value of the
              function evaluated at the minimum. The default is 1.0e-7,
              which usually gives three significant digits of accuracy in
              the values of the x's.

              count is a limit on the number of trips through the main loop
              of the optimizer. The number of function evaluations may be
              several times this number. If the optimizer fails to find a
              minimum to within epsilon in count iterations, it returns its
              current best guess and an error status. Default is to allow
              500 iterations.

              flag is a flag that, if true, causes a line to be written to
              the standard output for each evaluation of the objective
              function, giving the arguments presented to the function and
              the value returned. Default is false.

              The nelderMead procedure returns a list of alternating
              keywords and values suitable for use with array set. The
              meaning of the keywords is:

              x is the approximate location of the minimum.

              y is the value of the function at x.

              yvec is a vector of the best N+1 function values achieved,
              where N is the dimension of x.

              vertices is a list of vectors giving the function arguments
              corresponding to the values in yvec.

              nIter is the number of iterations performed before convergence
              or failure.

              status is 'ok' if the operation succeeded, or
              'too-many-iterations' if the maximum iteration count was
              exceeded.

              nelderMead minimizes the given function using the downhill
              simplex method of Nelder and Mead. This method is quite slow -
              much faster methods for minimization are known - but has the
              advantage of being extremely robust in the face of problems
              where the minimum lies in a valley of complex topology.

              nelderMead can occasionally find itself "stuck" at a point
              where it can make no further progress; it is recommended that
              the caller run it at least a second time, passing as the
              initial guess the result found by the previous call. The
              second run is usually very fast.

              nelderMead can be used in some cases for constrained
              optimization. To do this, add a large value to the objective
              function if the parameters are outside the feasible region. To
              work effectively in this mode, nelderMead requires that the
              initial guess be feasible and usually requires that the
              feasible region be convex.

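              A minimal sketch of a typical call (the two-variable helper
              procedure quadratic2d is invented for this example):

                 # f(x,y) = (x-1)^2 + (y-2)^2, minimal at (1,2)
                 proc quadratic2d { x y } {
                     expr {($x-1.0)*($x-1.0) + ($y-2.0)*($y-2.0)}
                 }
                 array set result [::math::optimize::nelderMead quadratic2d \
                                       {0.0 0.0} -ftol 1.0e-8 -maxiter 500]
                 if { $result(status) eq "ok" } {
                     puts "minimum $result(y) at $result(x)"
                     puts "($result(nIter) iterations)"
                 }
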

NOTES

       Several of the above procedures take the names of procedures as
       arguments. To avoid problems with the visibility of these procedures,
       the fully-qualified name of these procedures is determined inside the
       optimize routines. For the user this has only one consequence: the
       named procedure must be visible in the calling procedure. For
       instance:

           namespace eval ::mySpace {
              namespace export calcfunc
              proc calcfunc { x } { return $x }
           }
           #
           # Use a fully-qualified name
           #
           namespace eval ::myCalc {
              puts [::math::optimize::min_bound_1d \
                        ::mySpace::calcfunc $begin $end]
           }
           #
           # Import the name
           #
           namespace eval ::myCalc {
              namespace import ::mySpace::calcfunc
              puts [::math::optimize::min_bound_1d calcfunc $begin $end]
           }

       The simple procedures minimum and maximum have been deprecated: the
       alternatives are much more flexible and robust, and require fewer
       function evaluations.


EXAMPLES

       Let us take a few simple examples:

       Determine the maximum of f(x) = x^3 exp(-3x), on the interval (0,10):

       proc efunc { x } { expr {$x*$x*$x * exp(-3.0*$x)} }
       puts "Maximum at: [::math::optimize::max_bound_1d efunc 0.0 10.0]"

       The maximum allowed error determines the number of steps taken: with
       each step in the iteration the interval is reduced by a factor of
       1/2. Hence, a maximum error of 0.0001 is achieved in approximately 14
       steps, since (1/2)^14 (about 0.00006) is the first such factor below
       0.0001.

       An example of a linear program is:

       Optimise the expression 3x+2y, where:

          x >= 0 and y >= 0 (implicit constraints, part of the
                            definition of linear programs)

          x + y   <= 1      (constraints specific to the problem)
          2x + 5y <= 10

       This problem can be solved as follows:

          set solution [::math::optimize::solveLinearProgram \
                            { 3.0   2.0 } \
                            { { 1.0   1.0   1.0 }
                              { 2.0   5.0  10.0 } } ]

       Note that a constraint like:

          x + y >= 1

       can be turned into standard form using:

          -x - y <= -1

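       For instance, to maximise 3x+2y subject to x + y >= 1 and
       2x + 5y <= 10, the first constraint is negated before it is passed in
       (a sketch following the note above; the numbers are only
       illustrative):

          set constraints { { -1.0  -1.0  -1.0 }
                            {  2.0   5.0  10.0 } }
          set solution [::math::optimize::solveLinearProgram \
                            { 3.0  2.0 } $constraints]
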
       The theory of linear programming is the subject of many a textbook,
       and the Simplex algorithm implemented here is the best-known method
       for solving this type of problem, but it is not the only one.


BUGS, IDEAS, FEEDBACK

       This document, and the package it describes, will undoubtedly contain
       bugs and other problems. Please report such in the category math ::
       optimize of the Tcllib SF Trackers
       [http://sourceforge.net/tracker/?group_id=12883]. Please also report
       any ideas for enhancements you may have for either package and/or
       documentation.


KEYWORDS

       linear program, math, maximum, minimum, optimization

COPYRIGHT
       Copyright (c) 2004 Arjen Markus <arjenmarkus@users.sourceforge.net>
       Copyright (c) 2004,2005 Kevin B. Kenny <kennykb@users.sourceforge.net>



math                                  1.0                    math::optimize(n)