math::optimize(n)              Tcl Math Library              math::optimize(n)



______________________________________________________________________________


NAME

       math::optimize - Optimisation routines


SYNOPSIS

       package require Tcl  8.4

       package require math::optimize  ?1.0?

       ::math::optimize::minimize begin end func maxerr

       ::math::optimize::maximize begin end func maxerr

       ::math::optimize::min_bound_1d   func   begin  end  ?-relerror  reltol?
       ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?

       ::math::optimize::min_unbound_1d  func  begin  end  ?-relerror  reltol?
       ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?

       ::math::optimize::solveLinearProgram objective constraints

       ::math::optimize::linearProgramMaximum objective result

       ::math::optimize::nelderMead  objective  xVector  ?-scale xScaleVector?
       ?-ftol epsilon? ?-maxiter count? ??-trace? flag?

_________________________________________________________________


DESCRIPTION

       This package implements several optimisation algorithms:

       ·      Minimize or maximize a function over a given interval

       ·      Solve a linear program (maximize a linear  function  subject  to
              linear constraints)

       ·      Minimize  a function of several variables given an initial guess
              for the location of the minimum.

       The package is fully implemented in Tcl. No  particular  attention  has
       been  paid to the accuracy of the calculations. Instead, the algorithms
       have been implemented in a straightforward manner.

       This document describes the procedures and explains their usage.


PROCEDURES

       This package defines the following public procedures:

       ::math::optimize::minimize begin end func maxerr
              Minimize the given (continuous) function by examining the values
              in  the  given  interval. The procedure determines the values at
              both ends and in the centre of the interval and then  constructs
              a new interval of 1/2 length that includes the minimum. No guar‐
              antee is made that the global minimum is found.

              The procedure returns the "x" value for which  the  function  is
              minimal.

              This procedure has been deprecated - use min_bound_1d instead.

              begin - Start of the interval

              end - End of the interval

              func  - Name of the function to be minimized (a procedure taking
              one argument).

              maxerr - Maximum relative error (defaults to 1.0e-4)

       ::math::optimize::maximize begin end func maxerr
              Maximize the given (continuous) function by examining the values
              in  the  given  interval. The procedure determines the values at
              both ends and in the centre of the interval and then  constructs
              a new interval of 1/2 length that includes the maximum. No guar‐
              antee is made that the global maximum is found.

              The procedure returns the "x" value for which  the  function  is
              maximal.

              This procedure has been deprecated - use max_bound_1d instead.

              begin - Start of the interval

              end - End of the interval

              func  - Name of the function to be maximized (a procedure taking
              one argument).

              maxerr - Maximum relative error (defaults to 1.0e-4)

       ::math::optimize::min_bound_1d  func  begin  end   ?-relerror   reltol?
       ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?
              Minimizes a function of one variable in the given interval.  The
              procedure uses Brent's method of parabolic  interpolation,  pro‐
              tected  by  golden-section  subdivisions if the interpolation is
              not converging.  No guarantee is made that a global  minimum  is
              found.   The  function  to  evaluate, func, must be a single Tcl
              command; it will be evaluated with an abscissa appended  as  the
              last argument.

              begin and end are the two bounds of the interval in  which  the
              minimum is to be found.  They need not be in increasing order.

              reltol, if specified, is the desired upper bound on the relative
              error  of the result; default is 1.0e-7.  The given value should
              never be smaller than the square root of the machine's  floating
              point precision, or else convergence is not guaranteed.  abstol,
              if specified, is the desired upper bound on the  absolute  error
              of  the  result;  default is 1.0e-10.  Caution must be used with
              small values of abstol to avoid  overflow/underflow  conditions;
              if the minimum is expected to lie near a  small  but  non-zero
              abscissa, consider either shifting the function or changing its
              length scale.

              maxiter  may be used to constrain the number of function evalua‐
              tions to be performed; default is 100.  If the command evaluates
              the function more than maxiter times, it returns an error to the
              caller.

              traceflag is a Boolean value. If true, it causes the command  to
              print  a  message on the standard output giving the abscissa and
              ordinate at each function evaluation, together with  an  indica‐
              tion of what type of interpolation was chosen.  Default is 0 (no
              trace).

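              For instance, a sketch of a typical call (the function name
              parab is of course arbitrary):

```tcl
package require math::optimize

# f(x) = (x-1)^2 + 2, with its minimum at x = 1
proc parab { x } { expr { ($x - 1.0) * ($x - 1.0) + 2.0 } }

# Search the interval [0, 5]; the default tolerances apply
# unless -relerror or -abserror is given.
puts [::math::optimize::min_bound_1d parab 0.0 5.0]
```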
       ::math::optimize::min_unbound_1d  func  begin  end  ?-relerror  reltol?
       ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?
              Minimizes a function of one variable over the entire real number
              line.  The procedure uses parabolic extrapolation combined  with
              golden-section dilatation to search for a region where a minimum
              exists, followed by Brent's method of  parabolic  interpolation,
              protected by golden-section subdivisions if the interpolation is
              not converging.  No guarantee is made that a global  minimum  is
              found.   The  function  to  evaluate, func, must be a single Tcl
              command; it will be evaluated with an abscissa appended  as  the
              last argument.

              begin and end are two initial guesses at where the minimum  may
              lie.  begin is the starting point for the minimization, and the
              difference between end and begin is used as a hint at the char‐
              acteristic length scale of the problem.

              reltol, if specified, is the desired upper bound on the relative
              error  of the result; default is 1.0e-7.  The given value should
              never be smaller than the square root of the machine's  floating
              point precision, or else convergence is not guaranteed.  abstol,
              if specified, is the desired upper bound on the  absolute  error
              of  the  result;  default is 1.0e-10.  Caution must be used with
              small values of abstol to avoid  overflow/underflow  conditions;
              if the minimum is expected to lie near a  small  but  non-zero
              abscissa, consider either shifting the function or changing its
              length scale.

              maxiter  may be used to constrain the number of function evalua‐
              tions to be performed; default is 100.  If the command evaluates
              the function more than maxiter times, it returns an error to the
              caller.

              traceflag is a Boolean value. If true, it causes the command  to
              print  a  message on the standard output giving the abscissa and
              ordinate at each function evaluation, together with  an  indica‐
              tion of what type of interpolation was chosen.  Default is 0 (no
              trace).

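              As an illustration (a sketch with an arbitrary test function),
              the unbounded variant needs two starting guesses rather than an
              interval:

```tcl
package require math::optimize

# f(x) = (x-1)^2 + 2, with its minimum at x = 1
proc parab { x } { expr { ($x - 1.0) * ($x - 1.0) + 2.0 } }

# 0.0 is the starting point; the difference with the second
# guess, 0.5, hints at the length scale of the problem.
puts [::math::optimize::min_unbound_1d parab 0.0 0.5]
```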
       ::math::optimize::solveLinearProgram objective constraints
              Solve a linear program in standard form using a  straightforward
              implementation  of  the  Simplex  algorithm. (In the explanation
              below the linear program has N constraints and M variables.)

              The procedure returns a list of M values, the values  for  which
              the objective function is maximal, or a single  keyword  if  the
              linear program is not feasible or unbounded (either "unfeasible"
              or "unbounded").

              objective - The M coefficients of the objective function

              constraints  -  Matrix  of coefficients plus maximum values that
              implement the linear constraints. It is expected to be a list of
              N  lists  of  M+1  numbers  each: M coefficients and the maximum
              value.

       ::math::optimize::linearProgramMaximum objective result
              Convenience function to return  the  maximum  for  the  solution
              found by the solveLinearProgram procedure.

              objective - The M coefficients of the objective function

              result - The result as returned by solveLinearProgram

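              A sketch of how the two procedures combine (the objective and
              constraints are those of the example in the EXAMPLES section;
              the guard relies on the keywords documented above):

```tcl
package require math::optimize

set objective   { 3.0  2.0 }
set constraints { { 1.0  1.0   1.0 }
                  { 2.0  5.0  10.0 } }

set solution [::math::optimize::solveLinearProgram $objective $constraints]

# Guard against the "unfeasible" and "unbounded" keywords before
# asking for the maximum of the objective function.
if { $solution ne "unfeasible" && $solution ne "unbounded" } {
    puts "Maximum: [::math::optimize::linearProgramMaximum $objective $solution]"
}
```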
       ::math::optimize::nelderMead  objective  xVector  ?-scale xScaleVector?
       ?-ftol epsilon? ?-maxiter count? ??-trace? flag?
              Minimizes, in unconstrained fashion, a function of several vari‐
              ables over all of space.  The function to evaluate,  objective,
              must be a single Tcl command. To it will  be  appended  as  many
              elements  as  appear in the initial guess at the location of the
              minimum, passed in as a Tcl list, xVector.

              xScaleVector is an initial guess at the problem scale; the first
              function evaluations will be made by varying the co-ordinates in
              xVector by the amounts in xScaleVector.  If xScaleVector is  not
              supplied,  the co-ordinates will be varied by a factor of 1.0001
              (if the co-ordinate is non-zero) or by a constant 0.0001 (if the
              co-ordinate is zero).

              epsilon  is the desired relative error in the value of the func‐
              tion evaluated at the minimum. The default is 1.0e-7, which usu‐
              ally gives three significant digits of accuracy in the values of
              the x's.  count is a limit on the number of  trips  through  the
              main  loop of the optimizer.  The number of function evaluations
              may be several times this number.  If  the  optimizer  fails  to
              find a minimum to within epsilon in count iterations, it returns
              its current best guess and an error status. Default is to  allow
              500 iterations.

              flag is a flag that, if true, causes a line to be written to the
              standard output for each evaluation of the  objective  function,
              giving  the  arguments  presented  to the function and the value
              returned. Default is false.

              The nelderMead procedure returns a list of alternating  keywords
              and  values  suitable for use with array set. The meaning of the
              keywords is:

              x is the approximate location of the minimum.

              y is the value of the function at x.

              yvec is a vector of the best N+1 function values achieved, where
              N is the dimension of x.

              vertices is a list of vectors giving the function arguments cor‐
              responding to the values in yvec.

              nIter is the number of iterations required  to  achieve  conver‐
              gence or fail.

              status  is  'ok' if the operation succeeded, or 'too-many-itera‐
              tions' if the maximum iteration count was exceeded.

              nelderMead minimizes the given function using the downhill  sim‐
              plex  method  of  Nelder  and Mead.  This method is quite slow -
              much faster methods for minimization are known  -  but  has  the
              advantage  of  being  extremely  robust  in the face of problems
              where the minimum lies in a valley of complex topology.

              nelderMead can occasionally find itself "stuck" at a point where
              it  can  make  no  further  progress; it is recommended that the
              caller run it at least a second time,  passing  as  the  initial
              guess  the result found by the previous call.  The second run is
              usually very fast.

              nelderMead can be used in some cases for  constrained  optimiza‐
              tion.   To  do this, add a large value to the objective function
              if the parameters are outside  the  feasible  region.   To  work
              effectively  in  this mode, nelderMead requires that the initial
              guess be feasible and usually requires that the feasible  region
              be convex.
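
              A minimal sketch: minimizing a two-dimensional paraboloid whose
              minimum lies at (1, 2). The procedure name bowl is arbitrary;
              the result keys used are those documented above:

```tcl
package require math::optimize

# Paraboloid with its minimum at (1, 2); nelderMead appends the
# co-ordinates of each trial point as separate arguments.
proc bowl { x y } {
    expr { ($x - 1.0) * ($x - 1.0) + ($y - 2.0) * ($y - 2.0) }
}

# The return value is a keyword/value list suitable for array set.
array set result [::math::optimize::nelderMead bowl { 0.0 0.0 }]
puts "status: $result(status)"
puts "minimum near: $result(x), value: $result(y)"
```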

NOTES

       Several  of  the above procedures take the names of procedures as argu‐
       ments. To avoid problems with the visibility of these  procedures,  the
       fully-qualified name of these procedures is determined inside the opti‐
       mize routines. For the user this has only one  consequence:  the  named
       procedure must be visible in the calling procedure. For instance:

           namespace eval ::mySpace {
              namespace export calcfunc
              proc calcfunc { x } { return $x }
           }
           #
           # Use a fully-qualified name
           #
           namespace eval ::myCalc {
              puts [min_bound_1d ::mySpace::calcfunc $begin $end]
           }
           #
           # Import the name
           #
           namespace eval ::myCalc {
              namespace import ::mySpace::calcfunc
              puts [min_bound_1d calcfunc $begin $end]
           }

       The simple procedures minimize and maximize have been deprecated:  the
       alternatives are much more flexible and robust,  and  require  fewer
       function evaluations.


EXAMPLES

       Let us take a few simple examples:

       Determine the maximum of f(x) = x^3 exp(-3x), on the interval (0,10):

       proc efunc { x } { expr {$x*$x*$x * exp(-3.0*$x)} }
       puts "Maximum at: [::math::optimize::max_bound_1d efunc 0.0 10.0]"

       The  maximum  allowed  error determines the number of steps taken (with
       each step in the iteration the interval is reduced by a factor of 1/2).
       Hence, a maximum error of 0.0001 is achieved in approximately 14 steps.

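       The step count can be estimated directly (a small side calculation,
       not part of the package itself):

```tcl
# Each step halves the interval, so we need the smallest n with
# (1/2)^n <= 1.0e-4.
set n [expr { int( ceil( log(1.0e-4) / log(0.5) ) ) }]
puts $n   ;# 14
```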
       An example of a linear program is:

       Optimise the expression 3x+2y, where:

          x >= 0 and y >= 0 (implicit constraints, part of the
                            definition of linear programs)

          x + y   <= 1      (constraints specific to the problem)
          2x + 5y <= 10

       This problem can be solved as follows:

          set solution [::math::optimize::solveLinearProgram \
               { 3.0   2.0 } \
               { { 1.0   1.0   1.0 }
                 { 2.0   5.0  10.0 } } ]

       Note that a constraint like:

          x + y >= 1

       can be turned into standard form using:

          -x  -y <= -1

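       For instance, replacing the first constraint of the example above by
       x + y >= 1 (and keeping the same objective) would lead to:

```tcl
package require math::optimize

# The row {-1.0 -1.0 -1.0} encodes -x - y <= -1, i.e. x + y >= 1.
set solution [::math::optimize::solveLinearProgram \
     { 3.0   2.0 } \
     { { -1.0  -1.0  -1.0 }
       {  2.0   5.0  10.0 } } ]
```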
       The theory of linear programming is the subject of many a textbook, and
       the Simplex algorithm implemented here is the best-known method for
       solving this type of problem, but it is not the only one.


KEYWORDS

       linear program, math, maximum, minimum, optimization

       Copyright (c) 2004 Arjen Markus <arjenmarkus@users.sourceforge.net>
       Copyright (c) 2004,2005 Kevin B. Kenny <kennykb@users.sourceforge.net>




math                                  1.0                    math::optimize(n)