Math tools(3)                       QuantLib                       Math tools(3)
2
3
4
Math tools - Math facilities of the library include:
7
Implementations of pseudo-random number and low-discrepancy sequence
generators, contained in the ql/RandomNumbers directory.
11
The abstract class QuantLib::Solver1D provides the interface for one-
dimensional solvers which can find the zeroes of a given function.
15
A number of such solvers are contained in the ql/Solvers1D directory.
17
The implementation of the algorithms was inspired by 'Numerical Recipes
in C', 2nd edition, Press, Teukolsky, Vetterling, Flannery, Chapter 9.
20
Some work is needed to resolve the ambiguity in the definition of
root-finding accuracy: for some algorithms it is the accuracy on x, for
others it is the accuracy on f(x).
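As an illustration of such a solver and of the two accuracy conventions, here
is a minimal stand-alone bisection sketch in plain C++. It is not QuantLib's
actual Solver1D interface, and the function name is made up; it simply makes
both stopping criteria explicit by accepting an x-accuracy (width of the
bracketing interval) and an f(x)-accuracy.

```cpp
#include <cmath>
#include <functional>
#include <stdexcept>

// Illustrative bisection solver, NOT QuantLib's Solver1D interface.
// Stops either when the bracketing interval is narrower than xAccuracy,
// or when |f(midpoint)| falls below fAccuracy, whichever comes first.
double bisect(const std::function<double(double)>& f,
              double lo, double hi,
              double xAccuracy, double fAccuracy) {
    double flo = f(lo);
    if (flo * f(hi) > 0.0)
        throw std::runtime_error("root not bracketed");
    while (hi - lo > xAccuracy) {
        double mid = 0.5 * (lo + hi);
        double fmid = f(mid);
        if (std::fabs(fmid) < fAccuracy)
            return mid;                         // f(x)-accuracy reached
        if (flo * fmid <= 0.0) { hi = mid; }    // root lies in [lo, mid]
        else { lo = mid; flo = fmid; }          // root lies in [mid, hi]
    }
    return 0.5 * (lo + hi);                     // x-accuracy reached
}
```

For example, bisecting x^2 - 2 on [0, 2] with a tight x-accuracy converges to
the square root of 2; passing fAccuracy = 0 forces the x-accuracy criterion.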
24
The optimization framework (corresponding to the ql/Optimization
directory) implements some multi-dimensional minimization methods. The
function to be minimized must be derived from the
QuantLib::CostFunction base class (if the gradient is not implemented
analytically, it will be computed numerically).
31
The simplex method
33
34
This method, implemented in QuantLib::Simplex, is rather raw and requires
quite a lot of computing resources, but it has the advantage that it does not
need any evaluation of the cost function's gradient, and that it is quite
easily implemented. First, we must choose N+1 starting points, given here by a
starting point $ \mathbf{P}_{0} $ and N points such that
\[ \mathbf{P}_{i} = \mathbf{P}_{0} + \lambda \mathbf{e}_{i}, \] where
$ \lambda $ is the problem's characteristic length scale. These points form a
geometrical figure called a simplex. The principle of the downhill simplex
method is, at each iteration, to move the worst point (the one with the
highest cost function value) through the opposite face of the simplex to a
better point. When the simplex seems to be constrained in a valley, it is
contracted downhill, keeping the best point unchanged.
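The construction of the N+1 starting vertices described above can be sketched
as follows. This is a stand-alone illustration with a made-up helper name;
QuantLib::Simplex builds its vertices internally.

```cpp
#include <vector>

// Illustrative construction of the initial simplex: N+1 vertices built
// from a starting point P0 by stepping a length lambda along each
// coordinate axis, i.e. P_i = P_0 + lambda * e_i for i = 1..N.
std::vector<std::vector<double>> initialSimplex(
        const std::vector<double>& p0, double lambda) {
    std::vector<std::vector<double>> vertices(p0.size() + 1, p0);
    for (std::size_t i = 0; i < p0.size(); ++i)
        vertices[i + 1][i] += lambda;   // displace vertex i+1 along axis i
    return vertices;
}
```

In two dimensions, starting from (1, 2) with lambda = 0.5 this yields the
three vertices (1, 2), (1.5, 2) and (1, 2.5).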
46
The conjugate gradient method

This is a more sophisticated method, implemented in
QuantLib::ConjugateGradient. At each step, we minimize (using Armijo's line
search algorithm, implemented in QuantLib::ArmijoLineSearch) the function
along the direction
\[ \mathbf{d}_{i} = -\nabla f(\mathbf{x}_{i})
   + \frac{\|\nabla f(\mathbf{x}_{i})\|^{2}}
          {\|\nabla f(\mathbf{x}_{i-1})\|^{2}} \, \mathbf{d}_{i-1}, \]
\[ \mathbf{d}_{0} = -\nabla f(\mathbf{x}_{0}). \]
54
As we can see, this optimization method requires knowledge of the gradient of
the cost function. See QuantLib::ConjugateGradient.
57
58Version 0.8.1 29 Oct 2007 Math tools(3)