MatrixOps(3)          User Contributed Perl Documentation         MatrixOps(3)


NAME

       PDL::MatrixOps -- Some Useful Matrix Operations

SYNOPSIS

           $inv = $x->inv;

           $det = $x->det;

           ($lu,$perm,$par) = $x->lu_decomp;
           $y = lu_backsub($lu,$perm,$z); # solve $x x $y = $z
15

DESCRIPTION

       PDL::MatrixOps is PDL's built-in matrix manipulation code.  It
       contains utilities for many common matrix operations: inversion,
       determinant finding, eigenvalue/vector finding, singular value
       decomposition, etc.  PDL::MatrixOps routines are written in a mixture
       of Perl and C, so that they are reliably present even when there is
       no FORTRAN compiler or external library available (e.g. PDL::Slatec
       or any of the PDL::GSL family of modules).

       Matrix manipulation, particularly with large matrices, is a
       challenging field and no one algorithm is suitable in all cases.  The
       utilities here use general-purpose algorithms that work acceptably
       for many cases but might not scale well to very large or pathological
       (near-singular) matrices.

       Except as noted, the matrices are PDLs whose 0th dimension ranges
       over column and whose 1st dimension ranges over row.  The matrices
       appear correctly when printed.

       These routines should work OK with PDL::Matrix objects as well as
       with normal PDLs.


TIPS ON MATRIX OPERATIONS

       Like most computer languages, PDL addresses matrices in (column,row)
       order in most cases; this corresponds to (X,Y) coordinates in the
       matrix itself, counting rightwards and downwards from the upper left
       corner.  This means that if you print a PDL that contains a matrix,
       the matrix appears correctly on the screen, but if you index a matrix
       element, you use the indices in the reverse order that you would in a
       math textbook.  If you prefer your matrices indexed in (row, column)
       order, you can try using the PDL::Matrix object, which includes an
       implicit exchange of the first two dimensions but should be
       compatible with most of these matrix operations.  (TIMTOWTDI.)

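       For example, "at" takes (column, row) -- the reverse of textbook
       order:

           $m = pdl([[1,2],[3,4]]);  # prints as [[1 2] [3 4]]
           print $m->at(0,1);        # column 0, row 1: prints 3
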
       Matrices, row vectors, and column vectors can be multiplied with the
       'x' operator (which is, of course, broadcastable):

           $m3 = $m1 x $m2;
           $col_vec2 = $m1 x $col_vec1;
           $row_vec2 = $row_vec1 x $m1;
           $scalar = $row_vec x $col_vec;

       Because of the (column,row) addressing order, 1-D PDLs are treated as
       _row_ vectors; if you want a _column_ vector you must add a dummy
       dimension:

           $rowvec  = pdl(1,2);             # row vector
           $colvec  = $rowvec->slice('*1'); # 1x2 column vector
           $matrix  = pdl([[3,4],[6,2]]);   # 2x2 matrix
           $rowvec2 = $rowvec x $matrix;    # right-multiplication by matrix
           $colvec  = $matrix x $colvec;    # left-multiplication by matrix
           $m2      = $matrix x $rowvec;    # Throws an error

       Implicit broadcasting works correctly with most matrix operations,
       but you must be extra careful that you understand the dimensionality.
       In particular, matrix multiplication and other matrix ops need nx1
       PDLs as row vectors and 1xn PDLs as column vectors.  In most cases
       you must explicitly include the trailing 'x1' dimension in order to
       get the expected results when you broadcast over multiple row
       vectors, as in the sketch below.

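       A minimal sketch (the variable names are illustrative only):

           $m    = pdl([[0,-1],[1,0]]);  # 2x2 rotation matrix: (col,row)
           $vecs = sequence(2,100);      # 100 two-element vectors: (col,vec)
           $cols = $vecs->slice('*1');   # 1x2x100: 100 column vectors
           $rot  = $m x $cols;           # broadcasts to a 1x2x100 result
           $back = $rot->slice('(0)');   # collapse back to 2x100
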
       When broadcasting over matrices, it's very easy to get confused about
       which dimension goes where.  It is useful to include comments with
       every expression, explaining what you think each dimension means:

               $x = xvals(360)*3.14159/180;        # (angle)
               $rot = cat(cat(cos($x),sin($x)),    # rotmat: (col,row,angle)
                          cat(-sin($x),cos($x)));


ACKNOWLEDGEMENTS

       MatrixOps includes algorithms and pre-existing code from several
       origins.  In particular, "eigens_sym" is the work of Stephen Moshier,
       "svd" uses an SVD subroutine written by Bryant Marks, and "eigens"
       uses a subset of the Small Scientific Library by Kenneth Geisshirt.
       They are free software, distributable under the same terms as PDL
       itself.


NOTES

       This is intended as a general-purpose linear algebra package for
       small- to mid-sized matrices.  The algorithms may not scale well to
       large matrices (hundreds by hundreds) or to near-singular matrices.

       If there is something you want that is not here, please add and
       document it!


FUNCTIONS

   identity
         Signature: (n; [o]a(n,n))

       Return an identity matrix of the specified size.  If you hand in a
       scalar, its value is the size of the identity matrix; if you hand in
       a dimensioned PDL, its 0th dimension gives the size of the matrix (it
       becomes the first two dimensions of the output), with higher
       dimensions preserved.

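       For example, a minimal sketch:

         $i  = identity(3);            # 3x3 identity matrix
         $is = identity(zeroes(2,5));  # 2x2x5: five 2x2 identity matrices
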
   stretcher
         Signature: (a(n); [o]b(n,n))

         $mat = stretcher($eigenvalues);

       Return a diagonal matrix with the specified diagonal elements.

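       For example, a minimal sketch:

         $mat = stretcher(pdl(1,2,3));  # [[1 0 0] [0 2 0] [0 0 3]]
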
   inv
         Signature: (a(m,m); sv opt)

         $a1 = inv($a, {$opt});

       Invert a square matrix.

       You feed in an NxN matrix in $a, and get back its inverse (if it
       exists).  The code is inplace-aware, so you can get back the inverse
       in $a itself if you want -- though temporary storage is used either
       way.  You can cache the LU decomposition in an output option
       variable.

       "inv" uses "lu_decomp" by default; that is a numerically stable
       (pivoting) LU decomposition method.

       OPTIONS:

       •  s

          Boolean value indicating whether to tolerate singular matrices.
          If this is false, singular matrices cause "inv" to barf.  If it is
          true, then singular matrices cause "inv" to return undef.

       •  lu (I/O)

          This value contains a list ref with the LU decomposition,
          permutation, and parity values for $a.  If you do not mention the
          key, or if the value is undef, then "inv" calls "lu_decomp".  If
          the key exists with an undef value, then the output of "lu_decomp"
          is stashed here (unless the matrix is singular).  If the value
          exists, then it is assumed to hold the LU decomposition.

       •  det (Output)

          If this key exists, then the determinant of $a gets stored here,
          whether or not the matrix is singular.

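       For example, a minimal sketch of option usage (the hash values are
       filled in as described above):

         my %opt = (s => 1, lu => undef, det => undef);
         $a1 = inv($a, \%opt);  # undef if $a is singular
         # $opt{lu} now holds the LU decomposition, $opt{det} the determinant
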
   det
         Signature: (a(m,m); sv opt)

         $det = det($a,{opt});

       Determinant of a square matrix using LU decomposition (for large
       matrices).

       You feed in a square matrix, you get back the determinant.  Some
       options exist that allow you to cache the LU decomposition of the
       matrix (note that the LU decomposition is invalid if the determinant
       is zero!).  The LU decomposition is cacheable, in case you want to
       re-use it.  This method of determinant finding is more rapid than
       recursive descent on large matrices, and if you reuse the LU
       decomposition it's essentially free.

       OPTIONS:

       •  lu (I/O)

          Provides a cache for the LU decomposition of the matrix.  If you
          provide the key but leave the value undefined, then the LU
          decomposition goes in here; if you put an LU decomposition here,
          it will be used and the matrix will not be decomposed again.

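       For example, a minimal sketch of caching the LU decomposition:

         my %opt = (lu => undef);
         $det  = det($a, \%opt);  # decomposes $a, stashes the LU in $opt{lu}
         $det2 = det($a, \%opt);  # re-uses the cached decomposition
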
   determinant
         Signature: (a(m,m))

         $det = determinant($x);

       Determinant of a square matrix, using recursive descent
       (broadcastable).

       This is the traditional, robust recursive determinant method taught
       in most linear algebra courses.  It scales like "O(n!)" (and hence is
       pitifully slow for large matrices) but is very robust because no
       division is involved (hence no division-by-zero errors for singular
       matrices).  It's also broadcastable, so you can find the determinants
       of a large collection of matrices all at once if you want.

       Matrices up to 3x3 are handled by direct multiplication; larger
       matrices are handled by recursive descent to the 3x3 case.

       The LU-decomposition method "det" is faster in isolation for single
       matrices larger than about 4x4, and is much faster if you end up
       reusing the LU decomposition of $a (NOTE: check performance and
       broadcasting benchmarks with new code).

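       For example, a minimal sketch of the broadcastable form:

         $ms   = random(3,3,100);   # 100 random 3x3 matrices
         $dets = determinant($ms);  # 100 determinants, computed at once
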
   eigens_sym
         Signature: ([phys]a(m); [o,phys]ev(n,n); [o,phys]e(n))

       Eigenvalues and -vectors of a symmetric square matrix.  If passed an
       asymmetric matrix, the routine will warn and symmetrize it, by taking
       the average value.  That is, it will solve for 0.5*($a+$a->transpose).

       It's broadcastable, so if $a is 3x3x100, it's treated as 100 separate
       3x3 matrices, and both $ev and $e get extra dimensions accordingly.

       If called in scalar context it hands back only the eigenvalues.
       Ultimately, it should switch to a faster algorithm in this case (as
       discarding the eigenvectors is wasteful).

       The algorithm used is due to J. von Neumann, which was a rediscovery
       of Jacobi's Method
       <http://en.wikipedia.org/wiki/Jacobi_eigenvalue_algorithm> .

       The eigenvectors are returned in COLUMNS of the returned PDL.  That
       makes it slightly easier to access individual eigenvectors, since the
       0th dim of the output PDL runs across the eigenvectors and the 1st
       dim runs across their components.

           ($ev,$e) = eigens_sym $x;     # Make eigenvector matrix
           $vector = $ev->slice($n);     # Select nth eigenvector as a column-vector
           $vector = $ev->slice("($n)"); # Select nth eigenvector as a row-vector

           ($ev, $e) = eigens_sym($x); # e-vects & e-values
           $e = eigens_sym($x);        # just eigenvalues

       eigens_sym ignores the bad-value flag of the input ndarrays.  It will
       set the bad-value flag of all output ndarrays if the flag is set for
       any of the input ndarrays.

   eigens
         Signature: ([phys]a(m); [o,phys]ev(l,n,n); [o,phys]e(l,n))

       Real eigenvalues and -vectors of a real square matrix.

       (See also "eigens_sym", for eigenvalues and -vectors of a real,
       symmetric, square matrix.)

       The eigens function will attempt to compute the eigenvalues and
       eigenvectors of a square matrix with real components.  If the matrix
       is symmetric, the same underlying code as "eigens_sym" is used.  If
       asymmetric, the eigenvalues and eigenvectors are computed with
       algorithms from the sslib library.  If any imaginary components exist
       in the eigenvalues, the results are currently considered to be
       invalid, and such eigenvalues are returned as "NaN"s.  This is true
       for eigenvectors also.  That is, if there are imaginary components to
       any of the values in an eigenvector, the eigenvalue and corresponding
       eigenvectors are all set to "NaN".  Finally, if there are any
       repeated eigenvectors, they are replaced with all "NaN"s.

       Use of the eigens function on asymmetric matrices should be
       considered experimental!  For asymmetric matrices, nearly all
       observed matrices with real eigenvalues produce incorrect results,
       due to errors of the sslib algorithm.  If your asymmetric matrix
       returns all NaNs, do not assume that the values are complex.  Also,
       problems with memory access are known in this library.

       Not all square matrices are diagonalizable.  If you feed in a
       non-diagonalizable matrix, then one or more of the eigenvectors will
       be set to NaN, along with the corresponding eigenvalues.

       "eigens" is broadcastable, so you can solve 100 eigenproblems by
       feeding in a 3x3x100 array.  Both $ev and $e get extra dimensions
       accordingly.

       If called in scalar context "eigens" hands back only the eigenvalues.
       This is somewhat wasteful, as it calculates the eigenvectors anyway.

       The eigenvectors are returned in COLUMNS of the returned PDL (i.e.
       the 0th dimension).  That makes it slightly easier to access
       individual eigenvectors, since the 0th dim of the output PDL runs
       across the eigenvectors and the 1st dim runs across their components.

               ($ev,$e) = eigens $x;         # Make eigenvector matrix
               $vector = $ev->slice($n);     # Select nth eigenvector as a column-vector
               $vector = $ev->slice("($n)"); # Select nth eigenvector as a row-vector

       DEVEL NOTES:

       For now, there is no distinction between a complex eigenvalue and an
       invalid eigenvalue, although the underlying code generates complex
       numbers.  It might be useful to be able to return complex
       eigenvalues.

           ($ev, $e) = eigens($x); # e'vects & e'vals
           $e = eigens($x);        # just eigenvalues

       eigens ignores the bad-value flag of the input ndarrays.  It will set
       the bad-value flag of all output ndarrays if the flag is set for any
       of the input ndarrays.

   svd
         Signature: (a(n,m); [o]u(n,m); [o,phys]z(n); [o]v(n,n))

        ($u, $s, $v) = svd($x);

       Singular value decomposition of a matrix.

       "svd" is broadcastable.

       Given an m x n matrix $a that has m rows and n columns (m >= n),
       "svd" computes matrices $u and $v, and a vector of the singular
       values $s.  Like most implementations, "svd" computes what is
       commonly referred to as the "thin SVD" of $a, such that $u is m x n,
       $v is n x n, and there are <=n singular values.  As long as m >= n,
       the original matrix can be reconstructed as follows:

           ($u,$s,$v) = svd($x);
           $ess = zeroes($x->dim(0),$x->dim(0));
           $ess->slice("$_","$_").=$s->slice("$_") foreach (0..$x->dim(0)-1); #generic diagonal
           $a_copy = $u x $ess x $v->transpose;

       If m==n, $u and $v can be thought of as rotation matrices that
       convert from the original matrix's singular coordinates to final
       coordinates, and from original coordinates to singular coordinates,
       respectively, and $ess is a diagonal scaling matrix.

       If n>m, "svd" will barf.  This can be avoided by passing in the
       transpose of $a, and reconstructing the original matrix like so:

           ($u,$s,$v) = svd($x->transpose);
           $ess = zeroes($x->dim(1),$x->dim(1));
           $ess->slice($_,$_).=$s->slice($_) foreach (0..$x->dim(1)-1); #generic diagonal
           $x_copy = $v x $ess x $u->transpose;

       EXAMPLE

       The computing literature has loads of examples of how to use SVD.
       Here's a trivial example (used in PDL::Transform::map) of how to make
       a matrix less, er, singular, without changing the orientation of the
       ellipsoid of transformation:

           { my($r1,$s,$r2) = svd $x;
             $s++;             # fatten all singular values
             $r2 *= $s;        # implicit broadcasting for cheap mult.
             $x .= $r2 x $r1;  # a gets r2 x ess x r1
           }

       svd ignores the bad-value flag of the input ndarrays.  It will set
       the bad-value flag of all output ndarrays if the flag is set for any
       of the input ndarrays.

   lu_decomp
         Signature: (a(m,m); [o]lu(m,m); [o]perm(m); [o]parity)

       LU decompose a matrix, with row permutation.

         ($lu, $perm, $parity) = lu_decomp($x);

         $lu = lu_decomp($x, $perm, $par);  # $perm and $par are outputs!

         lu_decomp($x->inplace,$perm,$par); # Everything in place.

       "lu_decomp" returns an LU decomposition of a square matrix, using
       Crout's method with partial pivoting.  It's ported from Numerical
       Recipes.  The partial pivoting keeps it numerically stable but means
       a little more overhead from broadcasting.

       "lu_decomp" decomposes the input matrix into matrices L and U such
       that LU = A, L is lower triangular, and U is upper triangular.  By
       convention, the diagonal of L is all 1's.

       The single output matrix contains all the variable elements of both
       the L and U matrices, stacked together.  Because the method uses
       pivoting (rearranging the lower part of the matrix for better
       numerical stability), you have to permute input vectors before
       applying the L and U matrices.  The permutation is returned either in
       the second argument or, in list context, as the second element of the
       list.  You need the permutation for the output to make any sense, so
       be sure to get it one way or the other.

       LU decomposition is the answer to a lot of matrix questions,
       including inversion and determinant-finding (see the sketch below),
       and "lu_decomp" is used by "inv".

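       For example, a minimal sketch of determinant-finding: the determinant
       is the product of the diagonal of the combined LU matrix, times the
       permutation parity:

         ($lu, $perm, $par) = lu_decomp($x);
         $det = $lu->diagonal(0,1)->prodover * $par;  # same as det($x)
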
       If you pass in $perm and $parity, they either must be predeclared
       PDLs of the correct size ($perm is an n-vector, $parity is a scalar)
       or scalars.

       If the matrix is singular, then the LU decomposition might not be
       defined; in those cases, "lu_decomp" silently returns undef.  Some
       singular matrices LU-decompose just fine, and those are handled OK
       but give a zero determinant (and hence can't be inverted).

       "lu_decomp" uses pivoting, which rearranges the values in the matrix
       for more numerical stability.  This makes it really good for large
       and even near-singular matrices.  There is a non-pivoting version
       "lu_decomp2" available which is from 5 to 60 percent faster for
       typical problems at the expense of failing to compute a result in
       some cases.

       Now that "lu_decomp" is broadcasted, it is the recommended LU
       decomposition routine.  It no longer falls back to "lu_decomp2".

       "lu_decomp" is ported from Numerical Recipes to PDL.  It should
       probably be implemented in C.

   lu_decomp2
         Signature: (a(m,m); [o]lu(m,m))

       LU decompose a matrix, with no row permutation.

         ($lu, $perm, $parity) = lu_decomp2($x);

         $lu = lu_decomp2($x,$perm,$parity);   # or
         $lu = lu_decomp2($x);                 # $perm and $parity are optional

         lu_decomp2($x->inplace,$perm,$parity); # or
         lu_decomp2($x->inplace);               # $perm and $parity are optional

       "lu_decomp2" works just like "lu_decomp", but it does no pivoting at
       all.  For compatibility with "lu_decomp", it will give you a
       permutation list and a parity scalar if you ask for them -- but they
       are always trivial.

       Because "lu_decomp2" does not pivot, it is numerically unstable --
       that means it is less precise than "lu_decomp", particularly for
       large or near-singular matrices.  There are also specific types of
       non-singular matrices that confuse it (e.g.
       ([0,-1,0],[1,0,0],[0,0,1]), which is a 90 degree rotation matrix but
       which confuses "lu_decomp2").

       On the other hand, if you want to rapidly invert a few hundred
       thousand small matrices and don't mind missing one or two, it could
       be the ticket.  It can be up to 60% faster at the expense of possible
       failure of the decomposition for some of the input matrices.

       The output is a single matrix that contains the LU decomposition of
       $a; you can even do it in-place, thereby destroying $a, if you want.
       See "lu_decomp" for more information about LU decomposition.

       "lu_decomp2" is ported from Numerical Recipes into PDL.

   lu_backsub
         Signature: (lu(m,m); perm(m); b(m))

       Solve A x = B for matrix A, by back substitution into A's LU
       decomposition.

         ($lu,$perm,$par) = lu_decomp($A);

         $x = lu_backsub($lu,$perm,$par,$B);  # or
         $x = lu_backsub($lu,$perm,$B);       # $par is not required for lu_backsub

         lu_backsub($lu,$perm,$B->inplace); # modify $B in-place

         $x = lu_backsub(lu_decomp($A),$B); # (ignores parity value from lu_decomp)

         # starting from square matrix A and columns matrix B, with mathematically
         # correct dimensions
         $A = identity(4) + ones(4, 4);
         $A->slice('2,0') .= 0; # break symmetry to see if need transpose
         $B = sequence(2, 4);
         # all these functions take B as rows, interpret as though notional columns
         # mathematically confusing but can't change as back-compat and also
         # familiar to Fortran users, so just transpose inputs and outputs

         # using lu_backsub
         ($lu,$perm,$par) = lu_decomp($A);
         $x = lu_backsub($lu,$perm,$par, $B->transpose)->transpose;

         # or with Slatec LINPACK
         use PDL::Slatec;
         gefa($lu=$A->copy, $ipiv=null, $info=null);
         # 1 = do transpose because Fortran's idea of rows vs columns
         gesl($lu, $ipiv, $x=$B->transpose->copy, 1);
         $x = $x->inplace->transpose;

         # or with LAPACK
         use PDL::LinearAlgebra::Real;
         getrf($lu=$A->copy, $ipiv=null, $info=null);
         getrs($lu, 1, $x=$B->transpose->copy, $ipiv, $info=null); # again, need transpose
         $x=$x->inplace->transpose;

         # or with GSL
         use PDL::GSL::LINALG;
         LU_decomp(my $lu=$A->copy, my $p=null, my $signum=null);
         # $B and $x, first dim is because GSL treats as vector, higher dims broadcast
         # so we transpose in and back out
         LU_solve($lu, $p, $B->transpose, my $x=null);
         $x=$x->inplace->transpose;

         # proof of the pudding is in the eating:
         print $A x $x;

       Given the LU decomposition of a square matrix (from "lu_decomp"),
       "lu_backsub" does back substitution into the matrix to solve
       "a x = b" for given vector "b".  It is separated from the "lu_decomp"
       method so that you can call the cheap "lu_backsub" multiple times and
       not have to do the expensive LU decomposition more than once.

       "lu_backsub" acts on single vectors and broadcasts in the usual way,
       which means that it treats $B as the transpose of the input.  If you
       want to process a matrix, you must hand in the transpose of the
       matrix, and then transpose the output when you get it back.  That is
       because PDLs are indexed by (col,row), and matrices are (row,column)
       by convention, so a 1-D PDL corresponds to a row vector, not a column
       vector.

       (If $lu is dense and you have more than a few points to solve for, it
       is probably cheaper to find "a^-1" with "inv", and just multiply
       "x = a^-1 b".)  In fact, "inv" works by calling "lu_backsub" with the
       identity matrix.

       "lu_backsub" is ported from section 2.3 of Numerical Recipes.  It is
       written in PDL but should probably be implemented in C.

   simq
         Signature: ([phys]a(n,n); [phys]b(n); [o,phys]x(n); int [o,phys]ips(n); int flag)

       Solution of simultaneous linear equations, "a x = b".

       $a is an "n x n" matrix (i.e., a vector of length "n*n"), stored
       row-wise: that is, "a(i,j) = a[ij]", where "ij = i*n + j".

       While this is the transpose of the normal column-wise storage, this
       corresponds to normal PDL usage.  The contents of matrix a may be
       altered (but may be required for subsequent calls with flag = -1).

       $b, $x, $ips are vectors of length "n".

       Set "flag=0" to solve.  Set "flag=-1" to do a new back substitution
       for a different $b vector using the same matrix a previously reduced
       when "flag=0" (the $ips vector generated in the previous solution is
       also required); see the sketch below.

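       For example, a minimal sketch (output arguments are passed as "null"
       so they are created for us; the second call re-uses the reduced
       matrix and $ips from the first):

         $a  = pdl([[2,1],[1,3]]);  # 2x2 system, row-wise storage
         $ra = $a->copy;            # simq reduces this matrix in place
         simq($ra, pdl(5,10), $x = null, $ips = null, 0);  # $x is [1 3]
         simq($ra, pdl(1,1),  $x2 = null, $ips, -1);       # back-sub only
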
       See also "lu_backsub", which does the same thing with a slightly less
       opaque interface.

       simq ignores the bad-value flag of the input ndarrays.  It will set
       the bad-value flag of all output ndarrays if the flag is set for any
       of the input ndarrays.

   squaretotri
         Signature: (a(n,n); b(m))

       Convert a symmetric square matrix to triangular vector storage.

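       For example, a minimal sketch (for an n x n input, the output vector
       holds the n*(n+1)/2 elements of one triangle of the matrix):

         $tri = squaretotri(identity(3));  # 6-element vector
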
       squaretotri does not process bad values.  It will set the bad-value
       flag of all output ndarrays if the flag is set for any of the input
       ndarrays.


AUTHOR

       Copyright (C) 2002 Craig DeForest (deforest@boulder.swri.edu), R.J.R.
       Williams (rjrw@ast.leeds.ac.uk), Karl Glazebrook
       (kgb@aaoepp.aao.gov.au).  There is no warranty.  You are allowed to
       redistribute and/or modify this work under the same conditions as PDL
       itself.  If this file is separated from the PDL distribution, then
       the PDL copyright notice should be included in this file.



perl v5.34.0                      2022-02-28                      MatrixOps(3)