MatrixOps(3)          User Contributed Perl Documentation         MatrixOps(3)

NAME

       PDL::MatrixOps -- Some Useful Matrix Operations

SYNOPSIS

           $inv = $x->inv;

           $det = $x->det;

           ($lu,$perm,$par) = $x->lu_decomp;
           $y = lu_backsub($lu,$perm,$z); # solve $x x $y = $z
15

DESCRIPTION

       PDL::MatrixOps is PDL's built-in matrix manipulation code.  It contains
       utilities for many common matrix operations: inversion, determinant
       finding, eigenvalue/vector finding, singular value decomposition, etc.
       PDL::MatrixOps routines are written in a mixture of Perl and C, so that
       they are reliably present even when there is no FORTRAN compiler or
       external library available (e.g.  PDL::Slatec or any of the PDL::GSL
       family of modules).

       Matrix manipulation, particularly with large matrices, is a challenging
       field and no one algorithm is suitable in all cases.  The utilities
       here use general-purpose algorithms that work acceptably for many cases
       but might not scale well to very large or pathological (near-singular)
       matrices.

       Except as noted, the matrices are PDLs whose 0th dimension ranges over
       column and whose 1st dimension ranges over row.  The matrices appear
       correctly when printed.

       These routines should work OK with PDL::Matrix objects as well as with
       normal PDLs.

TIPS ON MATRIX OPERATIONS

       Like most computer languages, PDL addresses matrices in (column,row)
       order in most cases; this corresponds to (X,Y) coordinates in the
       matrix itself, counting rightwards and downwards from the upper left
       corner.  This means that if you print a PDL that contains a matrix, the
       matrix appears correctly on the screen, but if you index a matrix
       element, you use the indices in the reverse order that you would in a
       math textbook.  If you prefer your matrices indexed in (row, column)
       order, you can try using the PDL::Matrix object, which includes an
       implicit exchange of the first two dimensions but should be compatible
       with most of these matrix operations.  (TIMTOWTDI.)
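
       For example, here is a minimal illustration of the index order:

           $m = pdl([[1,2],[3,4]]);
           print $m->at(0,1);  # prints 3 -- (column 0, row 1), textbook a_21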

       Matrices, row vectors, and column vectors can be multiplied with the
       'x' operator (which is, of course, threadable):

           $m3 = $m1 x $m2;
           $col_vec2 = $m1 x $col_vec1;
           $row_vec2 = $row_vec1 x $m1;
           $scalar = $row_vec x $col_vec;

       Because of the (column,row) addressing order, 1-D PDLs are treated as
       _row_ vectors; if you want a _column_ vector you must add a dummy
       dimension:

           $rowvec  = pdl(1,2);            # row vector
           $colvec  = $rowvec->(*1);       # 1x2 column vector
           $matrix  = pdl([[3,4],[6,2]]);  # 2x2 matrix
           $rowvec2 = $rowvec x $matrix;   # right-multiplication by matrix
           $colvec  = $matrix x $colvec;   # left-multiplication by matrix
           $m2      = $matrix x $rowvec;   # Throws an error

       Implicit threading works correctly with most matrix operations, but you
       must be extra careful that you understand the dimensionality.  In
       particular, matrix multiplication and other matrix ops need nx1 PDLs as
       row vectors and 1xn PDLs as column vectors.  In most cases you must
       explicitly include the trailing 'x1' dimension in order to get the
       expected results when you thread over multiple row vectors.
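
       For example, a minimal sketch of threading one matrix over several row
       vectors (note the explicit dummy dimension of size 1):

           $vecs = pdl([[1,2],[3,4],[5,6]]);       # (2,3): three 2-vectors
           $mat  = pdl([[0,1],[1,0]]);             # (2,2) matrix
           $rows = $vecs->dummy(1);                # (2,1,3): three row vectors
           $out  = ($rows x $mat)->slice(':,(0)'); # (2,3): three results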

       When threading over matrices, it's very easy to get confused about
       which dimension goes where.  It is useful to include comments with
       every expression, explaining what you think each dimension means:

               $x = xvals(360)*3.14159/180;        # (angle)
               $rot = cat(cat(cos($x),sin($x)),    # rotmat: (col,row,angle)
                          cat(-sin($x),cos($x)));

ACKNOWLEDGEMENTS

       MatrixOps includes algorithms and pre-existing code from several
       origins.  In particular, "eigens_sym" is the work of Stephen Moshier,
       "svd" uses an SVD subroutine written by Bryant Marks, and "eigens" uses
       a subset of the Small Scientific Library by Kenneth Geisshirt.  They
       are free software, distributable under the same terms as PDL itself.

NOTES

       This is intended as a general-purpose linear algebra package for small-
       to-mid sized matrices.  The algorithms may not scale well to large
       matrices (hundreds by hundreds) or to near-singular matrices.

       If there is something you want that is not here, please add and
       document it!

FUNCTIONS

   identity
         Signature: (n; [o]a(n,n))

       Return an identity matrix of the specified size.  If you hand in a
       scalar, its value is the size of the identity matrix; if you hand in a
       dimensioned PDL, the size of its 0th dimension is used for both
       dimensions of the (square) output matrix, with higher dimensions
       preserved (threaded over).
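
       For example, a minimal illustration:

           $i = identity(3);   # 3x3 identity matrix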

   stretcher
         Signature: (a(n); [o]b(n,n))

         $mat = stretcher($eigenvalues);

       Return a diagonal matrix with the specified diagonal elements.
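
       For example, a minimal illustration:

           print stretcher(pdl(1,2,3));
           # [
           #  [1 0 0]
           #  [0 2 0]
           #  [0 0 3]
           # ]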

   inv
         Signature: (a(m,m); sv opt)

         $a1 = inv($a, \%opt);

       Invert a square matrix.

       You feed in an NxN matrix in $a, and get back its inverse (if it
       exists).  The code is inplace-aware, so you can get back the inverse in
       $a itself if you want -- though temporary storage is used either way.
       You can cache the LU decomposition in an output option variable.

       "inv" uses "lu_decomp" by default; that is a numerically stable
       (pivoting) LU decomposition method.

       OPTIONS:

       •  s

          Boolean value indicating whether to tolerate a singular matrix
          silently.  If this is false (the default), singular matrices cause
          "inv" to barf.  If it is true, then singular matrices cause "inv"
          to return undef.

       •  lu (I/O)

          This value contains a list ref with the LU decomposition,
          permutation, and parity values for $a.  If you do not mention the
          key, or if the value is undef, then "inv" calls "lu_decomp".  If
          the key exists with an undef value, then the output of "lu_decomp"
          is stashed here (unless the matrix is singular).  If the value is
          defined, then it is assumed to hold the LU decomposition.

       •  det (Output)

          If this key exists, then the determinant of $a gets stored here,
          whether or not the matrix is singular.
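
       For example, a minimal sketch of using the option hash to cache results
       (option keys as described above):

           my %opt = (s => 1, lu => undef, det => undef);
           my $a1 = inv($a, \%opt);
           warn "matrix is singular" unless defined $a1;
           # $opt{lu} now holds [$lu, $perm, $par]; $opt{det} holds the
           # determinant of $a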

   det
         Signature: (a(m,m); sv opt)

         $det = det($a, \%opt);

       Determinant of a square matrix, using LU decomposition (for large
       matrices).

       You feed in a square matrix, you get back the determinant.  Some
       options exist that allow you to cache the LU decomposition of the
       matrix (note that the LU decomposition is invalid if the determinant is
       zero!).  The LU decomposition is cacheable, in case you want to re-use
       it.  This method of determinant finding is more rapid than recursive
       descent on large matrices, and if you reuse the LU decomposition it's
       essentially free.

       OPTIONS:

       •  lu (I/O)

          Provides a cache for the LU decomposition of the matrix.  If you
          provide the key but leave the value undefined, then the LU
          decomposition goes in here; if you put an LU decomposition here, it
          will be used and the matrix will not be decomposed again.
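
       For example, a minimal sketch of caching and then reusing the LU
       decomposition:

           my %opt = (lu => undef);
           my $det  = det($a, \%opt);  # decomposes $a, stashes LU in $opt{lu}
           my $det2 = det($a, \%opt);  # reuses the cached LU -- nearly free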

   determinant
         Signature: (a(m,m))

         $det = determinant($x);

       Determinant of a square matrix, using recursive descent (threadable).

       This is the traditional, robust recursive determinant method taught in
       most linear algebra courses.  It scales like "O(n!)" (and hence is
       pitifully slow for large matrices) but is very robust because no
       division is involved (hence no division-by-zero errors for singular
       matrices).  It's also threadable, so you can find the determinants of a
       large collection of matrices all at once if you want.
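
       For example, a minimal sketch of threading over a stack of matrices:

           $stack = random(3,3,100);      # 100 3x3 matrices
           $dets  = determinant($stack);  # 100 determinants, computed at once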

       Matrices up to 3x3 are handled by direct multiplication; larger
       matrices are handled by recursive descent to the 3x3 case.

       The LU-decomposition method "det" is faster in isolation for single
       matrices larger than about 4x4, and is much faster if you end up
       reusing the LU decomposition of $a (NOTE: check performance and
       threading benchmarks with new code).

   eigens_sym
         Signature: ([phys]a(m); [o,phys]ev(n,n); [o,phys]e(n))

       Eigenvalues and -vectors of a symmetric square matrix.  If passed an
       asymmetric matrix, the routine will warn and symmetrize it by averaging
       the matrix with its transpose.  That is, it will solve for
       0.5*($a+$a->transpose).

       It's threadable, so if $a is 3x3x100, it's treated as 100 separate 3x3
       matrices, and both $ev and $e get extra dimensions accordingly.

       If called in scalar context it hands back only the eigenvalues.
       Ultimately, it should switch to a faster algorithm in this case (as
       discarding the eigenvectors is wasteful).

       The algorithm used is due to J. von Neumann, and was a rediscovery of
       Jacobi's Method
       <http://en.wikipedia.org/wiki/Jacobi_eigenvalue_algorithm>.

       The eigenvectors are returned in COLUMNS of the returned PDL.  That
       makes it slightly easier to access individual eigenvectors, since the
       0th dim of the output PDL runs across the eigenvectors and the 1st dim
       runs across their components.

           ($ev,$e) = eigens_sym $x;  # Make eigenvector matrix
           $vector = $ev->($n);       # Select nth eigenvector as a column-vector
           $vector = $ev->(($n));     # Select nth eigenvector as a row-vector

           ($ev, $e) = eigens_sym($x); # e-vects & e-values
           $e = eigens_sym($x);        # just eigenvalues

       eigens_sym ignores the bad-value flag of the input ndarrays.  It will
       set the bad-value flag of all output ndarrays if the flag is set for
       any of the input ndarrays.

   eigens
         Signature: ([phys]a(m); [o,phys]ev(l,n,n); [o,phys]e(l,n))

       Real eigenvalues and -vectors of a real square matrix.

       (See also "eigens_sym", for eigenvalues and -vectors of a real,
       symmetric, square matrix).

       The eigens function will attempt to compute the eigenvalues and
       eigenvectors of a square matrix with real components.  If the matrix is
       symmetric, the same underlying code as "eigens_sym" is used.  If
       asymmetric, the eigenvalues and eigenvectors are computed with
       algorithms from the sslib library.  If any imaginary components exist
       in the eigenvalues, the results are currently considered to be invalid,
       and such eigenvalues are returned as "NaN"s.  This is true for
       eigenvectors also: if there are imaginary components to any of the
       values in an eigenvector, the eigenvalue and corresponding eigenvector
       are all set to "NaN".  Finally, if there are any repeated eigenvectors,
       they are replaced with all "NaN"s.

       Use of the eigens function on asymmetric matrices should be considered
       experimental!  For asymmetric matrices, nearly all observed matrices
       with real eigenvalues produce incorrect results, due to errors of the
       sslib algorithm.  If your asymmetric matrix returns all NaNs, do not
       assume that the values are complex.  Also, problems with memory access
       are known in this library.

       Not all square matrices are diagonalizable.  If you feed in a non-
       diagonalizable matrix, then one or more of the eigenvectors will be set
       to NaN, along with the corresponding eigenvalues.

       "eigens" is threadable, so you can solve 100 eigenproblems by feeding
       in a 3x3x100 array.  Both $ev and $e get extra dimensions accordingly.

       If called in scalar context "eigens" hands back only the eigenvalues.
       This is somewhat wasteful, as it calculates the eigenvectors anyway.

       The eigenvectors are returned in COLUMNS of the returned PDL (i.e. the
       0th dimension).  That makes it slightly easier to access individual
       eigenvectors, since the 0th dim of the output PDL runs across the
       eigenvectors and the 1st dim runs across their components.

               ($ev,$e) = eigens $x;  # Make eigenvector matrix
               $vector = $ev->($n);   # Select nth eigenvector as a column-vector
               $vector = $ev->(($n)); # Select nth eigenvector as a row-vector

       DEVEL NOTES:

       For now, there is no distinction between a complex eigenvalue and an
       invalid eigenvalue, although the underlying code generates complex
       numbers.  It might be useful to be able to return complex eigenvalues.

           ($ev, $e) = eigens($x); # e-vects & e-vals
           $e = eigens($x);        # just eigenvalues

       eigens ignores the bad-value flag of the input ndarrays.  It will set
       the bad-value flag of all output ndarrays if the flag is set for any of
       the input ndarrays.

   svd
         Signature: (a(n,m); [o]u(n,m); [o,phys]z(n); [o]v(n,n))

         ($u, $s, $v) = svd($x);

       Singular value decomposition of a matrix.

       "svd" is threadable.

       Given an m x n matrix $a that has m rows and n columns (m >= n), "svd"
       computes matrices $u and $v, and a vector of the singular values $s.
       Like most implementations, "svd" computes what is commonly referred to
       as the "thin SVD" of $a, such that $u is m x n, $v is n x n, and there
       are at most n singular values.  As long as m >= n, the original matrix
       can be reconstructed as follows:

           ($u,$s,$v) = svd($x);
           $ess = zeroes($x->dim(0),$x->dim(0));
           $ess->slice("$_","$_") .= $s->slice("$_") foreach (0..$x->dim(0)-1); # generic diagonal
           $a_copy = $u x $ess x $v->transpose;
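
       Since the singular values come back as a vector, "stretcher" (above)
       offers a shorter way to build the diagonal matrix; a minimal sketch:

           ($u,$s,$v) = svd($x);
           $a_copy = $u x stretcher($s) x $v->transpose;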

       If m==n, $u and $v can be thought of as rotation matrices that convert
       from the original matrix's singular coordinates to final coordinates,
       and from original coordinates to singular coordinates, respectively,
       and $ess is a diagonal scaling matrix.

       If n>m, "svd" will barf.  This can be avoided by passing in the
       transpose of $a, and reconstructing the original matrix like so:

           ($u,$s,$v) = svd($x->transpose);
           $ess = zeroes($x->dim(1),$x->dim(1));
           $ess->slice("$_","$_") .= $s->slice("$_") foreach (0..$x->dim(1)-1); # generic diagonal
           $x_copy = $v x $ess x $u->transpose;

       EXAMPLE

       The computing literature has loads of examples of how to use SVD.
       Here's a trivial example (used in PDL::Transform::map) of how to make a
       matrix less, er, singular, without changing the orientation of the
       ellipsoid of transformation:

           { my($r1,$s,$r2) = svd $x;
             $s++;             # fatten all singular values
             $r2 *= $s;        # implicit threading for cheap mult.
             $x .= $r2 x $r1;  # $x gets r2 x ess x r1
           }

       svd ignores the bad-value flag of the input ndarrays.  It will set the
       bad-value flag of all output ndarrays if the flag is set for any of the
       input ndarrays.

   lu_decomp
         Signature: (a(m,m); [o]lu(m,m); [o]perm(m); [o]parity)

       LU decompose a matrix, with row permutation.

         ($lu, $perm, $parity) = lu_decomp($x);

         $lu = lu_decomp($x, $perm, $par);  # $perm and $par are outputs!

         lu_decomp($x->inplace,$perm,$par); # Everything in place.

       "lu_decomp" returns an LU decomposition of a square matrix, using
       Crout's method with partial pivoting.  It's ported from Numerical
       Recipes.  The partial pivoting keeps it numerically stable but adds a
       little overhead from threading.

       "lu_decomp" decomposes the input matrix into matrices L and U such that
       LU = A, L is a lower-triangular matrix, and U is an upper-triangular
       matrix.  By convention, the diagonal of L is all 1's.
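
       As a minimal sketch of the packing convention (the index masks here are
       for illustration only):

           ($lu, $perm, $par) = lu_decomp($x);
           $n = $lu->dim(0);
           ($c, $r) = (xvals($n,$n), yvals($n,$n));  # column and row indices
           $u = $lu * ($c >= $r);                # upper triangle, with diagonal
           $l = $lu * ($c < $r) + identity($n);  # lower triangle, unit diagonal
           # $l x $u reproduces $x, up to the row permutation recorded in $perm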

       The single output matrix contains all the variable elements of both the
       L and U matrices, stacked together.  Because the method uses pivoting
       (rearranging the lower part of the matrix for better numerical
       stability), you have to permute input vectors before applying the L and
       U matrices.  The permutation is returned either in the second argument
       or, in list context, as the second element of the list.  You need the
       permutation for the output to make any sense, so be sure to get it one
       way or the other.

       LU decomposition is the answer to a lot of matrix questions, including
       inversion and determinant-finding, and "lu_decomp" is used by "inv".

       If you pass in $perm and $parity, they either must be predeclared PDLs
       of the correct size ($perm is an n-vector, $parity is a scalar) or
       scalars.

       If the matrix is singular, then the LU decomposition might not be
       defined; in those cases, "lu_decomp" silently returns undef.  Some
       singular matrices LU-decompose just fine, and those are handled OK but
       give a zero determinant (and hence can't be inverted).

       "lu_decomp" uses pivoting, which rearranges the values in the matrix
       for more numerical stability.  This makes it really good for large and
       even near-singular matrices.  There is a non-pivoting version
       "lu_decomp2" available which is from 5 to 60 percent faster for typical
       problems at the expense of failing to compute a result in some cases.

       Now that "lu_decomp" is threaded, it is the recommended LU
       decomposition routine.  It no longer falls back to "lu_decomp2".

       "lu_decomp" is ported from Numerical Recipes to PDL.  It should
       probably be implemented in C.

   lu_decomp2
         Signature: (a(m,m); [o]lu(m,m))

       LU decompose a matrix, with no row permutation.

         ($lu, $perm, $parity) = lu_decomp2($x);

         $lu = lu_decomp2($x,$perm,$parity);    # or
         $lu = lu_decomp2($x);                  # $perm and $parity are optional

         lu_decomp2($x->inplace,$perm,$parity); # or
         lu_decomp2($x->inplace);               # $perm and $parity are optional

       "lu_decomp2" works just like "lu_decomp", but it does no pivoting at
       all.  For compatibility with "lu_decomp", it will give you a
       permutation list and a parity scalar if you ask for them -- but they
       are always trivial.

       Because "lu_decomp2" does not pivot, it is numerically unstable -- that
       means it is less precise than "lu_decomp", particularly for large or
       near-singular matrices.  There are also specific types of non-singular
       matrices that confuse it (e.g. ([0,-1,0],[1,0,0],[0,0,1]), which is a
       90 degree rotation matrix but which confuses "lu_decomp2").

       On the other hand, if you want to rapidly invert a few hundred thousand
       small matrices and don't mind missing one or two, it could be the
       ticket.  It can be up to 60% faster at the expense of possible failure
       of the decomposition for some of the input matrices.

       The output is a single matrix that contains the LU decomposition of $a;
       you can even do it in-place, thereby destroying $a, if you want.  See
       "lu_decomp" for more information about LU decomposition.

       "lu_decomp2" is ported from Numerical Recipes into PDL.

   lu_backsub
         Signature: (lu(m,m); perm(m); b(m))

       Solve A x = B for matrix A, by back substitution into A's LU
       decomposition.

         ($lu,$perm,$par) = lu_decomp($A);

         $x = lu_backsub($lu,$perm,$par,$B);  # or
         $x = lu_backsub($lu,$perm,$B);       # $par is not required for lu_backsub

         lu_backsub($lu,$perm,$B->inplace); # modify $B in-place

         $x = lu_backsub(lu_decomp($A),$B); # (ignores parity value from lu_decomp)

         # starting from square matrix A and columns matrix B, with mathematically
         # correct dimensions
         $A = identity(4) + ones(4, 4);
         $A->slice('2,0') .= 0; # break symmetry to check whether we need the transpose
         $B = sequence(2, 4);
         # all these functions take B as rows but interpret them as notional
         # columns; this is mathematically confusing but can't change (it is
         # both back-compatible and familiar to Fortran users), so just
         # transpose the inputs and outputs

         # using lu_backsub
         ($lu,$perm,$par) = lu_decomp($A);
         $x = lu_backsub($lu,$perm,$par, $B->transpose)->transpose;

         # or with Slatec LINPACK
         use PDL::Slatec;
         gefa($lu=$A->copy, $ipiv=null, $info=null);
         # 1 = do transpose because Fortran's idea of rows vs columns
         gesl($lu, $ipiv, $x=$B->transpose->copy, 1);
         $x = $x->inplace->transpose;

         # or with LAPACK
         use PDL::LinearAlgebra::Real;
         getrf($lu=$A->copy, $ipiv=null, $info=null);
         getrs($lu, 1, $x=$B->transpose->copy, $ipiv, $info=null); # again, need transpose
         $x = $x->inplace->transpose;

         # or with GSL
         use PDL::GSL::LINALG;
         LU_decomp(my $lu=$A->copy, my $p=null, my $signum=null);
         # for $B and $x the first dim is the vector dim (GSL treats each as a
         # vector) and higher dims thread, so we transpose in and back out
         LU_solve($lu, $p, $B->transpose, my $x=null);
         $x = $x->inplace->transpose;

         # proof of the pudding is in the eating:
         print $A x $x;

       Given the LU decomposition of a square matrix (from "lu_decomp"),
       "lu_backsub" does back substitution into the matrix to solve "a x = b"
       for given vector "b".  It is separated from the "lu_decomp" method so
       that you can call the cheap "lu_backsub" multiple times and not have to
       do the expensive LU decomposition more than once.

       "lu_backsub" acts on single vectors and threads in the usual way, which
       means that it treats $B as the transpose of the input.  If you want to
       process a matrix, you must hand in the transpose of the matrix, and
       then transpose the output when you get it back.  That is because PDLs
       are indexed by (col,row), and matrices are (row,column) by convention,
       so a 1-D PDL corresponds to a row vector, not a column vector.

       If $lu is dense and you have more than a few points to solve for, it is
       probably cheaper to find "a^-1" with "inv", and just multiply "x = a^-1
       b".  In fact, "inv" works by calling "lu_backsub" with the identity
       matrix.

       "lu_backsub" is ported from section 2.3 of Numerical Recipes.  It is
       written in PDL but should probably be implemented in C.

   simq
         Signature: ([phys]a(n,n); [phys]b(n); [o,phys]x(n); int [o,phys]ips(n); int flag)

       Solution of simultaneous linear equations, "a x = b".

       $a is an "n x n" matrix (i.e., a vector of length "n*n"), stored row-
       wise: that is, "a(i,j) = a[ij]", where "ij = i*n + j".

       While this is the transpose of the normal column-wise storage, this
       corresponds to normal PDL usage.  The contents of matrix a may be
       altered (but may be required for subsequent calls with flag = -1).

       $b, $x, $ips are vectors of length "n".

       Set "flag=0" to solve.  Set "flag=-1" to do a new back substitution for
       a different $b vector using the same a matrix previously reduced when
       "flag=0" (the $ips vector generated in the previous solution is also
       required).
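
       For example, a minimal sketch (assuming the output ndarrays are created
       and returned in the usual PDL::PP style):

           $a = pdl([[2,0],[0,4]]);
           $b = pdl(2,8);
           ($x, $ips) = simq($a->copy, $b, 0);  # $x is now [1 2]
           # reuse the reduced $a and $ips with flag=-1 for a new $b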

       See also "lu_backsub", which does the same thing with a slightly less
       opaque interface.

       simq ignores the bad-value flag of the input ndarrays.  It will set the
       bad-value flag of all output ndarrays if the flag is set for any of the
       input ndarrays.

   squaretotri
         Signature: (a(n,n); b(m))

       Convert a symmetric square matrix to triangular vector storage.
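
       For example, a minimal sketch (the packed vector of an n x n matrix has
       n*(n+1)/2 elements; the return-value calling style is an assumption):

           $sym = pdl([[1,2],[2,3]]);
           $tri = squaretotri($sym);  # 3-element packed triangle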

       squaretotri does not process bad values.  It will set the bad-value
       flag of all output ndarrays if the flag is set for any of the input
       ndarrays.

AUTHOR

       Copyright (C) 2002 Craig DeForest (deforest@boulder.swri.edu), R.J.R.
       Williams (rjrw@ast.leeds.ac.uk), Karl Glazebrook
       (kgb@aaoepp.aao.gov.au).  There is no warranty.  You are allowed to
       redistribute and/or modify this work under the same conditions as PDL
       itself.  If this file is separated from the PDL distribution, then the
       PDL copyright notice should be included in this file.

perl v5.34.0                      2021-08-16                      MatrixOps(3)