MatrixOps(3)          User Contributed Perl Documentation         MatrixOps(3)



NAME
       PDL::MatrixOps -- Some Useful Matrix Operations

SYNOPSIS
           $inv = $x->inv;

           $det = $x->det;

           ($lu,$perm,$par) = $x->lu_decomp;
           $y = lu_backsub($lu,$perm,$z); # solve $x x $y = $z

DESCRIPTION
       PDL::MatrixOps is PDL's built-in matrix manipulation code.  It
       contains utilities for many common matrix operations: inversion,
       determinant finding, eigenvalue/vector finding, singular value
       decomposition, etc.  PDL::MatrixOps routines are written in a mixture
       of Perl and C, so that they are reliably present even when there is
       no FORTRAN compiler or external library available (e.g. PDL::Slatec
       or any of the PDL::GSL family of modules).

       Matrix manipulation, particularly with large matrices, is a
       challenging field and no single algorithm is suitable in all cases.
       The utilities here use general-purpose algorithms that work
       acceptably for many cases but might not scale well to very large or
       pathological (near-singular) matrices.

       Except as noted, the matrices are PDLs whose 0th dimension ranges
       over column and whose 1st dimension ranges over row.  The matrices
       appear correctly when printed.

       These routines should work OK with PDL::Matrix objects as well as
       with normal PDLs.

TIPS ON MATRIX OPERATIONS
       Like most computer languages, PDL addresses matrices in (column,row)
       order in most cases; this corresponds to (X,Y) coordinates in the
       matrix itself, counting rightwards and downwards from the upper left
       corner.  This means that if you print a PDL that contains a matrix,
       the matrix appears correctly on the screen, but if you index a matrix
       element, you use the indices in the reverse order that you would use
       in a math textbook.  If you prefer your matrices indexed in
       (row, column) order, you can try using the PDL::Matrix object, which
       includes an implicit exchange of the first two dimensions but should
       be compatible with most of these matrix operations.  TIMTOWTDI.

       Matrices, row vectors, and column vectors can be multiplied with the
       'x' operator (which is, of course, threadable):

           $m3 = $m1 x $m2;
           $col_vec2 = $m1 x $col_vec1;
           $row_vec2 = $row_vec1 x $m1;
           $scalar = $row_vec x $col_vec;

       Because of the (column,row) addressing order, 1-D PDLs are treated as
       _row_ vectors; if you want a _column_ vector you must add a dummy
       dimension:

           $rowvec = pdl(1,2);            # row vector
           $colvec = $rowvec->(*1);       # 1x2 column vector
           $matrix = pdl([[3,4],[6,2]]);  # 2x2 matrix
           $rowvec2 = $rowvec x $matrix;  # right-multiplication by matrix
           $colvec  = $matrix x $colvec;  # left-multiplication by matrix
           $m2 = $matrix x $rowvec;       # Throws an error

       Implicit threading works correctly with most matrix operations, but
       you must be extra careful that you understand the dimensionality.  In
       particular, matrix multiplication and other matrix ops need nx1 PDLs
       as row vectors and 1xn PDLs as column vectors.  In most cases you
       must explicitly include the trailing 'x1' dimension in order to get
       the expected results when you thread over multiple row vectors, as in
       the sketch below.
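
       For example, here is a minimal sketch (the matrix and vectors are
       made up for illustration) of threading one 2x2 matrix over several
       1-D vectors at once by adding the extra dimension explicitly:

           $m    = pdl([[0,1],[-1,0]]);      # (col,row)   one 2x2 matrix
           $vecs = pdl([[1,0],[0,1],[1,1]]); # (col,vec)   three 1-D vectors
           $rows = $vecs->(:,*1);            # (col,1,vec) three row vectors
           $out  = ($rows x $m)->(:,(0));    # (col,vec)   three results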

       When threading over matrices, it's very easy to get confused about
       which dimension goes where.  It is useful to include comments with
       every expression, explaining what you think each dimension means:

           $x = xvals(360)*3.14159/180;        # (angle)
           $rot = cat(cat(cos($x),sin($x)),    # rotmat: (col,row,angle)
                      cat(-sin($x),cos($x)));

ACKNOWLEDGEMENTS
       MatrixOps includes algorithms and pre-existing code from several
       origins.  In particular, "eigens_sym" is the work of Stephen Moshier,
       "svd" uses an SVD subroutine written by Bryant Marks, and "eigens"
       uses a subset of the Small Scientific Library by Kenneth Geisshirt.
       All are free software, distributable under the same terms as PDL
       itself.

NOTES
       This is intended as a general-purpose linear algebra package for
       small-to-mid sized matrices.  The algorithms may not scale well to
       large matrices (hundreds by hundreds) or to near-singular matrices.

       If there is something you want that is not here, please add and
       document it!

FUNCTIONS
   identity
         Signature: (n; [o]a(n,n))

       Return an identity matrix of the specified size.  If you hand in a
       scalar, its value is the size of the identity matrix; if you hand in
       a dimensioned PDL, the 0th dimension is the size of the matrix.
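
       For example (a minimal usage sketch):

           $i = identity(3);          # 3x3 identity matrix
           $i = identity(zeroes(5)); # 5x5: size taken from the 0th dim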

   stretcher
         Signature: (a(n); [o]b(n,n))

           $mat = stretcher($eigenvalues);

       Return a diagonal matrix with the specified diagonal elements.
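
       For example (a minimal sketch):

           $mat = stretcher(pdl(1,2,3)); # 3x3 matrix with diagonal (1,2,3)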

   inv
         Signature: (a(m,m); sv opt)

           $a1 = inv($a, {$opt});

       Invert a square matrix.

       You feed in an NxN matrix in $a, and get back its inverse (if it
       exists).  The code is inplace-aware, so you can get back the inverse
       in $a itself if you want -- though temporary storage is used either
       way.  You can cache the LU decomposition in an output option
       variable, as in the sketch after the options below.

       "inv" uses "lu_decomp" by default; that is a numerically stable
       (pivoting) LU decomposition method.

       OPTIONS:

       · s

         Boolean value indicating whether to tolerate a singular matrix.  If
         this is false (the default), singular matrices cause "inv" to barf.
         If it is true, then singular matrices cause "inv" to return undef.

       · lu (I/O)

         This value contains a list ref with the LU decomposition,
         permutation, and parity values for $a.  If you do not mention the
         key, or if the value is undef, then "inv" calls "lu_decomp".  If
         the key exists with an undef value, then the output of "lu_decomp"
         is stashed here (unless the matrix is singular).  If the value
         exists, then it is assumed to hold the LU decomposition.

       · det (Output)

         If this key exists, then the determinant of $a gets stored here,
         whether or not the matrix is singular.
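
       For example, a sketch of inverting while caching the LU decomposition
       and determinant (the option hash is illustrative):

           my %opt = (s => 1, lu => undef, det => undef);
           $a1 = inv($a, \%opt);
           warn "matrix is singular" unless defined $a1;
           print "det(a) = $opt{det}\n";
           # $opt{lu} now holds the [$lu, $perm, $par] list for re-use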

   det
         Signature: (a(m,m); sv opt)

           $det = det($a,{opt});

       Determinant of a square matrix, using LU decomposition (for large
       matrices).

       You feed in a square matrix, you get back the determinant.  Some
       options exist that allow you to cache the LU decomposition of the
       matrix, in case you want to re-use it; note that the LU decomposition
       is invalid if the determinant is zero!  This method of determinant
       finding is more rapid than recursive descent on large matrices, and
       if you re-use the LU decomposition it's essentially free.

       OPTIONS:

       · lu (I/O)

         Provides a cache for the LU decomposition of the matrix.  If you
         provide the key but leave the value undefined, then the LU
         decomposition goes in here; if you put an LU decomposition here, it
         will be used and the matrix will not be decomposed again.
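
       For example (a sketch; the second call re-uses the cached
       decomposition):

           my %opt = (lu => undef);
           $det  = det($a, \%opt);  # decomposes $a and caches the result
           $det2 = det($a, \%opt);  # re-uses the cache: essentially free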

   determinant
         Signature: (a(m,m))

           $det = determinant($x);

       Determinant of a square matrix, using recursive descent (threadable).

       This is the traditional, robust recursive determinant method taught
       in most linear algebra courses.  It scales like O(n!) (and hence is
       pitifully slow for large matrices) but is very robust because no
       division is involved (hence no division-by-zero errors for singular
       matrices).  It's also threadable, so you can find the determinants of
       a large collection of matrices all at once if you want.

       Matrices up to 3x3 are handled by direct multiplication; larger
       matrices are handled by recursive descent to the 3x3 case.

       The LU-decomposition method "det" is faster in isolation for single
       matrices larger than about 4x4, and is much faster if you end up
       re-using the LU decomposition of $a (NOTE: check performance and
       threading benchmarks with new code).
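
       For example, a sketch of threading over a whole stack of matrices:

           $mats = random(3,3,100);     # 100 separate 3x3 matrices
           $dets = determinant($mats);  # 100 determinants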

   eigens_sym
         Signature: ([phys]a(m); [o,phys]ev(n,n); [o,phys]e(n))

       Eigenvalues and -vectors of a symmetric square matrix.  If passed an
       asymmetric matrix, the routine will warn and symmetrize it, by taking
       the average value.  That is, it will solve for 0.5*($a+$a->mv(0,1)).

       It's threadable, so if $a is 3x3x100, it's treated as 100 separate
       3x3 matrices, and both $ev and $e get extra dimensions accordingly.

       If called in scalar context it hands back only the eigenvalues.
       Ultimately, it should switch to a faster algorithm in this case (as
       discarding the eigenvectors is wasteful).

       The algorithm used is due to J. von Neumann, and was a rediscovery of
       Jacobi's Method
       <http://en.wikipedia.org/wiki/Jacobi_eigenvalue_algorithm>.

       The eigenvectors are returned in COLUMNS of the returned PDL.  That
       makes it slightly easier to access individual eigenvectors, since the
       0th dim of the output PDL runs across the eigenvectors and the 1st
       dim runs across their components.

           ($ev,$e) = eigens_sym $x;   # Make eigenvector matrix
           $vector = $ev->($n);        # Select nth eigenvector as a column-vector
           $vector = $ev->(($n));      # Select nth eigenvector as a row-vector

           ($ev, $e) = eigens_sym($x); # e-vects & e-values
           $e = eigens_sym($x);        # just eigenvalues

       eigens_sym ignores the bad-value flag of the input piddles.  It will
       set the bad-value flag of all output piddles if the flag is set for
       any of the input piddles.

   eigens
         Signature: ([phys]a(m); [o,phys]ev(l,n,n); [o,phys]e(l,n))

       Real eigenvalues and -vectors of a real square matrix.

       (See also "eigens_sym", for eigenvalues and -vectors of a real,
       symmetric, square matrix.)

       The eigens function will attempt to compute the eigenvalues and
       eigenvectors of a square matrix with real components.  If the matrix
       is symmetric, the same underlying code as "eigens_sym" is used.  If
       asymmetric, the eigenvalues and eigenvectors are computed with
       algorithms from the sslib library.  If any imaginary components exist
       in the eigenvalues, the results are currently considered to be
       invalid, and such eigenvalues are returned as "NaN"s.  This is true
       for eigenvectors also: if there are imaginary components to any of
       the values in an eigenvector, the eigenvalue and corresponding
       eigenvectors are all set to "NaN".  Finally, if there are any
       repeated eigenvectors, they are replaced with all "NaN"s.

       Use of the eigens function on asymmetric matrices should be
       considered experimental!  For asymmetric matrices, nearly all
       observed matrices with real eigenvalues produce incorrect results,
       due to errors of the sslib algorithm.  If your asymmetric matrix
       returns all NaNs, do not assume that the values are complex.  Also,
       problems with memory access are known in this library.

       Not all square matrices are diagonalizable.  If you feed in a
       non-diagonalizable matrix, then one or more of the eigenvectors will
       be set to NaN, along with the corresponding eigenvalues.

       "eigens" is threadable, so you can solve 100 eigenproblems by feeding
       in a 3x3x100 array.  Both $ev and $e get extra dimensions
       accordingly.

       If called in scalar context "eigens" hands back only the eigenvalues.
       This is somewhat wasteful, as it calculates the eigenvectors anyway.

       The eigenvectors are returned in COLUMNS of the returned PDL (i.e.
       the 0th dimension).  That makes it slightly easier to access
       individual eigenvectors, since the 0th dim of the output PDL runs
       across the eigenvectors and the 1st dim runs across their components.

           ($ev,$e) = eigens $x;  # Make eigenvector matrix
           $vector = $ev->($n);   # Select nth eigenvector as a column-vector
           $vector = $ev->(($n)); # Select nth eigenvector as a row-vector

       DEVEL NOTES:

       For now, there is no distinction between a complex eigenvalue and an
       invalid eigenvalue, although the underlying code generates complex
       numbers.  It might be useful to be able to return complex
       eigenvalues.

           ($ev, $e) = eigens($x); # e'vects & e'vals
           $e = eigens($x);        # just eigenvalues

       eigens ignores the bad-value flag of the input piddles.  It will set
       the bad-value flag of all output piddles if the flag is set for any
       of the input piddles.

   svd
         Signature: (a(n,m); [o]u(n,m); [o,phys]z(n); [o]v(n,n))

           ($u, $s, $v) = svd($x);

       Singular value decomposition of a matrix.

       "svd" is threadable.

       Given an m x n matrix $a that has m rows and n columns (m >= n),
       "svd" computes matrices $u and $v, and a vector of the singular
       values $s.  Like most implementations, "svd" computes what is
       commonly referred to as the "thin SVD" of $a, such that $u is m x n,
       $v is n x n, and there are <= n singular values.  As long as m >= n,
       the original matrix can be reconstructed as follows:

           ($u,$s,$v) = svd($x);
           $ess = zeroes($x->dim(0),$x->dim(0));
           $ess->slice("$_","$_") .= $s->slice("$_") foreach (0..$x->dim(0)-1); # generic diagonal
           $x_copy = $u x $ess x $v->transpose;

       If m==n, $u and $v can be thought of as rotation matrices that
       convert from the original matrix's singular coordinates to final
       coordinates, and from original coordinates to singular coordinates,
       respectively, and $ess is a diagonal scaling matrix.

       If n>m, "svd" will barf.  This can be avoided by passing in the
       transpose of $a, and reconstructing the original matrix like so:

           ($u,$s,$v) = svd($x->transpose);
           $ess = zeroes($x->dim(1),$x->dim(1));
           $ess->slice("$_","$_") .= $s->slice("$_") foreach (0..$x->dim(1)-1); # generic diagonal
           $x_copy = $v x $ess x $u->transpose;
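
       As an aside, the generic-diagonal loop above can be written more
       compactly with this module's own "stretcher" function (a sketch,
       equivalent for the same $u, $s, and $v):

           $x_copy = $v x stretcher($s) x $u->transpose;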

       EXAMPLE

       The computing literature has loads of examples of how to use SVD.
       Here's a trivial example (used in PDL::Transform::map) of how to make
       a matrix less, er, singular, without changing the orientation of the
       ellipsoid of transformation:

           { my($r1,$s,$r2) = svd $x;
             $s++;             # fatten all singular values
             $r2 *= $s;        # implicit threading for cheap mult.
             $x .= $r2 x $r1;  # $x gets r2 x ess x r1
           }

       svd ignores the bad-value flag of the input piddles.  It will set the
       bad-value flag of all output piddles if the flag is set for any of
       the input piddles.

   lu_decomp
         Signature: (a(m,m); [o]lu(m,m); [o]perm(m); [o]parity)

       LU decompose a matrix, with row permutation.

           ($lu, $perm, $parity) = lu_decomp($x);

           $lu = lu_decomp($x, $perm, $par);  # $perm and $par are outputs!

           lu_decomp($x->inplace,$perm,$par); # Everything in place.

       "lu_decomp" returns an LU decomposition of a square matrix, using
       Crout's method with partial pivoting.  It's ported from Numerical
       Recipes.  The partial pivoting keeps it numerically stable but means
       a little more overhead from threading.

       "lu_decomp" decomposes the input matrix into matrices L and U such
       that LU = A, L is lower triangular (subdiagonal), and U is upper
       triangular (superdiagonal).  By convention, the diagonal of L is all
       1's.

       The single output matrix contains all the variable elements of both
       the L and U matrices, stacked together.  Because the method uses
       pivoting (rearranging the lower part of the matrix for better
       numerical stability), you have to permute input vectors before
       applying the L and U matrices.  The permutation is returned either in
       the second argument or, in list context, as the second element of the
       list.  You need the permutation for the output to make any sense, so
       be sure to get it one way or the other.

       LU decomposition is the answer to a lot of matrix questions,
       including inversion and determinant-finding, and "lu_decomp" is used
       by "inv".

       If you pass in $perm and $parity, they either must be predeclared
       PDLs of the correct size ($perm is an n-vector, $parity is a scalar)
       or scalars.

       If the matrix is singular, then the LU decomposition might not be
       defined; in those cases, "lu_decomp" silently returns undef.  Some
       singular matrices LU-decompose just fine, and those are handled OK
       but give a zero determinant (and hence can't be inverted).

       "lu_decomp" uses pivoting, which rearranges the values in the matrix
       for more numerical stability.  This makes it really good for large
       and even near-singular matrices.  There is a non-pivoting version
       "lu_decomp2" available which is from 5 to 60 percent faster for
       typical problems at the expense of failing to compute a result in
       some cases.

       Now that "lu_decomp" is threaded, it is the recommended LU
       decomposition routine.  It no longer falls back to "lu_decomp2".

       "lu_decomp" is ported from Numerical Recipes to PDL.  It should
       probably be implemented in C.
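
       For example, a sketch of unpacking L and U from the single output
       matrix (relying on the unit-diagonal convention described above):

           ($lu,$perm,$par) = lu_decomp($a);
           $u = $lu * ($lu->yvals <= $lu->xvals); # upper triangle + diagonal
           $l = $lu * ($lu->yvals >  $lu->xvals); # strict lower triangle
           $l->diagonal(0,1) .= 1;                # restore L's implicit 1's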

   lu_decomp2
         Signature: (a(m,m); [o]lu(m,m))

       LU decompose a matrix, with no row permutation.

           ($lu, $perm, $parity) = lu_decomp2($x);

           $lu = lu_decomp2($x,$perm,$parity); # or
           $lu = lu_decomp2($x);               # $perm and $parity are optional

           lu_decomp2($x->inplace,$perm,$parity); # or
           lu_decomp2($x->inplace);               # $perm and $parity are optional

       "lu_decomp2" works just like "lu_decomp", but it does no pivoting at
       all.  For compatibility with "lu_decomp", it will give you a
       permutation list and a parity scalar if you ask for them -- but they
       are always trivial.

       Because "lu_decomp2" does not pivot, it is numerically unstable --
       that means it is less precise than "lu_decomp", particularly for
       large or near-singular matrices.  There are also specific types of
       non-singular matrices that confuse it (e.g.
       ([0,-1,0],[1,0,0],[0,0,1]), which is a 90 degree rotation matrix but
       which confuses "lu_decomp2").

       On the other hand, if you want to rapidly invert a few hundred
       thousand small matrices and don't mind missing one or two, it could
       be the ticket.  It can be up to 60% faster at the expense of possible
       failure of the decomposition for some of the input matrices.

       The output is a single matrix that contains the LU decomposition of
       $a; you can even do it in-place, thereby destroying $a, if you want.
       See "lu_decomp" for more information about LU decomposition.

       "lu_decomp2" is ported from Numerical Recipes into PDL.

   lu_backsub
         Signature: (lu(m,m); perm(m); b(m))

       Solve a x = b for matrix a, by back substitution into a's LU
       decomposition.

           ($lu,$perm,$par) = lu_decomp($x);

           $x = lu_backsub($lu,$perm,$par,$y); # or
           $x = lu_backsub($lu,$perm,$y);      # $par is not required for lu_backsub

           lu_backsub($lu,$perm,$y->inplace);  # modify $y in-place

           $x = lu_backsub(lu_decomp($x),$y);  # (ignores parity value from lu_decomp)

       Given the LU decomposition of a square matrix (from "lu_decomp"),
       "lu_backsub" does back substitution into the matrix to solve
       "a x = b" for given vector "b".  It is separated from the "lu_decomp"
       method so that you can call the cheap "lu_backsub" multiple times and
       not have to do the expensive LU decomposition more than once.

       "lu_backsub" acts on single vectors and threads in the usual way,
       which means that it treats $y as the transpose of the input.  If you
       want to process a matrix, you must hand in the transpose of the
       matrix, and then transpose the output when you get it back, as in the
       sketch below.  That is because PDLs are indexed by (col,row), and
       matrices are (row,column) by convention, so a 1-D PDL corresponds to
       a row vector, not a column vector.
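
       For example, a sketch of solving "a x = b" for a whole matrix $B of
       column vectors (the names are illustrative):

           ($lu,$perm,$par) = lu_decomp($a);
           $X = lu_backsub($lu,$perm,$par,$B->transpose)->transpose;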

       If $lu is dense and you have more than a few points to solve for, it
       is probably cheaper to find "a^-1" with "inv", and just multiply
       "x = a^-1 b".  In fact, "inv" works by calling "lu_backsub" with the
       identity matrix.

       "lu_backsub" is ported from section 2.3 of Numerical Recipes.  It is
       written in PDL but should probably be implemented in C.

   simq
         Signature: ([phys]a(n,n); [phys]b(n); [o,phys]x(n); int [o,phys]ips(n); int flag)

       Solution of simultaneous linear equations, "a x = b".

       $a is an "n x n" matrix (i.e., a vector of length "n*n"), stored
       row-wise: that is, "a(i,j) = a[ij]", where "ij = i*n + j".

       While this is the transpose of the normal column-wise storage, this
       corresponds to normal PDL usage.  The contents of matrix a may be
       altered (but may be required for subsequent calls with flag = -1).

       $b, $x, $ips are vectors of length "n".

       Set "flag=0" to solve.  Set "flag=-1" to do a new back substitution
       for a different $b vector using the same matrix a previously reduced
       when "flag=0" (the $ips vector generated in the previous solution is
       also required).
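
       For example, a sketch of solving once and then re-using the reduced
       matrix for a second right-hand side (output piddles are passed in
       explicitly as nulls; the calling pattern is illustrative):

           ($x, $ips) = (null, null);
           simq($a, $b, $x, $ips, 0);              # reduce $a, solve a x = b
           simq($a, $b2, my $x2 = null, $ips, -1); # re-use reduction for $b2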

       See also "lu_backsub", which does the same thing with a slightly less
       opaque interface.

       simq ignores the bad-value flag of the input piddles.  It will set
       the bad-value flag of all output piddles if the flag is set for any
       of the input piddles.

   squaretotri
         Signature: (a(n,n); b(m))

       Convert a symmetric square matrix to triangular vector storage.  (The
       output vector $b holds the m = n*(n+1)/2 elements of one triangle of
       $a.)

       squaretotri does not process bad values.  It will set the bad-value
       flag of all output piddles if the flag is set for any of the input
       piddles.
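
       For example (a sketch; the output is pre-dimensioned to hold
       m = n*(n+1)/2 elements):

           $sym = pdl([[1,2,3],[2,4,5],[3,5,6]]); # 3x3 symmetric matrix
           $tri = zeroes(6);                      # 3*(3+1)/2 = 6 elements
           squaretotri($sym, $tri);               # pack one triangle into $tri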

AUTHOR
       Copyright (C) 2002 Craig DeForest (deforest@boulder.swri.edu), R.J.R.
       Williams (rjrw@ast.leeds.ac.uk), Karl Glazebrook
       (kgb@aaoepp.aao.gov.au).  There is no warranty.  You are allowed to
       redistribute and/or modify this work under the same conditions as PDL
       itself.  If this file is separated from the PDL distribution, then
       the PDL copyright notice should be included in this file.



perl v5.32.0                      2020-09-17                      MatrixOps(3)