MatrixOps(3)          User Contributed Perl Documentation         MatrixOps(3)

NAME
    PDL::MatrixOps -- Some Useful Matrix Operations

SYNOPSIS
        $inv = $a->inv;

        $det = $a->det;

        ($lu,$perm,$par) = $a->lu_decomp;
        $x = lu_backsub($lu,$perm,$b);   # solve $a x $x = $b

DESCRIPTION
    PDL::MatrixOps is PDL's built-in matrix manipulation code.  It contains
    utilities for many common matrix operations: inversion, determinant
    finding, eigenvalue/vector finding, singular value decomposition, etc.
    PDL::MatrixOps routines are written in a mixture of Perl and C, so that
    they are reliably present even when there is no FORTRAN compiler or
    external library (e.g. PDL::Slatec or PDL::GSL) available.

    Matrix manipulation, particularly with large matrices, is a challenging
    field, and no single algorithm is suitable in all cases.  The utilities
    here use general-purpose algorithms that work acceptably for many cases
    but might not scale well to very large or pathological (near-singular)
    matrices.

    Except as noted, the matrices are PDLs whose 0th dimension ranges over
    column and whose 1st dimension ranges over row.  The matrices appear
    correctly when printed.

    These routines should work OK with PDL::Matrix objects as well as with
    normal PDLs.

TIPS ON MATRIX OPERATIONS
    Like most computer languages, PDL addresses matrices in (column,row)
    order in most cases; this corresponds to (X,Y) coordinates in the
    matrix itself, counting rightwards and downwards from the upper left
    corner.  This means that if you print a PDL that contains a matrix, the
    matrix appears correctly on the screen, but if you index a matrix
    element, you use the indices in the reverse of the order you would use
    in a math textbook.  (If you prefer your matrices indexed in (row,
    column) order, you can try using the PDL::Matrix object, which includes
    an implicit exchange of the first two dimensions but should be
    compatible with most of these matrix operations.  TIMTOWTDI.)

    Matrices, row vectors, and column vectors can be multiplied with the
    'x' operator (which is, of course, threadable):

        $m3 = $m1 x $m2;
        $col_vec2 = $m1 x $col_vec1;
        $row_vec2 = $row_vec1 x $m1;
        $scalar = $row_vec x $col_vec;

    Because of the (column,row) addressing order, 1-D PDLs are treated as
    _row_ vectors; if you want a _column_ vector you must add a dummy
    dimension:

        $rowvec  = pdl(1,2);            # row vector
        $colvec  = $rowvec->(*1);       # 1x2 column vector
        $matrix  = pdl([[3,4],[6,2]]);  # 2x2 matrix
        $rowvec2 = $rowvec x $matrix;   # right-multiplication by matrix
        $colvec  = $matrix x $colvec;   # left-multiplication by matrix
        $m2      = $matrix x $rowvec;   # Throws an error

    Implicit threading works correctly with most matrix operations, but
    you must be extra careful that you understand the dimensionality.  In
    particular, matrix multiplication and other matrix ops need nx1 PDLs
    as row vectors and 1xn PDLs as column vectors.  In most cases you must
    explicitly include the trailing 'x1' dimension in order to get the
    expected results when you thread over multiple row vectors.
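    As a sketch of that trailing-'x1' rule (the shapes here are
    illustrative assumptions, not taken from the text above): to thread a
    single 2x2 matrix over three row vectors at once, each row vector must
    carry an explicit row dimension of 1, so the stack is 2x1x3:

```perl
use PDL;

# Three row vectors, stacked along a thread dimension: dims are (2,1,3).
$vecs = pdl( [[[1,0]], [[0,1]], [[1,1]]] );

# One 2x2 matrix, shared across the whole stack by implicit threading.
$mat  = pdl( [[0,-1],[1,0]] );

# Right-multiply every row vector by the matrix; the result is again 2x1x3.
$out  = $vecs x $mat;
```

    Dropping the middle "1" dimension would make $vecs a single 2x3
    matrix, and the multiplication would mean something quite different.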

    When threading over matrices, it's very easy to get confused about
    which dimension goes where.  It is useful to include comments with
    every expression, explaining what you think each dimension means:

        $a   = xvals(360)*3.14159/180;       # (angle)
        $rot = cat(cat(cos($a),sin($a)),     # rotmat: (col,row,angle)
                   cat(-sin($a),cos($a)));

ACKNOWLEDGEMENTS
    MatrixOps includes algorithms and pre-existing code from several
    origins.  In particular, "eigens_sym" is the work of Stephen Moshier,
    "svd" uses an SVD subroutine written by Bryant Marks, and "eigens"
    uses a subset of the Small Scientific Library by Kenneth Geisshirt.
    All are free software, distributable under the same terms as PDL
    itself.

NOTES
    This is intended as a general-purpose linear algebra package for
    small- to mid-sized matrices.  The algorithms may not scale well to
    large matrices (hundreds by hundreds) or to near-singular matrices.

    If there is something you want that is not here, please add and
    document it!

FUNCTIONS
  identity
        Signature: (n; [o]a(n,n))

    Return an identity matrix of the specified size.  If you hand in a
    scalar, its value is the size of the identity matrix; if you hand in a
    dimensioned PDL, the 0th dimension is the size of the matrix.
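    Both calling conventions can be sketched briefly:

```perl
use PDL;

$i3 = identity(3);        # scalar argument: 3x3 identity
$m  = zeroes(5,5);
$i5 = identity($m);       # dimensioned PDL: size taken from its 0th dim
```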

  stretcher
        Signature: (a(n); [o]b(n,n))

        $mat = stretcher($eigenvalues);

    Return a diagonal matrix with the specified diagonal elements.

  inv
        Signature: (a(m,m); sv opt)

        $a1 = inv($a, {$opt});

    Invert a square matrix.

    You feed in an NxN matrix in $a, and get back its inverse (if it
    exists).  The code is inplace-aware, so you can get back the inverse
    in $a itself if you want -- though temporary storage is used either
    way.  You can cache the LU decomposition in an output option variable.

    "inv" uses lu_decomp by default; that is a numerically stable
    (pivoting) LU decomposition method.  If you ask it to thread then a
    numerically unstable (non-pivoting) method is used instead, so avoid
    threading over collections of large (say, more than 4x4) or
    near-singular matrices unless precision is not important.

    OPTIONS:

    · s

      Boolean value indicating whether to tolerate singular matrices.  If
      this is false, singular matrices cause "inv" to barf.  If it is
      true, then singular matrices cause "inv" to return undef.  In the
      threading case, no checking for singularity is performed; any
      singular matrices in your threaded collection receive NaN entries.

    · lu (I/O)

      This value contains a list ref with the LU decomposition,
      permutation, and parity values for $a.  If you do not mention the
      key, or if the value is undef, then "inv" calls lu_decomp.  If the
      key exists with an undef value, then the output of lu_decomp is
      stashed here (unless the matrix is singular).  If the value exists,
      then it is assumed to hold the LU decomposition.

    · det (Output)

      If this key exists, then the determinant of $a gets stored here,
      whether or not the matrix is singular.
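    A sketch of the option hash in use (the key names are as documented
    above; the example matrix is an arbitrary assumption):

```perl
use PDL;

$a   = pdl([[2,0],[0,4]]);
$opt = {lu => undef, det => undef};  # ask inv to fill both keys
$ai  = inv($a, $opt);                # inverse of $a
$det = $opt->{det};                  # determinant, computed along the way
# $opt->{lu} now holds the cached [$lu, $perm, $par] for re-use
```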

  det
        Signature: (a(m,m); sv opt)

        $det = det($a,{opt});

    Determinant of a square matrix, using LU decomposition (for large
    matrices).

    You feed in a square matrix, you get back the determinant.  Some
    options exist that allow you to cache the LU decomposition of the
    matrix (note that the LU decomposition is invalid if the determinant
    is zero!).  The LU decomposition is cacheable, in case you want to
    re-use it.  This method of determinant finding is more rapid than
    recursive descent on large matrices, and if you re-use the LU
    decomposition it's essentially free.

    If you ask det to thread (by giving it a 3-D or higher-dim piddle)
    then the underlying lu_decomp call drops through to lu_decomp2, which
    is numerically unstable (and hence not useful for very large
    matrices) but quite fast.

    If you want to use threading on a matrix that's less than, say,
    10x10, and might be near-singular, then you might want to use
    "determinant" instead, which is a more robust (but slower)
    determinant finder.

    OPTIONS:

    · lu (I/O)

      Provides a cache for the LU decomposition of the matrix.  If you
      provide the key but leave the value undefined, then the LU
      decomposition goes in here; if you put an LU decomposition here, it
      will be used and the matrix will not be decomposed again.
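    For example, the cache lets repeated determinant queries skip the
    decomposition (a sketch, using an arbitrary matrix):

```perl
use PDL;

$a   = pdl([[1,2],[3,4]]);
$opt = {lu => undef};
$d1  = det($a, $opt);   # decomposes $a, stashes the LU in $opt->{lu}
$d2  = det($a, $opt);   # re-uses the cached decomposition: nearly free
```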

  determinant
        Signature: (a(m,m))

        $det = determinant($a);

    Determinant of a square matrix, using recursive descent (threadable).

    This is the traditional, robust recursive determinant method taught
    in most linear algebra courses.  It scales like "O(n!)" (and hence is
    pitifully slow for large matrices) but is very robust because no
    division is involved (hence no division-by-zero errors for singular
    matrices).  It's also threadable, so you can find the determinants of
    a large collection of matrices all at once if you want.

    Matrices up to 3x3 are handled by direct multiplication; larger
    matrices are handled by recursive descent to the 3x3 case.

    The LU-decomposition method "det" is faster in isolation for single
    matrices larger than about 4x4, and is much faster if you end up
    re-using the LU decomposition of $a, but does not thread well.

  eigens_sym
        Signature: ([phys]a(m); [o,phys]ev(n,n); [o,phys]e(n))

    Eigenvalues and -vectors of a symmetric square matrix.  If passed an
    asymmetric matrix, the routine will warn and symmetrize it by taking
    the average value; that is, it will solve for 0.5*($a+$a->mv(0,1)).

    It's threadable, so if $a is 3x3x100, it's treated as 100 separate
    3x3 matrices, and both $ev and $e get extra dimensions accordingly.

    If called in scalar context it hands back only the eigenvalues.
    Ultimately, it should switch to a faster algorithm in this case (as
    discarding the eigenvectors is wasteful).

    The algorithm used is due to J. von Neumann, and was a rediscovery of
    Jacobi's method.  See
    http://en.wikipedia.org/wiki/Jacobi_eigenvalue_algorithm

    The eigenvectors are returned in COLUMNS of the returned PDL.  That
    makes it slightly easier to access individual eigenvectors, since the
    0th dim of the output PDL runs across the eigenvectors and the 1st
    dim runs across their components.

        ($ev,$e) = eigens_sym $a;  # Make eigenvector matrix
        $vector = $ev->($n);       # Select nth eigenvector as a column-vector
        $vector = $ev->(($n));     # Select nth eigenvector as a row-vector

        ($ev, $e) = eigens_sym($a); # e'vects & e'vals
        $e = eigens_sym($a);        # just eigenvalues

    eigens_sym ignores the bad-value flag of the input piddles.  It will
    set the bad-value flag of all output piddles if the flag is set for
    any of the input piddles.

  eigens
        Signature: ([phys]a(m); [o,phys]ev(l,n,n); [o,phys]e(l,n))

    Real eigenvalues and -vectors of a real square matrix.

    (See also "eigens_sym", for eigenvalues and -vectors of a real,
    symmetric, square matrix.)

    The eigens function will attempt to compute the eigenvalues and
    eigenvectors of a square matrix with real components.  If the matrix
    is symmetric, the same underlying code as "eigens_sym" is used.  If
    asymmetric, the eigenvalues and eigenvectors are computed with
    algorithms from the sslib library.  If any imaginary components exist
    in the eigenvalues, the results are currently considered to be
    invalid, and such eigenvalues are returned as "NaN"s.  This is true
    for eigenvectors also: if there are imaginary components to any of
    the values in an eigenvector, the eigenvalue and corresponding
    eigenvectors are all set to "NaN".  Finally, if there are any
    repeated eigenvectors, they are replaced with all "NaN"s.

    Use of the eigens function on asymmetric matrices should be
    considered experimental!  For asymmetric matrices, nearly all
    observed matrices with real eigenvalues produce incorrect results,
    due to errors in the sslib algorithm.  If your asymmetric matrix
    returns all NaNs, do not assume that the values are complex.  Also,
    problems with memory access are known in this library.

    Not all square matrices are diagonalizable.  If you feed in a
    non-diagonalizable matrix, then one or more of the eigenvectors will
    be set to NaN, along with the corresponding eigenvalues.

    "eigens" is threadable, so you can solve 100 eigenproblems by feeding
    in a 3x3x100 array.  Both $ev and $e get extra dimensions accordingly.

    If called in scalar context "eigens" hands back only the eigenvalues.
    This is somewhat wasteful, as it calculates the eigenvectors anyway.

    The eigenvectors are returned in COLUMNS of the returned PDL (i.e.
    the 0th dimension).  That makes it slightly easier to access
    individual eigenvectors, since the 0th dim of the output PDL runs
    across the eigenvectors and the 1st dim runs across their components.

        ($ev,$e) = eigens $a;  # Make eigenvector matrix
        $vector = $ev->($n);   # Select nth eigenvector as a column-vector
        $vector = $ev->(($n)); # Select nth eigenvector as a row-vector

    DEVEL NOTES:

    For now, there is no distinction between a complex eigenvalue and an
    invalid eigenvalue, although the underlying code generates complex
    numbers.  It might be useful to be able to return complex
    eigenvalues.

        ($ev, $e) = eigens($a); # e'vects & e'vals
        $e = eigens($a);        # just eigenvalues

    eigens ignores the bad-value flag of the input piddles.  It will set
    the bad-value flag of all output piddles if the flag is set for any
    of the input piddles.

  svd
        Signature: (a(n,m); [o]u(n,m); [o,phys]z(n); [o]v(n,n))

        ($r1, $s, $r2) = svd($a);

    Singular value decomposition of a matrix.

    "svd" is threadable.

    $r1 and $r2 are rotation matrices that convert from the original
    matrix's singular coordinates to final coordinates, and from original
    coordinates to singular coordinates, respectively.  $s is the
    diagonal of the singular value matrix, so that, if $a is square, then
    you can make an expensive copy of $a by saying:

        $ess = zeroes($r1); $ess->diagonal(0,1) .= $s;
        $a_copy .= $r2 x $ess x $r1;

    EXAMPLE

    The computing literature has loads of examples of how to use SVD.
    Here's a trivial example (used in PDL::Transform::Map) of how to make
    a matrix less, er, singular, without changing the orientation of the
    ellipsoid of transformation:

        { my($r1,$s,$r2) = svd $a;
          $s++;             # fatten all singular values
          $r2 *= $s;        # implicit threading for cheap mult.
          $a .= $r2 x $r1;  # a gets r2 x ess x r1
        }

    svd ignores the bad-value flag of the input piddles.  It will set the
    bad-value flag of all output piddles if the flag is set for any of
    the input piddles.

  lu_decomp
        Signature: (a(m,m); [o]b(n); [o]c; [o]lu)

    LU decompose a matrix, with row permutation.

        ($lu, $perm, $parity) = lu_decomp($a);

        $lu = lu_decomp($a, $perm, $par);  # $perm and $par are outputs!

        lu_decomp($a->inplace,$perm,$par); # Everything in place.

    lu_decomp returns an LU decomposition of a square matrix, using
    Crout's method with partial pivoting.  It's ported from Numerical
    Recipes.  The partial pivoting keeps it numerically stable but
    defeats efficient threading, so if you have a few matrices to
    decompose accurately, you should use lu_decomp, but if you have a
    million matrices to decompose and don't mind a higher error budget
    you probably want to use lu_decomp2, which doesn't do the pivoting
    (and hence gives wrong answers for near-singular or large matrices),
    but does do threading.

    lu_decomp decomposes the input matrix into matrices L and U such that
    LU = A, L is lower triangular, and U is upper triangular.  By
    convention, the diagonal of L is all 1's.

    The single output matrix contains all the variable elements of both
    the L and U matrices, stacked together.  Because the method uses
    pivoting (rearranging the lower part of the matrix for better
    numerical stability), you have to permute input vectors before
    applying the L and U matrices.  The permutation is returned either in
    the second argument or, in list context, as the second element of the
    list.  You need the permutation for the output to make any sense, so
    be sure to get it one way or the other.

    LU decomposition is the answer to a lot of matrix questions,
    including inversion and determinant-finding, and lu_decomp is used by
    "inv".

    If you pass in $perm and $parity, they either must be predeclared
    PDLs of the correct size ($perm is an n-vector, $parity is a scalar)
    or scalars.

    If the matrix is singular, then the LU decomposition might not be
    defined; in those cases, lu_decomp silently returns undef.  Some
    singular matrices LU-decompose just fine, and those are handled OK
    but give a zero determinant (and hence can't be inverted).

    lu_decomp uses pivoting, which rearranges the values in the matrix
    for more numerical stability.  This makes it really good for large
    and even near-singular matrices, but makes it unable to properly
    vectorize threaded operation.  If you have a LOT of small matrices to
    invert (like, say, a 3x3x1000000 PDL) you should use lu_decomp2,
    which doesn't pivot and is therefore threadable (and, of course,
    works in-place).

    If you ask lu_decomp to thread (by having a nontrivial third
    dimension in the matrix) then it will call lu_decomp2 instead.  That
    is a numerically unstable (non-pivoting) method that is mainly useful
    for smallish, not-so-singular matrices but is threadable.

    lu_decomp is ported from _Numerical_Recipes_ to PDL.  It should
    probably be implemented in C.
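    As a quick illustration of the decomposition's payoff, the
    determinant can be read straight off the packed output: it is the
    product of the diagonal (which holds U's diagonal, since L's is all
    1's) times the permutation parity.  A sketch with an arbitrary
    matrix:

```perl
use PDL;

($lu, $perm, $par) = lu_decomp( pdl([[4,3],[6,3]]) );
$det = $par * $lu->diagonal(0,1)->prodover;  # same value det() would give
```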

  lu_decomp2
        Signature: (a(m,m); [o]lu(n))

    LU decompose a matrix, with no row permutation (threadable!).

        ($lu, $perm, $parity) = lu_decomp2($a);

        $lu = lu_decomp2($a,[$perm,$par]);

        lu_decomp2($a->inplace,[$perm,$par]);

    "lu_decomp2" works just like lu_decomp, but it does no pivoting at
    all and hence can be usefully threaded.  For compatibility with
    lu_decomp, it will give you a permutation list and a parity scalar if
    you ask for them -- but they are always trivial.

    Because "lu_decomp2" does not pivot, it is numerically unstable --
    that means it is less precise than lu_decomp, particularly for large
    or near-singular matrices.  There are also specific types of
    non-singular matrices that confuse it (e.g.
    ([0,-1,0],[1,0,0],[0,0,1]), which is a 90 degree rotation matrix but
    which confuses lu_decomp2).  On the other hand, if you want to invert
    rapidly a few hundred thousand small matrices and don't mind missing
    one or two, it's just the ticket.

    The output is a single matrix that contains the LU decomposition of
    $a; you can even do it in-place, thereby destroying $a, if you want.
    See lu_decomp for more information about LU decomposition.

    lu_decomp2 is ported from _Numerical_Recipes_ into PDL.  If lu_decomp
    were implemented in C, then lu_decomp2 might become unnecessary.

  lu_backsub
        Signature: (lu(m,m); perm(m); b(m))

    Solve A X = B for matrix A, by back substitution into A's LU
    decomposition.

        ($lu,$perm,$par) = lu_decomp($a);
        $x = lu_backsub($lu,$perm,$par,$b);

        lu_backsub($lu,$perm,$b->inplace); # modify $b in-place

        $x = lu_backsub(lu_decomp($a),$b); # (ignores parity value from lu_decomp)

    Given the LU decomposition of a square matrix (from lu_decomp),
    lu_backsub does back substitution into the matrix to solve "A X = B"
    for a given vector "B".  It is separated from the lu_decomp method so
    that you can call the cheap lu_backsub multiple times without having
    to do the expensive LU decomposition more than once.

    lu_backsub acts on single vectors and threads in the usual way, which
    means that it treats $b as the transpose of the input.  If you want
    to process a matrix, you must hand in the transpose of the matrix,
    and then transpose the output when you get it back.  That is because
    PDLs are indexed by (col,row), and matrices are (row,column) by
    convention, so a 1-D PDL corresponds to a row vector, not a column
    vector.
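    A sketch of that transpose dance, solving for several right-hand
    sides at once (the matrices here are arbitrary example values):

```perl
use PDL;

$a = pdl([[2,0],[0,4]]);
$b = pdl([[2,4],[8,12]]);            # each COLUMN is one right-hand side
($lu, $perm, $par) = lu_decomp($a);
$xt = lu_backsub($lu, $perm, $b->transpose); # hand in the transpose...
$x  = $xt->transpose;                        # ...and transpose the answer
```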

    (If $lu is dense and you have more than a few points to solve for, it
    is probably cheaper to find "A^-1" with "inv" and just multiply "X =
    A^-1 B".)  In fact, "inv" works by calling lu_backsub with the
    identity matrix.

    lu_backsub is ported from Section 2.3 of Numerical Recipes.  It is
    written in PDL but should probably be implemented in C.

  simq
        Signature: ([phys]a(n,n); [phys]b(n); [o,phys]x(n); int [o,phys]ips(n); int flag)

    Solution of simultaneous linear equations, "a x = b".

    $a is an "n x n" matrix (i.e., a vector of length "n*n"), stored
    row-wise: that is, "a(i,j) = a[ij]", where "ij = i*n + j".

    While this is the transpose of the normal column-wise storage, this
    corresponds to normal PDL usage.  The contents of matrix a may be
    altered (but may be required for subsequent calls with flag = -1).

    $b, $x, $ips are vectors of length "n".

    Set "flag=0" to solve.  Set "flag=-1" to do a new back substitution
    for a different $b vector using the same matrix a previously reduced
    when "flag=0" (the $ips vector generated in the previous solution is
    also required).

    See also lu_backsub, which does the same thing with a slightly less
    opaque interface.
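    A hedged sketch of a simq call, inferred from the signature above
    (the output piddles $x and $ips come back in list context; the
    matrix is an arbitrary example):

```perl
use PDL;

$a = pdl([[2,0],[0,4]]);
$b = pdl(2,8);

# flag=0: reduce $a and solve.  $a's contents may be altered, so work
# on a copy if you still need the original.
($x, $ips) = simq($a->copy, $b, 0);
```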

    simq ignores the bad-value flag of the input piddles.  It will set
    the bad-value flag of all output piddles if the flag is set for any
    of the input piddles.

  squaretotri
        Signature: (a(n,n); b(m))

    Convert a symmetric square matrix to triangular vector storage.
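    A minimal sketch (the ordering of the packed elements is whatever
    squaretotri emits; it is not specified here):

```perl
use PDL;

# 3x3 symmetric matrix -> n*(n+1)/2 = 6 element triangular vector.
$s   = pdl([[1,2,3],[2,4,5],[3,5,6]]);
$tri = squaretotri($s);
```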

    squaretotri does not process bad values.  It will set the bad-value
    flag of all output piddles if the flag is set for any of the input
    piddles.

AUTHOR
    Copyright (C) 2002 Craig DeForest (deforest@boulder.swri.edu), R.J.R.
    Williams (rjrw@ast.leeds.ac.uk), Karl Glazebrook
    (kgb@aaoepp.aao.gov.au).  There is no warranty.  You are allowed to
    redistribute and/or modify this work under the same conditions as PDL
    itself.  If this file is separated from the PDL distribution, then
    the PDL copyright notice should be included in this file.


perl v5.12.3                      2011-03-31                      MatrixOps(3)