MatrixOps(3)          User Contributed Perl Documentation          MatrixOps(3)


NAME
       PDL::MatrixOps -- Some Useful Matrix Operations

SYNOPSIS
       $inv = $a->inv;

       $det = $a->det;

       ($lu,$perm,$par) = $a->lu_decomp;
       $x = lu_backsub($lu,$perm,$b);  # solve $a x $x = $b

DESCRIPTION
       PDL::MatrixOps is PDL's built-in matrix manipulation code.  It
       contains utilities for many common matrix operations: inversion,
       determinant finding, eigenvalue/vector finding, singular value
       decomposition, etc.  PDL::MatrixOps routines are written in a mixture
       of Perl and C, so that they are reliably present even when there is
       no FORTRAN compiler or external library (e.g. PDL::Slatec or
       PDL::GSL) available.

       Matrix manipulation, particularly with large matrices, is a
       challenging field and no single algorithm is suitable in all cases.
       The utilities here use general-purpose algorithms that work
       acceptably for many cases but might not scale well to very large or
       pathological (near-singular) matrices.

       Except as noted, the matrices are PDLs whose 0th dimension ranges
       over column and whose 1st dimension ranges over row.  The matrices
       appear correctly when printed.

       These routines should work OK with PDL::Matrix objects as well as
       with normal PDLs.

TIPS ON MATRIX OPERATIONS
       Like most computer languages, PDL addresses matrices in (column,row)
       order in most cases; this corresponds to (X,Y) coordinates in the
       matrix itself, counting rightwards and downwards from the upper left
       corner.  This means that if you print a PDL that contains a matrix,
       the matrix appears correctly on the screen, but if you index a matrix
       element, you use the indices in the reverse order that you would in a
       math textbook.  (If you prefer your matrices indexed in (row, column)
       order, you can try using the PDL::Matrix object, which includes an
       implicit exchange of the first two dimensions but should be
       compatible with most of these matrix operations.  TIMTOWTDI.)

       Matrices, row vectors, and column vectors can be multiplied with the
       'x' operator (which is, of course, threadable):

           $m3 = $m1 x $m2;
           $col_vec2 = $m1 x $col_vec1;
           $row_vec2 = $row_vec1 x $m1;
           $scalar = $row_vec x $col_vec;

       Because of the (column,row) addressing order, 1-D PDLs are treated
       as _row_ vectors; if you want a _column_ vector you must add a dummy
       dimension:

           $rowvec = pdl(1,2);            # row vector
           $colvec = $rowvec->(*1);       # 1x2 column vector
           $matrix = pdl([[3,4],[6,2]]);  # 2x2 matrix
           $rowvec2 = $rowvec x $matrix;  # right-multiplication by matrix
           $colvec  = $matrix x $colvec;  # left-multiplication by matrix
           $m2 = $matrix x $rowvec;       # Throws an error

       Implicit threading works correctly with most matrix operations, but
       you must be extra careful that you understand the dimensionality.
       In particular, matrix multiplication and other matrix ops need nx1
       PDLs as row vectors and 1xn PDLs as column vectors.  In most cases
       you must explicitly include the trailing 'x1' dimension in order to
       get the expected results when you thread over multiple row vectors.

       When threading over matrices, it's very easy to get confused about
       which dimension goes where.  It is useful to include comments with
       every expression, explaining what you think each dimension means:

           $a = xvals(360)*3.14159/180;        # (angle)
           $rot = cat(cat(cos($a),sin($a)),    # rotmat: (col,row,angle)
                      cat(-sin($a),cos($a)));
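
       One wrinkle in the sketch above: built this way, $rot actually comes
       out dimensioned (angle,col,row), because "cat" stacks its arguments
       along a new trailing dimension while the angle dimension stays 0th.
       Moving the angle dimension to the end lines things up for threaded
       multiplication.  A possible completion (rotating a single row vector
       through every angle; the variable names are illustrative only):

```perl
use PDL;

my $a   = xvals(360) * 3.14159 / 180;       # (angle)
my $rot = cat(cat( cos($a), sin($a)),       # so far: (angle,col,row)
              cat(-sin($a), cos($a)));
$rot = $rot->mv(0,2);                       # now: (col,row,angle)

my $vec = pdl(1,0)->dummy(1);               # (col,row=1): one row vector
my $out = $vec x $rot;                      # (col,1,angle): $vec at each angle

print $out->slice(':,(0),(90)'), "\n";      # rotated by 90 degrees: ~[0 1]
```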

ACKNOWLEDGEMENTS
       MatrixOps includes algorithms and pre-existing code from several
       origins.  In particular, "eigens_sym" is the work of Stephen Moshier,
       "svd" uses an SVD subroutine written by Bryant Marks, and "eigens"
       uses a subset of the Small Scientific Library by Kenneth Geisshirt.
       All are free software, distributable under the same terms as PDL
       itself.

NOTES
       This is intended as a general-purpose linear algebra package for
       small-to-mid sized matrices.  The algorithms may not scale well to
       large matrices (hundreds by hundreds) or to near-singular matrices.

       If there is something you want that is not here, please add and
       document it!

FUNCTIONS
   identity
       Signature: (n; [o]a(n,n))

       Return an identity matrix of the specified size.  If you hand in a
       scalar, its value is the size of the identity matrix; if you hand in
       a dimensioned PDL, the 0th dimension is the size of the matrix.

   stretcher
       Signature: (a(n); [o]b(n,n))

       $mat = stretcher($eigenvalues);

       Return a diagonal matrix with the specified diagonal elements.

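       identity and stretcher are simple enough to show together.  A minimal
       sketch (variable names here are illustrative only):

```perl
use PDL;

my $i = identity(3);            # 3x3 identity matrix
my $d = stretcher(pdl(1,2,3));  # 3x3 diagonal matrix with diagonal (1,2,3)

print $d;                       # diag(1,2,3), printed as a matrix
print $d x $i;                  # multiplying by identity leaves $d unchanged
```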
   inv
       Signature: (a(m,m); sv opt)

       $a1 = inv($a, {$opt});

       Invert a square matrix.

       You feed in an NxN matrix in $a, and get back its inverse (if it
       exists).  The code is inplace-aware, so you can get back the inverse
       in $a itself if you want -- though temporary storage is used either
       way.  You can cache the LU decomposition in an output option
       variable.

       "inv" uses lu_decomp by default; that is a numerically stable
       (pivoting) LU decomposition method.  If you ask it to thread then a
       numerically unstable (non-pivoting) method is used instead, so avoid
       threading over collections of large (say, more than 4x4) or
       near-singular matrices unless precision is not important.

       OPTIONS:

       * s
         Boolean value indicating whether to complain if the matrix is
         singular.  If this is false, singular matrices cause inv to barf.
         If it is true, then singular matrices cause inv to return undef.
         In the threading case, no checking for singularity is performed;
         if any of the matrices in your threaded collection are singular,
         they receive NaN entries.

       * lu (I/O)
         This value contains a list ref with the LU decomposition,
         permutation, and parity values for $a.  If you do not mention the
         key, or if the value is undef, then inv calls lu_decomp.  If the
         key exists with an undef value, then the output of lu_decomp is
         stashed here (unless the matrix is singular).  If the value
         exists, then it is assumed to hold the LU decomposition.

       * det (Output)
         If this key exists, then the determinant of $a gets stored here,
         whether or not the matrix is singular.

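       A sketch of how the inv options interact, assuming (as described
       above) that the option hash is updated in place by the call:

```perl
use PDL;

my $a    = pdl([[2,0],[1,1]]);   # a 2x2 matrix with determinant 2
my %opt  = (det => undef);       # ask inv to stash the determinant here
my $ainv = inv($a, \%opt);

print $a x $ainv;                # ~identity matrix
print "det = $opt{det}\n";       # determinant computed along the way
```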
   det
       Signature: (a(m,m); sv opt)

       $det = det($a,{opt});

       Determinant of a square matrix, using LU decomposition (for large
       matrices).

       You feed in a square matrix, you get back the determinant.  Some
       options exist that allow you to cache the LU decomposition of the
       matrix (note that the LU decomposition is invalid if the determinant
       is zero!).  The LU decomposition is cacheable, in case you want to
       re-use it.  This method of determinant finding is more rapid than
       recursive descent on large matrices, and if you re-use the LU
       decomposition it's essentially free.

       If you ask det to thread (by giving it a 3-D or higher-dimensional
       piddle) then lu_decomp drops through to lu_decomp2, which is
       numerically unstable (and hence not useful for very large matrices)
       but quite fast.

       If you want to use threading on a matrix that's less than, say,
       10x10, and might be near-singular, then you might want to use
       determinant instead, which is a more robust (but slower) determinant
       finder.

       OPTIONS:

       * lu (I/O)
         Provides a cache for the LU decomposition of the matrix.  If you
         provide the key but leave the value undefined, then the LU
         decomposition goes in here; if you put an LU decomposition here,
         it will be used and the matrix will not be decomposed again.

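       For instance, a sketch of caching the decomposition across two det
       calls (assuming the option hash is filled in by the first call, as
       described above):

```perl
use PDL;

my $a   = pdl([[4,2],[6,9]]);  # determinant = 4*9 - 2*6 = 24
my %opt = (lu => undef);       # provide a cache slot for the decomposition

my $d1 = det($a, \%opt);       # decomposes $a and caches the result
my $d2 = det($a, \%opt);       # re-uses the cached decomposition

print "det = $d1\n";
```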
   determinant
       Signature: (a(m,m))

       $det = determinant($a);

       Determinant of a square matrix, using recursive descent
       (threadable).

       This is the traditional, robust recursive determinant method taught
       in most linear algebra courses.  It scales like "O(n!)" (and hence
       is pitifully slow for large matrices) but is very robust because no
       division is involved (hence no division-by-zero errors for singular
       matrices).  It's also threadable, so you can find the determinants
       of a large collection of matrices all at once if you want.

       Matrices up to 3x3 are handled by direct multiplication; larger
       matrices are handled by recursive descent to the 3x3 case.

       The LU-decomposition method det is faster in isolation for single
       matrices larger than about 4x4, and is much faster if you end up
       re-using the LU decomposition of $a, but does not thread well.

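       The threading behavior can be sketched as follows -- a stack of
       three 2x2 matrices yields three determinants at once, with no error
       even for the singular member:

```perl
use PDL;

# Dims (2,2,3): determinant() threads over the trailing dim.
my $stack = pdl([ [[1,2],[3,4]],    # det = -2
                  [[2,0],[0,2]],    # det =  4
                  [[1,1],[1,1]] ]); # det =  0 (singular, but no error)

my $dets = determinant($stack);
print $dets, "\n";                  # ~[-2 4 0]
```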
   eigens_sym
       Signature: ([phys]a(m); [o,phys]ev(n,n); [o,phys]e(n))

       Eigenvalues and -vectors of a symmetric square matrix.  If passed an
       asymmetric matrix, the routine will warn and symmetrize it, by
       taking the average value.  That is, it will solve for
       0.5*($a+$a->mv(0,1)).

       It's threadable, so if $a is 3x3x100, it's treated as 100 separate
       3x3 matrices, and both $ev and $e get extra dimensions accordingly.

       If called in scalar context it hands back only the eigenvalues.
       Ultimately, it should switch to a faster algorithm in this case (as
       discarding the eigenvectors is wasteful).

       The algorithm used is due to J. von Neumann, and was a rediscovery
       of Jacobi's method.  See
       http://en.wikipedia.org/wiki/Jacobi_eigenvalue_algorithm

       The eigenvectors are returned in COLUMNS of the returned PDL.  That
       makes it slightly easier to access individual eigenvectors, since
       the 0th dim of the output PDL runs across the eigenvectors and the
       1st dim runs across their components.

           ($ev,$e) = eigens_sym $a;  # Make eigenvector matrix
           $vector = $ev->($n);       # Select nth eigenvector as a column-vector
           $vector = $ev->(($n));     # Select nth eigenvector as a row-vector

           ($ev, $e) = eigens_sym($a); # e'vects & e'vals
           $e = eigens_sym($a);        # just eigenvalues

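       The column convention can be checked directly: since column n of $ev
       is the nth eigenvector, "$a x $ev" should equal $ev with each column
       scaled by its eigenvalue.  A sketch for a well-conditioned symmetric
       input:

```perl
use PDL;

my $a = pdl([[2,1],[1,2]]);  # symmetric; eigenvalues are 1 and 3
my ($ev, $e) = eigens_sym($a);

print $a x $ev;              # each column of $ev scaled by its eigenvalue...
print $ev * $e;              # ...which implicit threading computes directly
```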
   eigens
       Signature: ([phys]a(m); [o,phys]ev(l,n,n); [o,phys]e(l,n))

       Real eigenvalues and -vectors of a real square matrix.

       (See also "eigens_sym", for eigenvalues and -vectors of a real,
       symmetric, square matrix.)

       PLEASE NOTE: THE EIGENS FUNCTION HAS BEEN DISABLED FOR ASYMMETRIC
       MATRICES PENDING A FIX TO THE CODE.

       The eigens function will attempt to compute the eigenvalues and
       eigenvectors of a square matrix with real components.  If the matrix
       is symmetric, the same underlying code as "eigens_sym" is used.  If
       asymmetric, the eigenvalues and eigenvectors are computed with
       algorithms from the sslib library (anyone know what algorithm is
       used?).  If any imaginary components exist in the eigenvalues, the
       results are currently considered to be invalid, and such eigenvalues
       are returned as "NaN"s.  This is true for eigenvectors also.  That
       is, if there are imaginary components to any of the values in the
       eigenvector, the eigenvalue and corresponding eigenvectors are all
       set to "NaN".  Finally, if there are any repeated eigenvectors, they
       are replaced with all "NaN"s.

       Use of the eigens function on asymmetric matrices should be
       considered experimental!  For asymmetric matrices, nearly all
       observed matrices with real eigenvalues produce incorrect results,
       due to errors of the sslib algorithm.  If your asymmetric matrix
       returns all NaNs, do not assume that the values are complex.  Also,
       problems with memory access are known in this library.

       Not all square matrices are diagonalizable.  If you feed in a
       non-diagonalizable matrix, then one or more of the eigenvectors will
       be set to NaN, along with the corresponding eigenvalues.

       "eigens" is threadable, so you can solve 100 eigenproblems by
       feeding in a 3x3x100 array.  Both $ev and $e get extra dimensions
       accordingly.

       If called in scalar context "eigens" hands back only the
       eigenvalues.  This is somewhat wasteful, as it calculates the
       eigenvectors anyway.

       The eigenvectors are returned in COLUMNS of the returned PDL (i.e.
       the 0th dimension).  That makes it slightly easier to access
       individual eigenvectors, since the 0th dim of the output PDL runs
       across the eigenvectors and the 1st dim runs across their
       components.

           ($ev,$e) = eigens $a;  # Make eigenvector matrix
           $vector = $ev->($n);   # Select nth eigenvector as a column-vector
           $vector = $ev->(($n)); # Select nth eigenvector as a row-vector

       DEVEL NOTES:

       For now, there is no distinction between a complex eigenvalue and an
       invalid eigenvalue, although the underlying code generates complex
       numbers.  It might be useful to be able to return complex
       eigenvalues.

           ($ev, $e) = eigens($a); # e'vects & e'vals
           $e = eigens($a);        # just eigenvalues

   svd
       Signature: (a(n,m); [o]u(n,m); [o,phys]z(n); [o]v(n,n))

       ($r1, $s, $r2) = svd($a);

       Singular value decomposition of a matrix.

       "svd" is threadable.

       $r1 and $r2 are rotation matrices that convert from the original
       matrix's singular coordinates to final coordinates, and from
       original coordinates to singular coordinates, respectively.  $s is
       the diagonal of the singular value matrix, so that, if $a is square,
       then you can make an expensive copy of $a by saying:

           $ess = zeroes($r1); $ess->diagonal(0,1) .= $s;
           $a_copy .= $r2 x $ess x $r1;

       EXAMPLE

       The computing literature has loads of examples of how to use SVD.
       Here's a trivial example (used in PDL::Transform::Map) of how to
       make a matrix less, er, singular, without changing the orientation
       of the ellipsoid of transformation:

           { my($r1,$s,$r2) = svd $a;
             $s++;             # fatten all singular values
             $r2 *= $s;        # implicit threading for cheap mult.
             $a .= $r2 x $r1;  # a gets r2 x ess x r1
           }

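       Following the reconstruction recipe given above, a round-trip check
       can be sketched as:

```perl
use PDL;

my $a = pdl([[1,2],[3,4]]);
my ($r1, $s, $r2) = svd($a);

# Rebuild $a from the factors, exactly as described above:
my $ess = zeroes($r1);
$ess->diagonal(0,1) .= $s;
my $a_copy = $r2 x $ess x $r1;

print $a_copy;    # ~[[1 2],[3 4]]
```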
   lu_decomp
       Signature: (a(m,m); [o]b(n); [o]c; [o]lu)

       LU decompose a matrix, with row permutation.

           ($lu, $perm, $parity) = lu_decomp($a);

           $lu = lu_decomp($a, $perm, $par);  # $perm and $par are outputs!

           lu_decomp($a->inplace,$perm,$par); # Everything in place.

       lu_decomp returns an LU decomposition of a square matrix, using
       Crout's method with partial pivoting.  It's ported from Numerical
       Recipes.  The partial pivoting keeps it numerically stable but
       defeats efficient threading, so if you have a few matrices to
       decompose accurately, you should use lu_decomp, but if you have a
       million matrices to decompose and don't mind a higher error budget
       you probably want to use lu_decomp2, which doesn't do the pivoting
       (and hence gives wrong answers for near-singular or large matrices)
       but does do threading.

       lu_decomp decomposes the input matrix into matrices L and U such
       that LU = A, L is a subdiagonal (lower-triangular) matrix, and U is
       a superdiagonal (upper-triangular) matrix.  By convention, the
       diagonal of L is all 1's.

       The single output matrix contains all the variable elements of both
       the L and U matrices, stacked together.  Because the method uses
       pivoting (rearranging the lower part of the matrix for better
       numerical stability), you have to permute input vectors before
       applying the L and U matrices.  The permutation is returned either
       in the second argument or, in list context, as the second element of
       the list.  You need the permutation for the output to make any
       sense, so be sure to get it one way or the other.

       LU decomposition is the answer to a lot of matrix questions,
       including inversion and determinant-finding, and lu_decomp is used
       by inv.

       If you pass in $perm and $parity, they either must be predeclared
       PDLs of the correct size ($perm is an n-vector, $parity is a scalar)
       or scalars.

       If the matrix is singular, then the LU decomposition might not be
       defined; in those cases, lu_decomp silently returns undef.  Some
       singular matrices LU-decompose just fine, and those are handled OK
       but give a zero determinant (and hence can't be inverted).

       lu_decomp uses pivoting, which rearranges the values in the matrix
       for more numerical stability.  This makes it really good for large
       and even near-singular matrices, but makes it unable to properly
       vectorize threaded operation.  If you have a LOT of small matrices
       to invert (like, say, a 3x3x1000000 PDL) you should use lu_decomp2,
       which doesn't pivot and is therefore threadable (and, of course,
       works in-place).

       If you ask lu_decomp to thread (by having a nontrivial third
       dimension in the matrix) then it will call lu_decomp2 instead.  That
       is a numerically unstable (non-pivoting) method that is mainly
       useful for smallish, not-so-singular matrices but is threadable.

       lu_decomp is ported from _Numerical_Recipes_ to PDL.  It should
       probably be implemented in C.

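       Since L's diagonal is all 1's, the determinant falls out of the
       decomposition as the product of $lu's diagonal times the parity
       (which is essentially how det computes it).  A sketch:

```perl
use PDL;

my $a = pdl([[1,2],[3,4]]);   # determinant = 1*4 - 2*3 = -2
my ($lu, $perm, $par) = lu_decomp($a);

# The diagonal of $lu is U's diagonal (L's diagonal is implicitly all
# 1's), so det(A) = parity * product of the diagonal:
my $det = $lu->diagonal(0,1)->prodover * $par;
print "det = $det\n";         # ~-2
```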
   lu_decomp2
       Signature: (a(m,m); [o]lu(n))

       LU decompose a matrix, with no row permutation (threadable!).

           ($lu, $perm, $parity) = lu_decomp2($a);

           $lu = lu_decomp2($a,[$perm,$par]);

           lu_decomp2($a->inplace,[$perm,$par]);

       "lu_decomp2" works just like lu_decomp, but it does no pivoting at
       all and hence can be usefully threaded.  For compatibility with
       lu_decomp, it will give you a permutation list and a parity scalar
       if you ask for them -- but they are always trivial.

       Because "lu_decomp2" does not pivot, it is numerically unstable --
       that means it is less precise than lu_decomp, particularly for large
       or near-singular matrices.  There are also specific types of
       non-singular matrices that confuse it (e.g.
       ([0,-1,0],[1,0,0],[0,0,1]), which is a 90 degree rotation matrix but
       which confuses lu_decomp2).  On the other hand, if you want to
       rapidly invert a few hundred thousand small matrices and don't mind
       missing one or two, it's just the ticket.

       The output is a single matrix that contains the LU decomposition of
       $a; you can even do it in-place, thereby destroying $a, if you want.
       See lu_decomp for more information about LU decomposition.

       lu_decomp2 is ported from _Numerical_Recipes_ into PDL.  If
       lu_decomp were implemented in C, then lu_decomp2 might become
       unnecessary.

   lu_backsub
       Signature: (lu(m,m); perm(m); b(m))

       Solve A X = B for matrix A, by back substitution into A's LU
       decomposition.

           ($lu,$perm,$par) = lu_decomp($a);
           $x = lu_backsub($lu,$perm,$par,$b);

           lu_backsub($lu,$perm,$b->inplace); # modify $b in-place

           $x = lu_backsub(lu_decomp($a),$b); # (ignores parity value from lu_decomp)

       Given the LU decomposition of a square matrix (from lu_decomp),
       lu_backsub does back substitution into the matrix to solve
       "A X = B" for a given vector "B".  It is separated from the
       lu_decomp method so that you can call the cheap lu_backsub multiple
       times and not have to do the expensive LU decomposition more than
       once.

       lu_backsub acts on single vectors and threads in the usual way,
       which means that it treats $b as the transpose of the input.  If you
       want to process a matrix, you must hand in the transpose of the
       matrix, and then transpose the output when you get it back.  That is
       because PDLs are indexed by (col,row), and matrices are (row,column)
       by convention, so a 1-D PDL corresponds to a row vector, not a
       column vector.

       If $lu is dense and you have more than a few points to solve for, it
       is probably cheaper to find "A^-1" with inv, and just multiply
       "X = A^-1 B".  In fact, inv works by calling lu_backsub with the
       identity matrix.

       lu_backsub is ported from Section 2.3 of Numerical Recipes.  It is
       written in PDL but should probably be implemented in C.

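       Putting lu_decomp and lu_backsub together, a complete solve can be
       sketched as follows (a symmetric matrix is used here, so the
       row/column transpose convention does not matter):

```perl
use PDL;

my $a = pdl([[2,1],[1,3]]);           # coefficient matrix (symmetric)
my $b = pdl(5,10);                    # right-hand side, as a 1-D PDL

my ($lu, $perm, $par) = lu_decomp($a);
my $x = lu_backsub($lu, $perm, $b);   # solve $a x $x = $b

print $x, "\n";                       # solution: 2x+y=5, x+3y=10 => (1,3)
```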
   simq
       Signature: ([phys]a(n,n); [phys]b(n); [o,phys]x(n); int [o,phys]ips(n); int flag)

       Solution of simultaneous linear equations, "a x = b".

       $a is an "n x n" matrix (i.e., a vector of length "n*n"), stored
       row-wise: that is, "a(i,j) = a[ij]", where "ij = i*n + j".

       While this is the transpose of the normal column-wise storage, this
       corresponds to normal PDL usage.  The contents of matrix a may be
       altered (but may be required for subsequent calls with flag = -1).

       $b, $x, $ips are vectors of length "n".

       Set "flag=0" to solve.  Set "flag=-1" to do a new back substitution
       for a different $b vector using the same a matrix previously reduced
       when "flag=0" (the $ips vector generated in the previous solution is
       also required).

       See also lu_backsub, which does the same thing with a slightly less
       opaque interface.

   squaretotri
       Signature: (a(n,n); b(m))

       Convert a symmetric square matrix to triangular vector storage.

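       The triangular vector holds the n*(n+1)/2 independent elements of
       the symmetric matrix.  A sketch (the packing order of the triangle
       is left unspecified here; preallocating the output makes the size
       explicit):

```perl
use PDL;

my $a = pdl([[1,2],[2,3]]);   # symmetric 2x2
my $t = zeroes(3);            # n*(n+1)/2 = 3 elements for n = 2
squaretotri($a, $t);

print $t, "\n";               # the three independent elements: 1, 2, 3
```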
AUTHOR
       Copyright (C) 2002 Craig DeForest (deforest@boulder.swri.edu),
       R.J.R. Williams (rjrw@ast.leeds.ac.uk), Karl Glazebrook
       (kgb@aaoepp.aao.gov.au).  There is no warranty.  You are allowed to
       redistribute and/or modify this work under the same conditions as
       PDL itself.  If this file is separated from the PDL distribution,
       then the PDL copyright notice should be included in this file.



perl v5.8.8                       2006-12-02                      MatrixOps(3)